Zero Point Three times the Forcing

Guest Post by Willis Eschenbach

Now that my blood pressure has returned to normal after responding to Dr. Trenberth, I went back to thinking about my earlier, somewhat unsatisfying attempt to make a very simple emulation of the GISS Model E (hereinafter GISSE) climate model. I described that attempt here; please see that post for the sources of the datasets used in this exercise.

After some reflection and investigation, I realized that the GISSE model treats all of the forcings equally … except volcanoes. For whatever reason, the GISSE climate model only gives the volcanic forcings about 40% of the weight of the rest of the forcings.

So I took the total forcings and reduced the volcanic forcing by 60%. Then it was easy, because nothing further was required. It turns out that the GISSE model temperature hindcast is simply this: the temperature change in degrees C is 30% of the adjusted forcing change in watts per square metre (W/m2). Figure 1 shows that result:

Figure 1. GISSE climate model hindcast temperatures, compared with temperatures hindcast using the formula ∆T = 0.3 ∆Q, where ∆T is the change in temperature (°C) and ∆Q is the change in the same forcings used by the GISSE model (W/m2), with the volcanic forcing reduced by 60%.
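In code form the emulation really is that trivial. Here is a minimal Python sketch of it; the array names are placeholders, and the loading of the GISS forcing and hindcast files (see the earlier post for the sources) is left out.

```python
import numpy as np

# Placeholder arrays: annual total GISS forcing, the volcanic (stratospheric
# aerosol) component, and the Model E hindcast, all covering the same years.
def emulate_gisse(total_forcing, volcanic_forcing, slope=0.3, volc_weight=0.4):
    """Emulate the GISSE hindcast as 0.3 times the adjusted forcing.

    The volcanic forcing is reduced to about 40% of its published value
    (i.e. cut by 60%) before the single multiplication.
    """
    adjusted = total_forcing - (1.0 - volc_weight) * volcanic_forcing
    return slope * adjusted

# Toy numbers only, to show the shape of the calculation:
total = np.array([0.2, 0.4, -1.8, 0.6])       # W/m2; the third year has a volcano
volcanic = np.array([0.0, 0.0, -2.2, 0.0])    # W/m2; volcanic component alone
print(emulate_gisse(total, volcanic))          # emulated temperature anomaly, deg C
```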

What are the implications of this curious finding?

First, a necessary detour into black boxes. For the purpose of this exercise, I have treated the GISS-E model as a black box, for which I know only the inputs (forcings) and outputs (hindcast temperatures). It’s like a detective game, trying to emulate what’s happening inside the GISSE black box without being able to see inside.

The resulting emulation can’t tell us what actually is happening inside the black box. For example, the black box may take the input, divide it by four, and then multiply the result by eight and output that number.

Looking at this from the outside of the black box, what we see is that if we input the number 2, the black box outputs the number 4. We input 3 and get 6, we input 5 and we get 10, and so on. So we conclude that the black box multiplies the input by 2.

Of course, the black box is not actually multiplying the input by 2. It is dividing by 4 and multiplying by 8. But from outside the black box that doesn’t matter. It is effectively multiplying the input by 2. We cannot use the emulation to say what is actually happening inside the black box. But we can say that the black box is functionally equivalent to a black box that multiplies by two. The functional equivalence means that we can replace one black box with the other because they give the same result. It also allows us to discover and state what the first black box is effectively doing. Not what it is actually doing, but what it is effectively doing. I will return to this idea of functional equivalence shortly.
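Here is a toy illustration of that functional equivalence, nothing to do with the climate model itself: the two "black boxes" below are built differently inside but give identical outputs, so from the outside they cannot be told apart.

```python
def box_a(x):
    # Divides by four, then multiplies by eight.
    return (x / 4) * 8

def box_b(x):
    # Simply multiplies by two.
    return x * 2

# From the outside the two boxes are indistinguishable:
for x in (2, 3, 5):
    assert box_a(x) == box_b(x)
    print(x, "->", box_b(x))
```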

METHODS

Let me describe what I have done to get to the conclusions in Figure 1. First, I did a multiple linear regression using all the forcings, to see if the GISSE temperature hindcast could be expressed as a linear combination of the forcing inputs. It can, with an r^2 of 0.95. That’s a good fit.

However, that result is almost certainly subject to “overfitting”, because there are ten individual forcings that make up the total. With that many free parameters you can match almost anything, so the good fit by itself doesn’t mean a lot.
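For the curious, here is a sketch of that first regression step in Python. It assumes the ten forcing series and the GISSE hindcast have already been loaded into arrays; the names and the data handling are placeholders, not the actual workflow.

```python
import numpy as np

def regression_r2(forcings, t_gisse):
    """Ordinary least squares of the GISSE hindcast on the ten forcing columns.

    forcings: (n_years, 10) array of individual forcings (W/m2)
    t_gisse:  (n_years,) GISSE hindcast temperature anomaly (deg C)
    Returns the r^2 of the fitted linear combination.
    """
    X = np.column_stack([np.ones(len(t_gisse)), forcings])  # add an intercept
    coefs, *_ = np.linalg.lstsq(X, t_gisse, rcond=None)
    resid = t_gisse - X @ coefs
    return 1.0 - np.sum(resid ** 2) / np.sum((t_gisse - t_gisse.mean()) ** 2)
```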

I looked further, and I saw that the total forcing versus temperature match was excellent except for one forcing — the volcanoes. Experimentation showed that the GISSE climate model is underweighting the volcanic forcings by about 60% from the original value, while the rest of the forcings are given full value.

Then I used the total GISS forcing with the appropriately reduced volcanic contribution, and we have the result shown in Figure 1. Temperature change is 30% of the change in the adjusted forcing. Simple as that. It’s a really, really short methods section because what the GISSE model is effectively doing is really, really simple.
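One way to arrive at those two numbers, offered as a sketch rather than as the exact procedure used here, is a simple grid search: try a range of volcanic weights, fit a single slope of hindcast temperature on the adjusted forcing at each one, and keep the pair with the smallest error. Again the variable names are placeholders.

```python
import numpy as np

def fit_volcanic_weight(total, volcanic, t_gisse, weights=np.linspace(0.0, 1.0, 101)):
    """Grid-search the fraction of volcanic forcing retained, fitting a single
    slope of hindcast temperature on the adjusted forcing at each step."""
    best = None
    for w in weights:
        q_adj = total - (1.0 - w) * volcanic                    # keep fraction w of the volcanoes
        slope = np.dot(q_adj, t_gisse) / np.dot(q_adj, q_adj)   # least-squares slope, no intercept
        rmse = np.sqrt(np.mean((t_gisse - slope * q_adj) ** 2))
        if best is None or rmse < best["rmse"]:
            best = {"volc_weight": w, "slope": slope, "rmse": rmse}
    return best  # for the data described above, expect roughly 0.4, 0.3 and ~0.05 deg C
```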

DISCUSSION

Now, what are (and aren’t) the implications of this interesting finding? What does it mean that, to within five hundredths of a degree (0.05°C RMS error), the GISSE model black box is functionally equivalent to a black box that simply multiplies the adjusted forcing by 0.3 to get the temperature?

My first implication would have to be that the almost unbelievable complexity of the Model E, with thousands of gridcells and dozens of atmospheric and oceanic levels simulated, and ice and land and lakes and everything else, all of that complexity masks a correspondingly almost unbelievable simplicity. The modellers really weren’t kidding when they said everything else averages out and all that’s left is radiation and temperature. I don’t think the climate works that way … but their model certainly does.

The second implication is an odd one, and quite important. Consider the fact that their temperature change hindcast (in degrees) is simply 0.3 times the forcing change (in watts per square metre). But that is also a statement of the climate sensitivity, 0.3 degrees per W/m2. Converting this to degrees of warming for a doubling of CO2 gives us (0.3°C per W/m2) times (3.7 W/m2 per doubling of CO2), which yields a climate sensitivity of 1.1°C for a doubling of CO2. This is far below the canonical value given by the GISSE modelers, which is about 0.8°C per W/m2 or about 3°C per doubling.
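The conversion is simple arithmetic, spelled out below with the numbers from the text above:

```python
slope_emulated = 0.3    # deg C per W/m2, from the emulation
slope_canonical = 0.8   # deg C per W/m2, the GISS figure quoted above
forcing_2xco2 = 3.7     # W/m2 per doubling of CO2

print(slope_emulated * forcing_2xco2)    # about 1.1 deg C per doubling
print(slope_canonical * forcing_2xco2)   # about 3.0 deg C per doubling
```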

The third implication is that there appears to be surprisingly little lag in their system. I can improve the fit of the above model slightly by adding a lag term based on the change in forcing with time d(Q)/dt. But that only improves the r^2 to 0.95, mainly by clipping the peaks of the volcanic excursions (temperature drops in e.g. 1885, 1964). A more complex lag expression could probably improve that, but with the initial expression having an r^2 of 0.92, that only leaves 0.08 of room for improvement, and some of that is surely random noise.
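For completeness, here is a sketch of that lag test, under the same assumptions about the data as before; it is one simple way to do it, not necessarily the exact formulation used.

```python
import numpy as np

def r2_with_and_without_lag(q_adj, t_gisse):
    """Compare r^2 of T ~ Q against T ~ Q + dQ/dt (a simple one-term lag)."""
    dq = np.gradient(q_adj)  # year-to-year change in the adjusted forcing

    def r2(X):
        coefs, *_ = np.linalg.lstsq(X, t_gisse, rcond=None)
        resid = t_gisse - X @ coefs
        return 1.0 - np.sum(resid ** 2) / np.sum((t_gisse - t_gisse.mean()) ** 2)

    return r2(q_adj[:, None]), r2(np.column_stack([q_adj, dq]))
```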

The fourth implication is that the model slavishly follows the radiative forcings. The model results are a 5-run average, so it is not clear how far an individual model run might stray from the fold. But since the five runs’ temperatures average out so close to 0.3 times the forcings, no individual one of them can be very far from the forcings.

Anyhow, that’s what I get out of the exercise. Further inferences, questions, objections, influences and expansions welcomed, politeness roolz, and please, no speculation about motives. Motives don’t matter.

w.

 


165 Comments
Dr Chaos
January 17, 2011 6:08 am

In a highly non-linear and almost certainly chaotic system like the earth’s atmosphere, this orthodox climate theory can only be seen as baloney. It’s utterly preposterous (and I know next to nothing about climatology!).
How do these guys get away with it? Extraordinary.

pettyfog
January 17, 2011 6:12 am

It seems to me this negation of forcings has all been hinted at before. However, I’m a little stuck on the volcanic forcing.
It seems to me from what you are saying that, at this point in time if there’s another Pinatubo, we’re screwed.

Baa Humbug
January 17, 2011 6:35 am

I’m trying to gather the implications of this. Let me try a summary.
(Going back to fig. 5 in the earlier post “Model Charged With Excessive…”) HERE
Seems to me all non-GHG forcings except aerosols and volcanoes are pretty much flat from 1880 thru to 2000. These can effectively be left out of the models.
Well mixed GHGs increase temperatures, however these temperatures don’t match observed data, so they are modulated by aerosols. Primarily by reflective tropospheric aerosols plus aerosol indirect effects, and periodically by volcanic aerosols.
Considering our knowledge of aerosol effects are so limited (I’d even go so far as to say guessed at), these models are EITHER totally worthless for future predictions, OR the much vaunted IPCC sensitivity is really only 1.1DegC per doubling INCLUDING ALL FEEDBACKS.
Surely the modellers are aware of this? if so, they’ve been remarkably silent about it.
Am I close to the mark? Maybe Dr Lacis can set us straight.

Mike Haseler
January 17, 2011 6:40 am

Richard S Courtney says:
“But that is also a statement of the climate sensitivity, 0.3 degrees per W/m2. Converting this to degrees of warming for a doubling of CO2 gives us (0.3°C per W/m2) times (3.7 W/m2 per doubling of CO2), which yields a climate sensitivity of 1.1°C for a doubling of CO2. This is far below the canonical value given by the GISSE modelers, which is about 0.8°C per W/m2 or about 3°C per doubling.”
So the model has a climate sensitivity of 0.3 degrees per W/m2 for all the inputs that have not been tested, apart from the only one that has been tested which at 40% is 0.12C. per W/M2
Wouldn’t the sensible thing be to assume that as the only value which has been validated in any way is the volcano forcing from Pinatubo’s eruption, that the best estimate of the others is the same value.
On this basis isn’t the best estimate based on the climategate team’s own model, a prediction that expected warming is 0.44C for a doubling of CO2? Or am I missing something?

January 17, 2011 6:44 am

Climate model predictions just slavishly follow the CO2 curve, here and there patching it with volcanos or ad-hoc aerosols.
http://i55.tinypic.com/14mf04i.jpg

January 17, 2011 6:46 am

I went and had a look at your referenced post after reading this. This is very interesting. What I find remarkable is that when looking at the forcings used (see your other post) the only ones that look like they have been measured/modelled are:
Strat-Aer
Solar
W-M_GHGS
StratH2o
All the others look to me like fudge factors: the series shapes are identical, they all have essentially a linear form with slope changes at the same points, e.g. circa 1950 and a second slope change circa 1990. These other forcings are all flat after 1990:
SnowAlb
O3
BC
ReflAer
LandUse
and even earlier in the case of LandUse and O3 which appear flat from 1980.
It is quite surprising to me that even with modern satellite data and all the money spent on Global Warming research at an institute called NASA, we cannot map/measure changes due to SnowAlb, BC, and Landuse?
The second rather remarkable observation is that the largest forcings in order of magnitude (ignoring sign) appear to be:
Positive: W-M_GHG
Negative: StratAer (ie volcanos, but these have to be used at 40%)
Negative: ReflAer
Negative: AIE
Everything else appears to have very little influence, including that big yellow thing in the sky. Without W-M_GHG in the above mix we would be heading for an ice age. But how many of the factors in the list of forcings are considered to be influenced by man? I assume BC is, as is LandUse, and that StratAer (volcanoes) is not. What about ReflAer and AIE and the others? The reason for this question is: what would happen to the model if you removed all the (implied) human influences and looked at just the natural response? How many natural forcings are actually included in the model? Enough to actually model the climate system?

Pamela Gray
January 17, 2011 6:52 am

While I understand the purpose of your experiment, to discover how the black box might work, the entire exercise (theirs and yours) may be reporting a false positive, beginning with the raw data source, averaged and smoothed as it is to show a single global temperature. That we can’t model the single raw data number without an added anthropogenic variable such as increasing CO2 is not evidence that this is the major forcing. Adding data together, smoothing it, and dealing with an always-changing sample temperature set may be contaminating the whole thing. If there ever was an experimental design (taking a smoothed anomaly and trying to model it) potentially overwhelmed with basic design errors, this would be it. The samples are not controlled, the temperature anomaly data is combined without reason, and their model is a swag. Your endeavor could be confirming this.
The raw data needs to be regionalized. Example: an Arctic belt, northern hemispheric belt, tropical belt, southern hemispheric belt, and Antarctic belt. The seasons should stay intact and the air temperature should be actual, not anomalized. This will approximate the areas affected by natural weather pattern variation drivers. This data then needs to be compared with data measuring easterlies, westerlies, SST anomalies within the belts, warm pool migration patterns, and pressure systems, each separately and also in various combinations. The null hypothesis is that there will not be a significant match between observed temperature and observed weather pattern variation drivers. My mind experiment tells me there will be a match on a region by region basis. If we find one, and then discover that we don’t see it with the combined data, that would indicate a statistical contamination brought on by the artificial process used to combine the sets into a single averaged number.
The next step is to model these weather pattern variation parameters to see if one can output something similar to the temperature, region by region. This needs to be done way before anthropogenic CO2 is considered as an additional parameter.
The beauty of this design is that we don’t need a long run of temperature data. The satellite period will do nicely. The fun part will be to reasonably adjust the weather pattern change parameters to see what you can get, temperature wise. We might be able to develop reasonable explanations for mini ice ages.
This is to say, I just don’t see the value in studying bad boxes of any color.

tonyb
Editor
January 17, 2011 7:06 am

Lazy Teenager said
“One school of thought I have noticed around here is that the models are so incredibly complex that they allow the output to be fiddled by hiding cheats amongst the complexity.”
I am more interested in the input. Would you say that wildly inaccurate- or missing- data relating to sea surface temperatures (for example) should then be considered accurate once they have been through the modelling mangle, or are they still the nonsense they started off as?
tonyb

harrywr2
January 17, 2011 7:09 am

It seems to me Hansen admits that the atmospheric response up to the year 2000 was ‘less than we thought’ but falls back on the idea that a big chunk of the heat is being stored in the oceans.

dp
January 17, 2011 7:10 am

For this you need computers costing billions? No wonder Piers Corbyn is so effective.

January 17, 2011 8:00 am

I have worked with geophysical instruments and geophysicists in mineral exploration for over 40 years. The black box is a standard joke. We have all tested dozens of them and, just like the GISSE model, they go through dozens of gyrations that do nothing more complex than move the slide and cross-hair on my slide rule.

Brian Buerke
January 17, 2011 8:04 am

It’s no surprise that their model gives a climate sensitivity of 0.3. That just means it’s consistent with thermodynamics and the condition that Earth acts as a blackbody.
The sensitivity of any blackbody is ΔT = (T/4Q) ΔQ.
The measured parameters of Earth are T = 287 K and Q = 239 W/m^2, the latter being the absorbed solar radiation. Plugging these in gives T/4Q = 0.3.
The only real mystery is why climate scientists keep insisting that the water vapor feedback will raise the sensitivity by a factor of 3. That expectation does not seem consistent with the physical requirement that Earth act as a blackbody.
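As a quick check of that figure: for a blackbody Q = σT⁴, so dT/dQ = T/(4Q), and plugging in the quoted values does give roughly 0.3, as this short Python sketch shows.

```python
T = 287.0   # K, quoted global mean surface temperature
Q = 239.0   # W/m2, quoted absorbed solar radiation

# For a blackbody Q = sigma * T^4, so dT/dQ = T / (4 * Q).
print(round(T / (4.0 * Q), 2))   # ~0.3 deg per W/m2
```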

Feet2theFire
January 17, 2011 8:19 am

Last year I read about the very early days of climate modeling. (Sorry, I can’t recall the source.) There was one variable that at one point would make the results increase off the chart and they didn’t know what to do about it, because it seemed to be the right values and code. They got in some Japanese whiz and he just fudged the code until it stopped doing that. His code had nothing to do with reality – only to make the curve behave itself. From what I recall reading, they took exception to what he did, but were also relieved that the wild acceleration was tamed.
They ended up accepting it and keeping it in the code – at least for some time after that. It may still be there.
This is the exact opposite of science. Yes, you compare the results with reality. But you don’t just throw in whatever code makes the curve do what you want it to do, not if it is not the mathematical statement of the reality as best you can come up with.

January 17, 2011 8:36 am

Dear Willis,
Nice discovery! Some others have found this too and wrote an article about it, but it wasn’t published because of resistance from the peer reviewers. See comment #60 at RC from a few years ago:
http://www.realclimate.org/index.php/archives/2005/12/naturally-trendy/comment-page-2/#comments. Unfortunately his analysis was removed from the net.
Moreover, the 60% reduction in (sensitivity to) the forcing from volcanoes is quite remarkable and sounds like an ad hoc adjustment to fit the temperature curve.
But if they reduce the sensitivity for volcanoes, they need to reduce the sensitivity for human aerosols too: the effect on reflection of incoming sunlight is the same for volcanic SO2 as for human SO2. The difference is mainly in the residence time: human aerosols last on average about 4 days before being rained out, while volcanic aerosols may last several years before dropping out of the stratosphere.
But the models need the human aerosols with a huge influence, or they can’t explain the 1945-1975 cool(er) period. The influence of aerosols and the sensitivity for GHGs is balanced: a huge influence of aerosols means a huge influence of GHGs and vv. See RC of some years ago:
http://www.realclimate.org/index.php/archives/2005/07/climate-sensitivity-and-aerosol-forcings/
with my comment at #14.
BTW, splendid response to Dr. Trenberth!

Rex
January 17, 2011 8:38 am

Following on from Pamela Gray’s comment: I know little about climatology, but I have been a Market Research practitioner for forty years and have some knowledge of means, statistical congruences, and what have you. I have been appalled for decades at the bilgewater emanating from certain scientific communities (mainly medical ones), attempting to prove ‘links’ etc. and assuming cause-and-effect relationships when there are none; most statistically significant linkages assumed to be meaningful are actually happenstance.
Which leads to my question: does this single-figure “mean global annual temperature” have any connection at all with “the real world”, or is it just a statistical artifact which disguises significant regional and temporal variations?
(Apart from all the fiddling that goes on.)

Warren in Minnesota
January 17, 2011 8:58 am

What bothers me most is that someone, as in you, Willis, has to try to reverse-engineer and determine what the model is doing. It would be much easier and better if the modelers would simply make their work public.

JFD
January 17, 2011 9:09 am

Willis, you are good, really good; thanks for what you do, mon ami. It seems to me that the GISS folks have misused superposition in their model. My understanding of superposition is that one must fully understand each of the variables inside the black box. The 10 variables used by GISS are not known too well, as pointed out by ThinkingScientist above at 6:46 am. Even partial knowledge of the 10 may still be okay, but what about the variables that GISS have not included? By not including all of the variables that impact climate in the black box, GISS have forced all of the missing forcings to be de facto included in the greenhouse gases variable, i.e. carbon dioxide, in the look-back curve fit. This may result in a hindcast curve fit, since superposition allows variables, i.e. forcings, to be added together, but it in no way allows a forecast to be made with the model. Since carbon dioxide in the atmosphere is increasing, the model will always show warming.
What variables are missing from the black box: ocean currents exchanging heat between the equatorial areas and the polar areas, impact of sun variations on ocean currents, variations of ocean current locations, produced ground water from slow to recharge aquifers, water aerosols from evaporative cooling towers, impact of new water and aerosols added to atmosphere on clouds, decreasing humidity in the Troposphere since 1948, perhaps the mathematical sign of clouds impacts and probably several others.
By using superposition without having all of the variables that impact the climate in the black box, the GISS model overstates the impact of carbon dioxide by the summation of all of the missing forcings.

Richard S Courtney
January 17, 2011 9:20 am

Mike Haseler, Lance Wallace, Baa Humbug, and Ferdinand Engelbeen:
At January 17, 2011 at 6:40 am Lance Wallace asks me:
“So the model has a climate sensitivity of 0.3 degrees per W/m2 for all the inputs that have not been tested, apart from the only one that has been tested which at 40% is 0.12C. per W/M2
Wouldn’t the sensible thing be to assume that as the only value which has been validated in any way is the volcano forcing from Pinatubo’s eruption, that the best estimate of the others is the same value.
On this basis isn’t the best estimate based on the climategate team’s own model, a prediction that expected warming is 0.44C for a doubling of CO2?”
I answer, on the basis of Willis Eschenbach’s analysis, the answer is, yes.
And that is why I asked Willis (above at January 17, 2011 at 5:04 am ):
“Please write it up for publication in a journal so there can be no justifiable reason for the next IPCC Report to ignore it.”
And it is why I eagerly await Willis’ answer to the posts from Lance Wallace at January 17, 2011 at 4:55 am and at January 17, 2011 at 5:20 am.
This matter is far, far too important for it to have any doubt attached to its formal presentation.
Baa Humbug says (at January 17, 2011 at 6:35 am):
“Well mixed GHGs increase temperatures, however these temperatures don’t match observed data, so they are modulated by aerosols. Primarily by reflective tropospheric aerosols plus aerosol indirect effects, and periodically by volcanic aerosols.
Considering our knowledge of aerosol effects are so limited (I’d even go so far as to say guessed at), these models are EITHER totally worthless for future predictions, OR the much vaunted IPCC sensitivity is really only 1.1DegC per doubling INCLUDING ALL FEEDBACKS.”
Indeed, that is so as I have repeatedly explained in several places including WUWT, and Ferdinand Engelbeen makes the same argument again at January 17, 2011 at 8:36 am where he writes:
“But the models need the human aerosols with a huge influence, or they can’t explain the 1945-1975 cool(er) period. The influence of aerosols and the sensitivity for GHGs is balanced: a huge influence of aerosols means a huge influence of GHGs and vv. See RC of some years ago:
http://www.realclimate.org/index.php/archives/2005/07/climate-sensitivity-and-aerosol-forcings/
with my comment at #14.”
This, again, is why Willis’ analysis is so very important that it needs solidifying such that points similar to those of Lance Wallace are addressed and then it needs to be published in a form that prevents the IPCC from merely ignoring it without challenge.
Richard

Ron Cram
January 17, 2011 9:27 am

Willis,
Another excellent post. Model E is a black box because ALL of the documentation normally required and normally provided is not available. This failure to provide documentation is another example of a failure to archive and/or release data and methods required by the scientific method.

January 17, 2011 9:29 am

“a doubling of CO2 gives us (0.3°C per W/m2) times (3.7 W/m2 per doubling of CO2), which yields a climate sensitivity of 1.1°C for a doubling of CO2. This is far below the canonical value given by the GISSE modelers, which is about 0.8°C per W/m2 or about 3°C per doubling.”
In the model there must be a water vapour feedback that increases as the total CO2 increases, a value specific to the higher concentration. Although this contradicts the concept that the forcing per increment of CO2 decreases as the atmosphere saturates and the IR is all used up, it is what would make sense to get an increased sensitivity by 780 ppm CO2.

Paddy
January 17, 2011 9:54 am

C1UE: You said: “I would just note, however, that modeling isn’t a case of all right or all wrong.”
Isn’t a little bit correct or a little bit wrong the same as being a little bit pregnant?

Roy Clark
January 17, 2011 10:19 am

Thank you Willis for an excellent post.
The whole concept of radiative forcing is empirical pseudoscience, or climate astrology. The fundamental assumption is that long term averages of ‘surface temperature’ and ‘radiative flux’ are somehow in equilibrium and can be analyzed using perturbation theory to ‘predict’ a ‘forcing’ from an increase in atmospheric CO2 concentration and other ‘greenhouse gases’ and ‘aerosols’.
There is no long term record of the real surface temperature, meaning the temperature of the ground under our bare feet. Instead, the meteorological surface air temperature (MSAT) record has been substituted for the real surface temperature. The MSAT is the air temperature in an enclosure at eye level, 1.5 to 2 m above the ground. This follows the ocean temperatures and local heat island effects, and has been ‘adjusted’ [upwards] a few too many times.
Radiative forcing appears to be an elegant mathematical theory, but it is incapable of predicting its way out of a wet paper bag.
Independent (and reliable) radiative transfer calculations based on HITRAN show that a 100 ppm increase in atmospheric CO2 concentration from 280 to 380 ppm produces an increase of about 1.7 W.m-2 in the downward atmospheric LWIR flux. The hockey stick – yes, the hockey stick – shows an increase of 1 C in the MSAT average during the time the CO2 concentration increased. Therefore, a 1 W.m-2 increase in downward LWIR flux from any greenhouse gas must produce a 0.67 C [1/1.7] increase in ‘average surface temperature’ by Royal Decree from the Climate Gods of Radiative Forcing. This is the basis of the radiative forcing constants used in the IPCC models – climate astrology. The forcing constants for the IR gases are fixed by the spectroscopic properties of those gases and the CO2 ‘calibration constant’, so the only fix left is to manipulate the aerosol forcing, which is a reduction in solar illumination, not an increase in LWIR flux.
Radiative forcing was introduced into climate analysis in the mid 1960’s, before satellites and supercomputers and should have been rejected as invalid soon after. Instead, it has become enshrined as one of the major pillars of the global warming altar.
The underlying issue is that the change in surface flux from CO2 has to be added to the total surface flux BEFORE the surface temperature is calculated. The solar flux is zero at night and up to 1000 W.m-2 during the day. The net LWIR flux varies from 0 to 100 W.m-2 at night and up to ~250 W.m-2 during the day. About 80% of the surface cooling flux during the day is moist convection, so there is no surface temperature radiative equilibrium on any time scale. The heat capacity and thermal properties of the surface have to be included as well, and the latent heat …
Willis has done a good job in showing that the radiative forcing is used empirically to ‘fix’ the surface temperature. There is no physics involved, just a little empirical pseudoscience and some meaningless mathematical manipulations of the flux equations.
Reality is that there is no climate sensitivity to CO2. A doubling of the CO2 concentration will have no effect on the Earth’s climate. How many angels can fit on the head of a pin when the CO2 concentration is doubled?
Time for Trenberth, Hansen, Solomon etc. to explain their climate fraud to a Federal Grand Jury. As taxpayers we should get our money back.
For more on surface temperature see:
http://hidethedecline.eu/pages/posts/what-surface-temperature-is-your-model-really-predicting-190.php

Bob Koss
January 17, 2011 10:38 am

Mount Hudson in Patagonia erupted two months after Pinatubo. It didn’t get noticed much due to its remote location, but it was about the same size as Mount St. Helens in the early 80’s and had the same VEI 5 rating. Pinatubo was a VEI 6.
That leads to wondering how the models handle multiple eruptions close in time, but separated widely by distance.

NicL_UK
January 17, 2011 11:12 am

Ferdinand Engelbeen commented that:
“Some others have found this too and wrote an article about it, but it wasn’t published because of resistance from the peer reviewers. … Unfortunately his analysis was removed from the net.”
A version of the paper referred to, “A statistical evaluation of GCMs: Modeling the Temporal Relation between Radiative Forcing and Global Surface Temperature” by Kaufmann and Stern, is available at:
http://replay.waybackmachine.org/20070203081607/http://www.bu.edu/cees/people/faculty/kaufmann/documents/Model-temporal-relation.pdf

January 17, 2011 11:20 am

Bob Koss says:
January 17, 2011 at 10:38 am
That leads to wondering how the models handle multiple eruptions close in time, but separated widely by distance.
It depends on how much of the aerosols reach the stratosphere. The Mt. St. Helens blast was mostly sideways, and not much reached the stratosphere. Mount Hudson injected more directly into the stratosphere and added about 10% as much SO2 as Pinatubo (the VEI index is a logarithmic scale). This was probably added to the total volcanic aerosol load of the stratosphere in the models.
The distance to the equator matters somewhat, but the bulk of Pinatubo’s SO2/aerosols spread all over the stratosphere within weeks, so the distance between the volcanoes doesn’t matter much.
