From Dr Judith Curry’s Climate Etc.
Posted on December 27, 2019 by curryja
An alternative perspective on 3 degrees C?
This post was originally intended as a short comment questioning certain aspects of the methodology in JC’s post of December 23, “3 degrees C?”. But every methodology is bound to have shortcomings, raising the possibility that Judith’s methodology might nevertheless be the best possible, those shortcomings notwithstanding. I was finding my arguments for a better methodology getting too long for a mere comment, whence this post. (But if actual code is more to your fancy than long-winded natural-language explanations, Figures 1 and 2a can be plotted with only 31 MATLAB commands.)
Judith’s starting point is “It is far simpler to bypass the attribution issues of 20th century warming, and start with an early 21st century baseline period — I suggest 2000-2014, between the two large El Nino events.” The tacit premise here would appear to be that those “attribution issues of 20th century warming” are harder to analyze than their 21st century counterparts.
The main basis for this premise seems to be the rate of climb of atmospheric CO2 this century. This is clearly much higher than in the 20th century and therefore should improve the signal-to-noise ratio when the signal is understood as the influence of CO2 and the noise consists of those pesky “attribution issues”. Having used this SNR argument myself in this forum a few years ago, I can appreciate its logic.
Judith also claimed that “The public looks at the 3 C number and thinks it is 3 C more warming from NOW, not since the late 19th century. Warming from NOW is what people care about.” Having seen no evidence either for this or its contrary, I propose clarifying any such forecast by simply prepending “more” to “degrees” (as in my title) and following Judith’s suggestion to subtract 1, or something more or less equivalent.
Proposal
So what would be an “obviously alternative” methodology? Well, the most extreme alternative I can think of to 15 years of data would be to take the whole 168 years of global annual HadCRUT4 to 2017.
The data for 1850-1900 is certainly sparser than what follows. Noting that sparseness, however, does not tell us the extent to which it compromises the final analysis. By including that data instead of dismissing it out of hand, we have a better chance of understanding that extent.
Besides increasing by an order of magnitude the duration of the data constraining the priors, another modification we can make is to the target. Instead of taking the goal to be estimating climate for 2100, perhaps plus or minus a few years, I suggest estimating an average, suitably weighted, over the 75 years 2063-2137.
This widening of the window has the effect of trading off precision in time for precision in temperature, as a sort of climate counterpart to the uncertainty principle in quantum mechanics. More generally this is a tradeoff universal to the statistics of time series: the variance of the estimate of the mean tends to be inversely proportional to the sample length.
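For the simplest case of n independent samples this is the familiar var(mean) = σ²/n. A quick illustration in MATLAB (the window lengths are mine, chosen to match the discussion, not numbers from the post):

    % Variance of the sample mean of n iid samples with variance sigma^2
    sigma = 1;
    n = [15 75 168];    % short baseline, proposed window, full HadCRUT4 record
    sigma^2 ./ n        % ans = 0.0667  0.0133  0.0060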
This wide a window has the further benefit of averaging out much of the bothersome Atlantic Multidecadal Oscillation. And its considerable width also averages out all the faster periodic and quasiperiodic contributors to global land-sea surface temperature such as ENSO, the 11-year solar cycle, the 21-year magnetic Hale cycle, the ongoing pulses from typical volcanoes, etc.
But of what use is a prediction for 2063-2137 if we can’t use it to predict say the extent of sea ice in the year 2100? Well, if we can show at least that the average over that or any other period is highly likely to lie within a certain range, it becomes reasonable to infer that roughly half the years in that period are lower and half are higher. So even though we can’t say which years those would be, we can at least expect some colder years and some warmer years, relative to the average over that period. Those warmer years would then be the ones of greatest concern.
A 75-year moving average of HadCRUT4 would be the straightforward thing to do. Instead I propose applying two moving averages consecutively (the order is immaterial), of respectively 11 and 65 years, and then centering. This is numerically equivalent to a wavelet transform that convolves HadCRUT4 with a symmetric trapezoidal wavelet of width 75 years at the bottom and 55 years at the top. The description in terms of the composition of two moving averages makes it clearer that this particular wavelet is targeting the AMO and the solar cycle for near-complete removal. After much experimenting with alternative purpose-designed convolution kernels as wavelets I settled on this one as offering a satisfactory tradeoff between simplicity of description, effectiveness of overall noise suppression, transparency of purpose, and width—a finite impulse response filter much wider than 75 years doesn’t leave much signal when there’s only 170 years of data. Call climate thus filtered centennial climate.
The point of centering is to align plots vertically, without which they may find themselves uselessly far apart. The centering function we use is c(d) = d – mean(d). This function merely subtracts the mean of the data d from d itself in order to make mean(c(d)) = 0. Hence c(c(d)) = c(d) (c is idempotent).
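For the code-minded, here is a minimal sketch of that filter plus centering. It is not the curry.m script itself; T is assumed to be a 168×1 vector of annual HadCRUT4 anomalies for 1850-2017.

    % Centennial filter: compose 11-yr and 65-yr moving averages, then center.
    box11 = ones(11,1)/11;           % 11-year boxcar, targets the solar cycle
    box65 = ones(65,1)/65;           % 65-year boxcar, targets the AMO
    trap  = conv(box11, box65);      % 75-point trapezoidal kernel (55 across the top)
    F     = conv(T, trap, 'valid');  % filtered series; 37 years lost at each end
    F     = F - mean(F);             % centering: c(d) = d - mean(d)
    plot(1887:1980, F)               % the 94 surviving window centers

Composing the two boxcars as a single convolution kernel is numerically the same as applying the two moving averages consecutively, which is why the order is immaterial.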
Lastly I propose 1.85 °C per doubling of CO2 as a proxy for HadCRUT4’s immediate transient climate response to all anthropogenic radiative forcings, ARFs, since 1850. The CO2 record behind this proxy is reconstructed from ice cores at the Law Dome site in the Australian Antarctic Territory up to 1960, and measured more directly at Charles Keeling’s CO2 observatory on Mauna Loa thereafter, giving the formula ARF = 1.85*log₂(CO2) for all anthropogenic radiative forcing. The proof is in the pudding: it seems to work.
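In the same sketch’s notation, the proxy itself is a one-liner (co2 being an assumed annual CO2 series in ppm, Law Dome spliced to Mauna Loa):

    ARF = 1.85 * log2(co2);  % anthropogenic radiative forcing proxy, in degrees C
    % Only the slope of 1.85 C per doubling matters; centering removes any offset.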
Results
Applying our centennial filter to HadCRUT4 yields the blue curve in Figure 1, while applying it to ARF (anthropogenic radiative forcing as estimated by our proxy) yields the red curve.
The two plots ostensibly covering the 94-year period 1887-1980 actually use data from the 168-year period 1850-2017; e.g. the datapoint at 1980 is a weighted average of data for the 75 years 1943-2017 while that at 1887 similarly averages 1850-1924. In this way all the data from 1850 to 2017 is used.
During 1951-1980 and 1895-1915 the two curves are essentially parallel, justifying the value 1.85 for recent and early transient climate response. But what of the relatively poor fit during 1915-1950?
Explaining early 20th century
We could explain the departure from parallel during 1910-1950 as simply an underestimate of TCR. However the distribution of CO2 absorption lines suggests that TCR should remain fairly constant over a wide range of CO2 levels. An explanation accommodating that point might be that the Sun was warming during the first half of the century.
To see if that makes sense we could plot the residual of the above figure against solar irradiance. While there used to be several reconstructions of total solar irradiance prior to satellite-based measurements, I’m only aware of two these days, due respectively to Greg Kopp (a frequent collaborator with Judith Lean) and Leif Svalgaard. Both are based on several centuries of sunspot data collected since Galileo started recording them, along with other proxies. The following comparison uses Kopp’s reconstruction.
It would appear that the departure from parallel in the middle of Figure 1 can be attributed almost entirely to solar forcing SF, defined as centennial solar sensitivity times absorbed solar irradiance ASI, itself a fixed fraction of the total solar irradiance TSI received at top of atmosphere (TOA). The albedo (taken here to be 0.3) is the part of TSI reflected back to space as shortwave radiation; the remaining 70% is the portion absorbed by Earth. This is then averaged over Earth’s surface, which at 4πr² is four times the cross section πr² intercepting the solar irradiance at TOA, whence the division by 4. That is, ASI = (1 – Albedo)*TSI/4. Lastly ASI (in W/m²) is converted to solar forcing SF (in °C) by multiplying by centennial solar sensitivity CSS (1.35 °C per W/m² as estimated from Kopp’s reconstruction).
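As a sketch of this conversion, with TSI an assumed annual reconstruction in W/m² (e.g. Kopp’s):

    albedo = 0.3;                     % fraction of TSI reflected back to space
    ASI    = (1 - albedo) * TSI / 4;  % absorbed irradiance, spread over 4*pi*r^2
    CSS    = 1.35;                    % centennial solar sensitivity, C per W/m^2
    SF     = CSS * ASI;               % solar forcing in C (centered before comparing)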
It is almost impossible to evaluate the goodness of this fit by looking just at Figure 1 and the red curve in Figure 2a. The residual (blue curve in 2a) needs to be plotted, and then juxtaposed with the red curve.
Any fit this good implies a high likelihood of four things.
- The figure of 1.85 for TCR holds not only on the right and left but in the middle as well.
- CO2 is a good proxy for all centennial anthropogenic radiative forcing including aerosols.
- The filter removes essentially every contribution to HadCRUT4 other than ARF and solar irradiance.
- The peak-to-peak influence on GMST of the evident 130-year oscillation in TSI is 0.07*5/3 = 0.12 °C. (The centennial filter attenuates the 130-year oscillation to 3/5 of its amplitude, compensated for by multiplying by 5/3 to estimate the actual amplitude.) Not only is the Sun not a big deal for climate, that 130-year oscillation makes its influence predictable several decades into the future.
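The 3/5 attenuation in the last bullet can be checked directly from the filter: the gain of an N-point moving average at frequency f cycles per year is sin(πfN)/(N·sin(πf)), and the centennial filter’s gain is the product of the 11-point and 65-point gains. A quick check, assuming the ~130-year period quoted above:

    f = 1/130;                              % the ~130-year TSI oscillation
    g = @(N) sin(pi*f*N) / (N*sin(pi*f));   % gain of an N-point moving average
    g(11) * g(65)                           % ans = 0.6293, i.e. about 3/5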
As a check on Kopp’s reconstruction we can carry out the same comparison based on Leif Svalgaard’s reconstruction, leaving TCR and the residual completely unchanged.
On the one hand Svalgaard’s reconstruction appears to have assigned weights to sunspots of only 70% of those of Kopp, requiring a significantly larger solar sensitivity (1.95) to bring it into agreement with the residual. On the other hand the standard deviation of the residual for Figure 2b (GMST – ARF – SF) is 2.3 mK while that for 2a is 3.7 mK, which is interesting.
Both fits are achieved with TCR fixed at 1.85. We were able to find a tiny improvement by using 1.84 for one and 1.86 for the other, but this reduced the standard deviations of the residuals for Figures 2a and 2b by only microkelvins, demonstrating the robustness of 1.85 °C per doubling of CO2 as an ARF proxy.
The MATLAB script producing figures 1 and 2a,b from data sourced solely from the web at every run is in the file curry.m at http://clim8.stanford.edu/MATLAB/ClimEtc/.
I would be very interested in any software providing comparably transparent and compelling evidence for a TCR substantially different from 1.85, based on the whole of 1850-2017, and independent of any estimates of AMO and other faster-moving “attribution issues”.
Projection to 2063-2137
Regarding “Is RCP8.5 an impossible scenario?”, I prefer to think of it as a highly unlikely scenario. Not because Big Oil is on the verge of exhausting its proven reserves, however, but because of its strange compound annual growth rate when computed in MATLAB as diff(rcp)./rcp(1:end-1)*100.
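For readers without the scenario file at hand, here is a sketch of that computation (rcp, a vector of annual RCP8.5 CO2 concentrations, and yr, a matching year vector, are both assumptions of this sketch):

    cagr = diff(rcp) ./ rcp(1:end-1) * 100;    % year-over-year growth, in percent
    plot(yr(2:end), cagr), ylabel('CAGR (%)')  % climbs, then reverses near 1.2%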
If that had been a stock market forecast one would suspect insider trading: something is going to happen around 2065 that will cause an abrupt reversal of climbing CAGR when it hits 1.2%, but the lips of the RCP8.5 community are sealed as to what it will be. Or perhaps 2060 is when their in-house psychologists are predicting a popular revolution against Big Oil.
Well, whatever. RCP8.5 is just too implausible to be believed.
Is any projection of rising CO2 plausible? Let me make an argument for the following projection.
Define anthropogenic CO2, ACO2, as excess atmospheric CO2 above 280 ppm. The following graph plots log₂(ACO2) since 1970. We can think of log₂(ACO2) as the number of doublings since ACO2 was 1. However the ±5 ppm variability in CO2 over its thousand-year history makes ACO2=1 a rather virtual notion.
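A sketch of that graph, assuming co2 is an annual CO2 record in ppm with a matching year vector yr:

    ACO2 = co2 - 280;       % anthropogenic excess over the preindustrial 280 ppm
    plot(yr, log2(ACO2))    % doublings of ACO2; a straight line means constant CAGR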
log₂(ACO2) was pretty straight during the past century, but has gotten even straighter this century. Its slope corresponds to a compound annual growth rate of ACO2 of just over 2%.
What could explain its increasing straightness?
One explanation might be that 2% is what the fossil fuel industry’s business model requires for its survival.
Will this continue?
The argument against is based on speculations about supply: the proven reserves can’t maintain 2% growth for much longer, the best efforts of the fossil fuel industry notwithstanding.
The argument for is based on speculations about demand: even if some customers stop buying fossil fuels for some reason, there will be no shortage of other customers willing to take their place, thereby maintaining sufficient demand to justify the oil companies spending whatever it takes to maintain proven reserves at the requisite level for good customer service, at least to the end of the present century. Proven reserves have been growing throughout the 20th century and on into this one, and speculation that this growth is about to end is just that: pure speculation with no basis in fact. The date for peak oil is receding into the future at about the same pace as the date for fusion energy break-even.
There is a really simple way to see which argument wins. Just keep monitoring log2(ACO2) while looking for a departure from the remarkably straight trend to date. Any significant departure would signal failure to continue and the argument against wins. But if by 2100 no such departure has been seen, the argument for wins, though few if any adults alive today will live to see it.
Today CO2 is at about 410 ppm, making ACO2 130 ppm. If the straight line continues, that is, if ACO2 continues to double every 34 years, two more doublings (multiplication of 130 by 4) bring the date to 2019 + 34*2 = 2087 and the CO2 level in 2087 to 130*4 + 280 = 800 ppm. Another 13 years is another factor of 2^(13/34) = 1.3, making the CO2 in 2100 130*4*1.3 + 280 = 956 ppm.
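As a worked check of that arithmetic (the 34-year doubling time being read off Figure 4):

    ACO2 = 410 - 280;                            % today's excess: 130 ppm
    co2_2087 = ACO2 * 2^((2087-2019)/34) + 280   % exactly two doublings: 800 ppm
    co2_2100 = ACO2 * 2^((2100-2019)/34) + 280   % ~956 ppm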
If the 1.85 °C per doubling of CO2 that has held up for 168 years continues for another 80 years, then we could expect a further rise in CO2 from today’s 410 ppm to 956 ppm to be accompanied by a rise in global mean surface temperature (land and sea together) of 1.85*log₂(956/410) = 2.26 °C.
This comes to an average of 2.26/8 = 0.28 °C (0.51 °F) per decade. That is merely an average over those 80 years: some decades will rise more than that, some less.
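And the corresponding warming, under the same 1.85 °C per doubling:

    dT = 1.85 * log2(956/410)   % ~2.26 C more warming by 2100
    dT / 8                      % ~0.28 C per decade, averaged over 8 decades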
But what if Figure 4 bends down sooner?
I have no idea. My confidence in what will happen if it remains straight is far greater than any confidence I could muster about the effect of it bending down.
For a more mathematical answer, bending down would break analyticity, and all bets would then be off.
A real- or complex-valued function on a given domain is said to be analytic when it is representable over its whole domain as a Taylor series that converges on that domain. In order for it to remain analytic on any extension of its domain it must continue to use the same Taylor series, which must furthermore remain convergent on that larger domain. Hence any analytic extension of an analytic function to a larger domain, if it exists, is uniquely determined by its Taylor series. This is the basis for the rich subject of analytic continuation. Functions like addition, multiplication, exponentiation, and their inverses (subtraction, division, logarithm) where defined, all preserve analyticity.
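For concreteness, the Taylor series in question, expanded about a point x₀, is

    f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!} \, (x - x_0)^n

and analyticity at x₀ means this series converges to f on a neighborhood of x₀. A straight line is the special case in which every coefficient beyond n = 1 vanishes.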
Figure 4’s curve is analytic when modeled as a straight line. This would no longer remain the case if it started to bend down significantly.
The essential contributors to centennial climate since 1850 look sufficiently like analytic functions as to raise concern when CO2 as its strongest contributor ceases to rise analytically. In particular drawdown by vegetation seems likely to respond analytically if we ignore the impact of land use changes governed by one of the planet’s more chaotic species.
So what does all this mathematical gobbledygook mean in practice? Well, it seems highly unlikely that the vegetable kingdom has been responding to rising CO2 anywhere near as fast as we have been able to raise it. While plants may well be trying to catch up with us, their contribution to drawdown is hardly likely to have kept pace.
But presumably their growth has been following CO2’s analytic growth according to some analytic function. The problem is that we know too little about that dependence to say what plants would do if our CO2 stopped growing analytically.
Le Chatelier’s principle on the other hand entails a sufficiently simple dependence that we can expect a decrease in CO2 to result in a matching decrease in drawdown attributable to chemical processes. The much greater complexity of plants is what makes their contribution the biggest unknown here. In particular if the vegetable kingdom continued to grow at only a little less than its present pace until CO2 was down to say 330 ppm, its increasing drawdown could greatly accelerate removal of CO2 from the atmosphere.
But this is only one possibility from a wide range of such possibilities.
On the assumption that Figure 4 stays straight through 2100, and Earth doesn’t get hit in the meantime by something much worse than anything since 1850 such as a supervolcano or asteroid, I feel pretty comfortable with my “Two more degrees” forecast for the 75 years 2063-2137.
But if it bends down I would not feel comfortable making any prediction at all given the above concerns. (I made essentially this point in column 4 of my poster at AGU2018 in Washington DC, “Sources of Variation in Climate Sensitivity Estimates”, http://clim8.stanford.edu/AGU/ .)
—————
Are amphipods being counted as “plants”?
—————
[Some commenter whose name I didn’t save:]
This recent observation that C14 (C14 created by the atomic bomb testing) is making it to the deepest ocean with no delay is an observational fact that disproves the absurdly non-physical so-called IPCC Bern model of CO2 sinks, sources, and residence times created by the CAGW team.
The Bern model assumes that ocean circulation (with hundreds of years of delay) is the only method for deep sequestration of CO2 in the ocean.
“The alleged long lifetime of 500 years for carbon diffusing to the deep ocean is of no relevance to the debate on the fate of anthropogenic CO2 and the “Greenhouse Effect”, because POC (particulate organic carbon; carbon pool of about 1000 giga-tonnes; some 130% of the atmospheric carbon pool) can sink to the bottom of the ocean in less than a year (Toggweiler, 1990).”
https://www.livescience.com/65466-bomb-carbon-deepest-ocean-trenches.html
============
‘Bomb Carbon’ from Cold War Nuclear Tests Found in the Ocean’s Deepest Trenches
Bottom feeders
Organic matter in the amphipods’ guts held carbon-14, but the carbon-14 levels in the amphipods’ bodies were much higher. Over time, a diet rich in carbon-14 likely flooded the amphipods’ tissues with bomb carbon, the scientists concluded.
Ocean circulation alone would take centuries to carry bomb carbon to the deep sea.
But thanks to the ocean food chain, bomb carbon arrived at the seafloor far sooner than expected, lead study author Ning Wang, a geochemist at the Chinese Academy of Sciences in Guangzhou, said in a statement.
Thank you, Roger Knights, for this information.
It appears that the quote (beginning “The alleged long lifetime of 500 years…”) is from:
Segalstad, Tom V. (1998). Carbon cycle modelling and the residence time of natural and anthropogenic atmospheric CO2: on the construction of the “Greenhouse Effect Global Warming” dogma. In Bate, R. (Ed.): “Global Warming: The Continuing Debate”, European Science and Environment Forum (ESEF), Cambridge, England (ISBN 0952773422), pages 184-219, 1998.
This is the (very interesting!) Toggweiler 1990 paper to which Segalstad referred:
Toggweiler, J R., 1990: Bombs and ocean carbon cycles. Nature, 347, 122-123.
I’m sorry, but while this analysis looks good it has one basic flaw. The same basic flaw almost all climate studies seem to have.
Most temperature readings in the 20th century, and especially back into the 1800s, simply do not provide more than a units digit of precision, plus/minus 0.5 deg. Yet it seems endemic in these types of studies to try to project this into a precision out to the tenth or hundredth of a degree.
You simply cannot average physical readings of temperature and gain significant digits. Doing so violates all the rules for analyzing physical data. I know that many mathematicians and statisticians think you *can* do this, but it just becomes “data in, garbage out” when you do so. It’s mathematical trickery and sleight of hand.
How do you get an anomaly of 0.1 deg when you can only read to the nearest half degree? The entire Fig 1 data, from -0.2 deg to +0.3 deg, is physically indistinguishable for most of the 20th century. You truly do not know where in that interval the true value of the temperature actually lies. The entire curve could be flat or even descending; you simply don’t know. And no amount of averaging, or even calculating the average out to 100 digits, will change that fact.
I implore each and every reader of this forum to go study up on the rules for significant digits. They were drummed into me by my engineering professors 50 years ago. Do they not teach this any more?
Nope. Error, sources of error (systematic or random), and propagation of error all appear to be unknown concepts to the kids I meet coming out of college.
…giving the formula ARF = 1.85*log₂(CO2) for all anthropogenic radiative forcing. The proof is in the pudding: it seems to work.
Since CO2 follows SST, there is no point to such an obvious curve-fitting exercise. CO2 doesn’t add heat to, or store heat in, the climate.
The entire exercise reproduces a built-in bias towards CO2 warming; hence it’s circular reasoning.
God-awful, ludicrous article. Ardent lukewarmers appear to have one thing in common with the thermageddon clisci set: they display and employ mathematical knowledge (magnified by Matlab and Excel) brutishly, excessively, confidently and inappropriately.
Most are now aware of the concept of garbage in, garbage out where computer analysis is concerned, though in mainstream climate science there is still a devotional reverence for the “findings” and “experimental results” based on ‘data’ that has itself been tortured by mathematics and bias.
Mathematics, too, will do your bidding and give you answers that have no underpinning in reality, like those in the above article.
Unintelligently used statistics is the sixteen-pound hammer of mainstream climate science, and one reason why not one of the hundreds of predictions of climate disaster has come to pass over the 40 years of strident forecasting.
Judith Curry is a courageous, honorable scientist who suffered the consequences of diverging from the mainstream Climate Synod. But even she is 100% certain that it can’t be a cooler world by 2100.
In regard to the prediction for 2063-2137, or the average temperature by 2100 AD, one thing is for certain: we will still be in an Ice Age.
We have been in an Ice Age for millions of years, and the last million years have been the coolest times in this Ice Age.
An Ice Age is also called an Icehouse climate, though I prefer to call it an icebox climate. But anyhow, no one can argue that we are not in an Ice Age, though one can argue about whether we are living in the coolest Ice Age that has ever existed on Earth.
For instance, many claim that Earth endured a global climate called Snowball Earth, and it’s argued that there were snowball climates a few to several hundred million years ago.
An Ice Age is defined by having polar ice caps/ice sheets and a cold ocean. And of course we have ice caps and a cold ocean, and we are in an interglacial period of this millions-of-years Ice Age.
And the entire ocean’s average temperature has been in a range of about 1 to 5 C during these millions of years. And if the ocean were to get warmer than 5 C, one might argue whether the ocean is still cold enough to count as being in an icebox climate.
It seems it would be hard to argue, if one had a 5 C ocean and didn’t have polar ice caps, that we were still in an Ice Age. Or rather, an Ice Age is defined by both these factors.
But it seems one could have an ocean which was 5 C, and it might take a while to melt the polar ice caps. And it’s possible that one ice cap might melt while the other {probably Antarctica} could remain quite cold and frozen.
But it seems that if one ice cap melted and the ocean was 5 C or warmer, one could easily say that we were leaving the Ice Age {and perhaps centuries or thousands of years later determine that, instead of leaving, we were returning to the Ice Age}.
Now, we don’t define an Ice Age by the amount of CO2 in our atmosphere, but over the millions of years of our Ice Age, CO2 levels have been {and still are} at dramatically low levels.
One might imagine that the low levels of CO2 were a factor causing us to be in an Ice Age, but most people would say the Ice Age caused CO2 levels to be lower, and that what caused the Ice Age was geological change of the Earth’s surface, such as plate tectonic movement putting Antarctica at the south pole, but also many other kinds of geological change.
In terms of warming within the 2063-2137 time period, or by 2100, it seems the top portion of the ocean, say the top 1000 meters, needs to warm up a bit. And if this significant portion of the ocean warms up, the result will be an increase in global air temperature. Over the last couple of decades we have measured this part of the ocean, and it is warming. Though I will note there are “studies” indicating some deeper portions of the oceans have been cooling.
Our entire ocean’s average temperature is currently about 3.5 C. And it is commonly said that 90% of the ocean is about 3 C or colder.
Despite some deeper water cooling, it seems to me that over the last century or so the entire ocean’s average temperature has been increasing.
Anyhow, it’s said that about 90% of “global warming” goes into heating the entire ocean.
And if this were not the reality of how Earth’s temperature increases, and instead all the “warming” were to somehow heat only the atmosphere, the world would be quite hot.
Now, that is a very twisted way to look at it; it’s not the way the world works.
Global warming is basically the mixing of the heat of the ocean waters: the less this happens, the less global warming.
Or rather, we have cold water falling in the polar regions; if the warming of the ocean doesn’t at least balance this ocean cooling, the world’s average temperature gets colder.
Again I don’t understand all the math and science Mr. Pratt has provided us. I’m sure it makes some sense to many of the readers here at WUWT.
My problem is that neither he nor Dr. Curry addresses the important issue at hand: CAGW, Catastrophic Anthropogenic Global Warming. The alarmists have been screeching at us for a couple of decades that we are all doomed if the average global temperature rises more than 2 degrees centigrade from what it was in preindustrial times. That is what is important: is the earth and mankind doomed to catastrophe if the average global temperature rises 0.5 or 0.9 centigrade in the next eight decades? If we are not all doomed then what is all the fuss about?
Considering that we were coming out of the Little Ice Age in preindustrial times I just don’t see the problem. Will we hit the 2 degree increase? I’m sure we will, and I am also sure that the earth will be fine and so will we.
–Considering that we were coming out of the Little Ice Age in preindustrial times I just don’t see the problem. Will we hit the 2 degree increase? I’m sure we will, and I am also sure that the earth will be fine and so will we.–
I think over the last hundred years the world has been recovering from the Little Ice Age. And over the last 100 years global temperature has increased by about 1 C.
And currently I think the global average temperature is about 15 C, whereas during the Little Ice Age the global average temperature was closer to 14 C.
But an aspect of global average temperature is that the Southern Hemisphere is about 1 C cooler than the Northern Hemisphere, and possibly this imbalance will continue to 2100 AD. This imbalance has been known about for more than a hundred years, and I know of no reason it will change one way or the other in the next 80 years. Also, many argue the Little Ice Age was mostly a cooling of the Northern Hemisphere {I don’t think it was only a cooling of the Northern Hemisphere, but maybe mostly a cooling of the Northern Hemisphere}.
I think it’s possible that by 2100 the global average temperature could be 16 C, though it’s also possible it remains around 15 C. And it’s also possible there will be a slightly larger or smaller imbalance of the hemispheres.
Now, it seems I would prefer 0.5 C of global warming rather than 0.5 C of global cooling, and it seems the warming of the last several decades has been a good thing. I don’t think it’s likely global temperature will drop by 0.5 C before 2100 AD; it seems just as unlikely that global temperature will be more than 16 C, or increase by more than 1 C, before 2100.
But global average temperature is mostly the average temperature of the surface of the entire ocean, and that average is about 17 C. Or rather, the tropical ocean is about 26 C and the rest of the ocean’s surface averages about 11 C. And increasing the average global temperature by 1 C is mostly about warming the 60% of the ocean outside the tropics, i.e. raising that 11 C average. And 11 C, or even 15 C, is not very warm.
Now, the ocean’s average surface temperature is about 17 C and the average land surface is about 10 C, and combining those gives about 15 C for the global average temperature.
I think there is a good chance the Arctic Ocean becomes ice free, or very close to ice free, in the arctic summer by the year 2100 AD. I don’t think that would be a serious problem, but it could be the most dramatic climate change that occurs.
But it seems we could also get more sea ice, rather than less, and that would be worse than an ice-free arctic {it would probably kill polar bears, which are currently booming in population}.
The point is I just don’t see a catastrophe coming our way. If we aren’t facing a catastrophe what on earth is all the fuss about?
Here’s the trick. You can put radiated energy through the surface of water but not physical heat. You need a bucket of water and a heat gun. Try it for yourself.
In periods of low solar activity, ionization of the lower stratosphere increases, which leads to an increase in the temperature of the lower stratosphere at high latitudes and cooling of the surface in winter at middle latitudes.
“Carbon-14 is produced in the upper layers of the troposphere and the stratosphere by thermal neutrons absorbed by nitrogen atoms. When cosmic rays enter the atmosphere, they undergo various transformations, including the production of neutrons.
The highest rate of carbon-14 production takes place at altitudes of 9 to 15 km (30,000 to 49,000 ft) and at high geomagnetic latitudes.
Production rates vary because of changes to the cosmic ray flux caused by the heliospheric modulation (solar wind and solar magnetic field), and due to variations in the Earth’s magnetic field.
The atmospheric half-life for removal of 14CO2 has been estimated to be roughly 12 to 16 years in the northern hemisphere.”
https://en.wikipedia.org/wiki/Carbon-14
“On the assumption that Figure 4 stays straight through 2100, and Earth doesn’t get hit in the meantime by something much worse than anything since 1850 such as a supervolcano or asteroid, I feel pretty comfortable with my “Two more degrees” forecast for the 75 years 2063-2137.”
OK – so it answers the question “will Planet Earth become uninhabitable 12+ or 75+ years from now?” with: NO.
Well done, thx!