How constant is the “solar constant?”

By Andy May

The IPCC lowered their estimate of the impact of solar variability on the Earth’s climate from the already low value of 0.12 W/m2 (Watts per square meter) given in their fourth report (AR4) to a still lower value of 0.05 W/m2 in the 2013 fifth report (AR5); the new value is illustrated in Figure 1. These are long-term values, estimated for the 261-year period 1750-2011, and they apply to the “baseline” of the Schwabe ~11-year solar (or sunspot) cycle, which we will simply call the “solar cycle” in this post. The baseline of the solar cycle is the issue, since the peaks are known to vary. The Sun’s output (total solar irradiance, or “TSI”) is known to vary at all time scales (Kopp 2016); the question is by how much. The magnitude of short-term changes in solar output, those over less than 11 years, is known relatively accurately, to better than ±0.1 W/m2. But the magnitude of solar variability over longer periods of time is poorly understood. Yet small changes in solar output over long periods of time can affect the Earth’s climate in significant ways (Eddy 1976) and (Eddy 2009). In John Eddy’s classic 1976 paper on the Maunder Minimum, he writes:

“The reality of the Maunder Minimum and its implications of basic solar change may be but one more defeat in our long and losing battle to keep the sun perfect, or, if not perfect, constant, and if inconstant, regular. Why we think the sun should be any of these when other stars are not is more a question for social than for physical science.” (Eddy 1976)

Using recent satellite data, it has been determined that the Sun puts out ~1361 W/m2, measured at 1 AU, the average distance of the Earth’s orbit from the Sun. Half of the Earth’s surface is always in the dark, and sunlight hits most latitudes at an angle, so to get the average absorbed or reflected flux we divide by 4, giving ~340 W/m2. Then, after subtracting the energy reflected by the atmosphere and the surface, we find the average radiation absorbed is about 240 W/m2.
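The arithmetic behind these round numbers can be sketched in a few lines of Python. Note the albedo value here is an assumption, chosen only to reproduce the ~240 W/m2 figure quoted above (roughly the commonly cited ~0.3):

```python
# Sketch of the averaging described above, using the article's round numbers.
TSI = 1361.0                            # W/m2 measured at 1 AU
top_of_atmosphere = TSI / 4.0           # sphere vs. disk geometry -> ~340 W/m2
albedo = 0.294                          # assumed; picked to yield ~240 W/m2 absorbed
absorbed = top_of_atmosphere * (1.0 - albedo)

print(round(top_of_atmosphere), round(absorbed))  # -> 340 240
```

The divide-by-4 step is purely geometric: the Earth intercepts sunlight as a disk of area pi*r^2 but radiates and averages over a sphere of area 4*pi*r^2.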

The Earth warms when more energy is added to the climate system; the added energy is called a climate “forcing” by the IPCC. The total anthropogenic forcing over the industrial era (1750 to 2011, 261 years), according to the IPCC (IPCC 2013, 661), is about 2.3 (1.1-3.3) W/m2, or about 1.0% of 240. Also on page 661, the IPCC estimates the total forcing due to greenhouse gases in 2011 to be 2.83 (2.54-3.12) W/m2. The forcing for CO2 alone is 1.82 (1.63-2.01) W/m2. They further estimate that the growth of CO2-caused forcing from 2001 to 2011 is 0.27 W/m2, or 0.027 W/m2/year, using the same methods. These are a lot of numbers, so we’ve summarized them in Table 1 below.

IPCC Anthropogenic Forcing Estimates (AR5, page 661)

Time period   Cause    Years   Total Forcing   Forcing per year   Percent of
                               (W/m2)          (W/m2/yr)          240 W/m2
1750-2011     Humans   261     2.3             0.0088             0.96%
1750-2011     CO2      261     1.82            0.0070             0.76%
2001-2011     CO2       10     0.27            0.0270             0.11%

Table 1. Anthropogenic forcing as estimated by (IPCC 2013, 661).
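As a quick check, the derived columns of Table 1 can be recomputed from the IPCC totals quoted above; this is only arithmetic on the article’s numbers, not IPCC code:

```python
# Recompute Table 1's per-year and percent-of-240 columns from the AR5 totals.
rows = [("1750-2011", "Humans", 261, 2.30),
        ("1750-2011", "CO2",    261, 1.82),
        ("2001-2011", "CO2",     10, 0.27)]

for period, cause, years, total_forcing in rows:
    per_year = total_forcing / years        # W/m2/yr
    percent = 100.0 * total_forcing / 240.0 # share of the ~240 W/m2 absorbed
    print(f"{period} {cause:6s} {per_year:.4f} W/m2/yr  {percent:.2f}%")
```

Running this reproduces the 0.0088, 0.0070, and 0.0270 W/m2/yr rates and the 0.96%, 0.76%, and 0.11% shares in the table.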

The IPCC’s assumed list of radiative forcing agents and their total forcing from 1750 to 2011 are shown in Figure 1. Next to the IPCC table, I’ve shown the Central England temperature (CET) record, which is the only complete instrumental temperature record that goes back that far in time. The CET is mostly flat until the end of the Little Ice Age and then, after a dip around the time of the Krakatoa volcanic eruption in 1883, it shows warming to modern times.

Figure 1. On the left is the IPCC list of radiative forcing agents from page 697 of AR5 WG1 (IPCC 2013). Notice they assume that solar irradiance is very small, in this post we examine this assumption. On the right is the Central England temperature record (CET), the only instrumental temperature record that goes back to 1750. The CET data source is the UK MET office.

If the Sun were to supply all 2.3 W/m2 of the forcing described in Figure 1, but as a steady change over 261 years, the change each year would have to average ~0.0088 W/m2/year. So, assuming constant albedo (reflectivity), the change in solar output would have to be 4×0.0088, or 0.035 W/m2/year on average. As noted above, we multiply by four because the Earth is a sphere and half of it is in the dark. This is a total increase in solar output of 9.2 W/m2 over 261 years (1750-2011), a change of 0.7%. Some might say we should start at 1951, since that is the agreed date when CO2 emissions became significant (IPCC 2013, 698-700). But I started at 1750 to cover the “industrial era” as defined by the IPCC; the choice is somewhat arbitrary, as long as we go back far enough to precede any significant human CO2 emissions. The year 1750 is also useful because it is near the end of the worst part of the Little Ice Age, the coldest period in the last 11,700 years (the Holocene). Do we know the solar output over the past 261 years accurately enough to say the Sun could not have changed 9.2 W/m2, or some large portion of that amount? In other words, is the IPCC assumption that solar variability has a very small influence on climate valid?
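The chain of arithmetic in this paragraph can be laid out explicitly (all values are the article’s; note the factor of 4 cancels out of the 261-year total, which is just 4 × 2.3):

```python
# Solar-output equivalent of the IPCC's total anthropogenic forcing.
total_forcing = 2.3                     # W/m2, IPCC AR5 total, 1750-2011
years = 261

surface_rate = total_forcing / years    # ~0.0088 W/m2/yr at the surface
tsi_rate = 4.0 * surface_rate           # ~0.035 W/m2/yr in TSI terms
tsi_total = tsi_rate * years            # = 4 * 2.3 = 9.2 W/m2 over 261 years
percent = 100.0 * tsi_total / 1361.0    # ~0.7% of TSI

print(f"{tsi_rate:.3f} W/m2/yr, {tsi_total:.1f} W/m2 total, {percent:.1f}%")
```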

How accurate are our measurements of Solar output?

The solar cycle variation of TSI is about 1.5 W/m2, or 0.1%, from peak to trough (~5-7 years), which works out to roughly 0.25 W/m2/year, or 0.02%/year. These changes are much larger than the longer-term change of 0.0088 W/m2/year computed above. So, simply because we can see the ~11-year solar cycle does not necessarily mean we can see a longer-term trend that could have caused the current warming. Satellite TSI instruments deteriorate under the intense sunlight they measure, and they lose accuracy with time. We have satellite measurements of varying quality over much of the last four solar cycles. The raw data are plotted in Figure 2, and the critical ACRIM gap is highlighted in yellow. Because the Nimbus7/ERB (Earth Radiation Budget) and ERBS/ERBE instruments are much less precise and accurate than the ACRIM (Active Cavity Radiometer Irradiance Monitor) instruments, filling this gap is the most important problem in making a long-term TSI composite (Scafetta and Willson 2014).

Figure 2. Raw satellite total solar irradiance (TSI) measurements. The ACRIM gap is identified in yellow. The trend of the NIMBUS7/ERB instrument in the ACRIM gap is emphasized with a red line. Source: (Soon, Connolly and Connolly 2015).

As Figure 2 makes clear, calibration problems have caused the satellites to measure widely different values of TSI; the solar cycle minima range from 1371 W/m2 down to 1360.5 W/m2. Currently, the correct minimum is thought to be around 1360.5 W/m2, but just a few years ago it was thought to be ~1364 W/m2 (Haigh 2011). After calibration corrections have been applied, each satellite produces an internally consistent record, but the records are not consistent with one another, and no single record covers two or more complete solar cycles. This makes the determination of long-term trends problematic.

There have been three serious attempts to build single composite TSI records from the raw data displayed in Figure 2. They are shown in Figure 3.

Figure 3. Three common composites of the data shown in Figure 2. The ACRIM gap is identified in yellow. The PMOD composite is by PMOD (Frohlich 2006), which is also the source of the figure (pmodwrc.ch); the ACRIM composite is from the ACRIM team (Scafetta and Willson 2014); the IRMB composite is from the Royal Meteorological Institute of Belgium (Dewitte, et al. 2004).

The ACRIM and IRMB composites show an increasing trend during the ACRIM gap, while the PMOD composite shows a declining trend. This figure was made several years ago by the PMOD team, when the baseline of the TSI trend was more uncertain, so the IRMB and PMOD composites are shown with a ~1365 W/m2 base and the ACRIM composite is shown with the ~1360.5 W/m2 baseline that is currently preferred. The important point, shown in Figure 3, is that the long-term PMOD trend is down, the ACRIM trend is up to the cycle 22-23 minimum (~1996) and then down to the cycle 23-24 minimum (~2009), and the IRMB trend is up. Thus, the direction of the long-term trend is unclear. Figure 4, from (Scafetta and Willson 2014), shows the details of the PMOD and ACRIM trends.

Figure 4. The ACRIM and PMOD composites showing opposing slopes in the solar minima and in the ACRIM gap, highlighted in yellow. Source: (Scafetta and Willson 2014).

In Figure 4 we see the differences more clearly. The ACRIM TSI trend from the solar cycle 21-22 minimum to the 22-23 minimum is +0.5 W/m2 in 10 years, or 0.05 W/m2 per year; the trend is then down to the cycle 23-24 minimum. The PMOD composite declines steadily, about 0.14 W/m2 in 22 years (1987-2009), or 0.006 W/m2/year. The difference between these trends is 0.056 W/m2/year. If this is extrapolated linearly over 261 years, the difference is 14.6 W/m2, more than the 9.2 W/m2 required to cause the recent warming.
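These slopes and the extrapolation can be checked directly. The small discrepancy with the 14.6 W/m2 quoted in the text comes from rounding the slope difference to 0.056 before extrapolating:

```python
# Opposing composite trends read off Figure 4, extrapolated as in the text.
acrim_slope = 0.5 / 10        # +0.05 W/m2/yr between the 21-22 and 22-23 minima
pmod_slope = -0.14 / 22       # about -0.006 W/m2/yr over 1987-2009

slope_difference = acrim_slope - pmod_slope    # ~0.056 W/m2/yr
over_261_years = slope_difference * 261        # ~14.6-14.7 W/m2

print(f"{slope_difference:.3f} W/m2/yr -> {over_261_years:.1f} W/m2 over 261 years")
```

A linear extrapolation of an 11-year difference over 261 years is of course a crude bound, not a prediction; the point is only that the composites disagree by more than the 9.2 W/m2 at issue.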

NOAA believes the SORCE satellite TIM (Total Irradiance Monitor) instrument is accurate and accepts the ~1360.5 W/m2 TSI baseline it establishes. They have normalized the three composites discussed above to this baseline. After normalizing, they averaged the three composites to produce the record shown in Figure 5. The SORCE/TIM record starts in February 2003, so the average after that date is replaced by the SORCE/TIM record. Averaging three records with differing trends creates a meaningless trend, so this TSI record is of little use for our purposes. But NOAA also constructs an uncertainty function (something notably missing for the individual composites) using the differences between the composites and the estimated instrument error. The NOAA composite is shown in Figure 5 and the computed uncertainty in Figure 6; both figures show the raw data and a 100-day running average.

Figure 5. The NOAA/NCEI composite. It is the average of the three composites shown above, with the data after Feb. 2003 replaced by the SORCE/TIM data. The ACRIM gap is indicated in yellow. The low points between solar cycles 21-22 and 22-23 are marked on the plot. Data source: NOAA/NCEI.

In the NOAA composite (Figure 5), the increase in the minimum value from the solar cycle 21-22 minimum to the 22-23 minimum appears, just as it does in the IRMB and ACRIM composites. The solar cycle 23-24 minimum drops down to the level of the 21-22 minimum, but this is a foregone conclusion, since the earlier records are normalized to this value in the SORCE/TIM record. In fact, given that everything is normalized to TIM, we have only two points in this whole composite that we can try to use to determine a long-term trend: the 21-22 minimum and the 22-23 minimum. The peaks cannot be used, since they are known to be variable (Kopp 2016). Thus, we don’t know very much.

Figure 6. NOAA TSI uncertainty, computed from the difference between the ACRIM and PMOD values after normalization to the SORCE/TIM values, plus an assumed 0.5 W/m2 uncertainty in the SORCE/TIM absolute scale until the TIM data and uncertainties become available after Feb. 2003. A rapid increase in the computed TIM error occurs late in 2012. The ACRIM gap is highlighted in yellow. The low points between solar cycles 21-22 and 22-23 are marked on the plot. Data source: NOAA/NCEI.

Greg Kopp has calculated that, in order to observe a long-term change in solar output of 1.4 W/m2 per century (about 3.5 W/m2 since 1750, or 38% of the total 9.2 W/m2 required to explain modern warming), non-overlapping instruments would need an accuracy of ±0.136 W/m2 and 10 years of measurements to even see the change (Kopp 2016). As Figure 6 makes clear, the SORCE/TIM instrument, the best instrument in orbit today, has an uncertainty at least 3.5 times the level required to detect such a trend, and it decayed rapidly after 10 years.
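Kopp’s threshold numbers, as used here, work out as follows (the 38% figure follows from the rounded 3.5 W/m2):

```python
# Kopp's (2016) detection-threshold arithmetic, as applied in the text.
rate_per_century = 1.4                    # W/m2 per century, hypothetical trend
since_1750 = rate_per_century * 261 / 100 # ~3.65 W/m2; the text rounds to ~3.5
share = 100.0 * 3.5 / 9.2                 # ~38% of the 9.2 W/m2 at issue

print(f"{since_1750:.2f} W/m2 since 1750, ~{share:.0f}% of 9.2 W/m2")
```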

Discussion

The estimated uncertainty in the NOAA satellite composite is well over 0.5 W/m2, and it increases as a function of time before 2003. The three original composites come with no estimated uncertainty; their accuracy, or lack of it, is unknown. NOAA simply used the differences between the composites to estimate the uncertainty. This makes estimating a trend from the satellite data problematic (Haigh 2011). To look at the longer term, we must rely on solar proxies, such as sunspot counts and proxies of the strength of the solar magnetic field. The relationship of the proxies to solar output is not known and can only be estimated by correlating the proxies to satellite data. Professor Joanna Haigh summarizes this in the following way:

“To assess the potential influence of the Sun on the climate on longer timescales it is necessary to know TSI further back into the past than is available from the satellite data … The proxy indicators of solar variability discussed above have therefore been used to produce an estimate of its temporal variation over the past centuries. There are several different approaches taken to ‘reconstructing’ the TSI, all employing a substantial degree of empiricism and in all of which the proxy data (such as sunspot number) are calibrated against the recent satellite TSI measurements, despite the problems with this data outlined above.” (Haigh 2011)

The uncertainty in these proxy estimates cannot be quantified, but it must be greater than the potential error (uncertainty) in the satellite data, which varies from 0.48 W/m2 to over 0.8 W/m2. Let’s return to the slopes discussed above and illustrated in Figure 4. If we combine the opposing slopes of the ACRIM and PMOD composites, the difference is 0.056 W/m2/year. The NOAA estimated uncertainty (Figure 6) in the cycle 21-22 minimum is over 0.7 W/m2, and in the 22-23 minimum it is over 0.6 W/m2. If this uncertainty is considered, the extrapolated long-term linear trend could be as high as 0.13 to 0.18 W/m2/year. Over 261 years, these values could add up to 34 to 47 W/m2. Both values are much higher than the 9.2 W/m2 required to account for the roughly one degree of warming observed over the past 261 years (see Figure 1 and the discussion).
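The extrapolations in this paragraph are simple products of the uncertainty-inflated trends and the 261-year span:

```python
# Linear extrapolation of the uncertainty-inflated trend bounds from the text.
for slope in (0.13, 0.18):                 # W/m2/yr
    print(f"{slope} W/m2/yr * 261 yr = {round(slope * 261)} W/m2")
```

This yields the 34 and 47 W/m2 bounds quoted above; as before, the linear extrapolation is a bounding exercise, not a claim about how the Sun actually behaved.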

Given the way the composites have been generated, we have only two points to work with in determining a long-term solar trend: the lows of solar cycles 21-22 and 22-23. Everything has been adjusted to the low of solar cycle 23-24, so it isn’t usable. With two points, all you get is a line, and a linear change is unlikely for a dynamo. Basically, the satellite data are not enough.

We have no opinion on the relative merits of the three composite TSI records discussed. There are, for the most part, logical reasons for all the corrections made in each composite. The problem is that they are all different and have opposing trends. Each composite selects different portions of the available satellite records and applies different corrections. The resulting different long-term trends are simply a reflection of the component instrument instabilities (Kopp 2016). For discussions of the merits of the ACRIM composite see (Scafetta and Willson 2014), for the PMOD composite see (Frohlich 2006), and for the IRMB composite see (Dewitte, et al. 2004). There are arguments for and against each composite. There are also numerous papers discussing how to extend the TSI record into the past using solar proxies. For a discussion of some of the most commonly used TSI reconstructions of the past 200 years, see (Soon, Connolly and Connolly 2015). The problem with the proxies is that their precise relationship to TSI, or to solar output in general, is unknown and must be based on correlations with the unfortunately flawed satellite records.

Whether one matches a proxy to the ACRIM or the PMOD composite can make a great deal of difference in the resulting long-term TSI record, as discussed in (Herrera, Mendoza and Herrera 2015). As the paper makes clear, reasonable proxy correlations to the ACRIM and PMOD composites can result in computed values of TSI in the 1700s that differ by more than 2 W/m2. Kopp discusses this problem in more detail in his 2016 Journal of Space Weather and Space Climate article:

“TSI variability on secular timescales is currently not definitively known from the space-borne measurements because this record does not span the desired multi-decadal to centennial time range with the needed absolute accuracies, and composites based on the measurements are ambiguous over the time range they do cover due to high stability-uncertainties of the contributing instruments.” (Kopp 2016)

Kopp also provides us with the following plot (Figure 7) comparing different historical TSI reconstructions. The red NRLTSI2 reconstruction is the one that will be used for the upcoming IPCC report and in CMIP6 (Coupled Model Inter-comparison Project Phase 6). The TSI reconstructions plotted in Figure 7 are all empirical and make use of various proxies of solar activity (but mainly sunspot counts) and their assumed relationship to total solar output. Figure 7 illustrates some of the uncertainty in these assumptions.

Figure 7. Various recent published TSI reconstructions. The NRLTSI2 reconstruction will be used for the upcoming IPCC report and CMIP6. There is a great deal of spread during the Maunder Minimum (over 2 W/m2), and the long-term trends are very different. The figure is modified after one in (Kopp 2016).

In answer to the question posed at the beginning of the post: no, we have not measured the solar output accurately enough, over a long enough period, to definitively say solar variability could not have caused all or a significant portion of the warming observed over the past 261 years. The most extreme reconstruction in Figure 7 (Lean 2000) suggests the Sun could have caused 25% of the warming, and this is without considering the considerable uncertainty in the TSI estimate. There are even larger published TSI differences from the modern day, up to 5 W/m2 (Shapiro, et al. 2011), (Soon, Connolly and Connolly 2015) and (Schmidt, et al. 2012). We certainly have not proven that solar variability is the cause of all or even a large portion of the warming, only that we cannot exclude it as a possible cause, as the IPCC appears to have done.

Works Cited

Dewitte, S., D. Crommelynck, S. Mekaoui, and A. Joukoff. 2004. “Measurement and Uncertainty of the Long-Term Total Solar Irradiance Trend.” Solar Physics 224 (1-2): 209-216. doi:https://doi.org/10.1007/s11207-005-5698-7.

Eddy, John. 1976. “The Maunder Minimum.” Science 192 (4245). https://www.jstor.org/stable/1742583?seq=1#page_scan_tab_contents.

—. 2009. The Sun, the Earth and near-Earth space: a guide to the Sun-Earth system. Books express. https://www.amazon.com/Sun-Earth-Near-Earth-Space-Sun-Earth/dp/1782662960/ref=sr_1_2?ie=UTF8&qid=1537190646&sr=8-2&keywords=The+Sun%2C+the+Earth+and+near-Earth+space%3A+a+guide+to+the+Sun-Earth+system.

Fox, Peter. 2004. “Solar Activity and Irradiance Variations.” Geophysical Monograph (American Geophysical Union) 141. https://www.researchgate.net/profile/Richard_Willson3/publication/23908159_Solar_irradiance_variations_and_solar_activity/links/00b4951b20336363f2000000.pdf.

Frohlich, C. 2006. Solar Irradiance Variability since 1978. Vol. 23, in Solar Variability and Planetary Climates. Space Sciences Series of ISSI, by Calisesi Y., Bonnet R.M., Langen J. Gray L. and Lockwood M. New York, New York: Springer. https://link.springer.com/chapter/10.1007/978-0-387-48341-2_5.

Haigh, Joanna. 2011. Solar Influences on Climate. Imperial College, London. https://www.imperial.ac.uk/media/imperial-college/grantham-institute/public/publications/briefing-papers/Solar-Influences-on-Climate—Grantham-BP-5.pdf.

Herrera, V. M. Velasco, B. Mendoza, and G. Velasco Herrera. 2015. “Reconstruction and prediction of the total solar irradiance: From the Medieval Warm Period to the 21st century.” New Astronomy 34: 221-233. https://www.sciencedirect.com/science/article/pii/S1384107614001080.

IPCC. 2013. In Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, by T. Stocker, D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley. Cambridge: Cambridge University Press. https://www.ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_SPM_FINAL.pdf.

Kopp, Greg. 2016. “Magnitudes and timescales of total solar irradiance variability.” J. Space Weather and Space Climate 6 (A30). https://www.swsc-journal.org/articles/swsc/abs/2016/01/swsc160010/swsc160010.html.

Scafetta, Nicola, and Richard Willson. 2014. “ACRIM total solar irradiance satellite composite validation versus TSI proxy models.” Astrophysics and Space Science 350 (2): 421-442. https://link.springer.com/article/10.1007/s10509-013-1775-9.

Schmidt, G.A., J.H. Jungclaus, C.M. Ammann, E. Bard, P. Braconnot, and T. J. Crowley. 2012. “Climate forcing reconstructions for use in PMIP simulations of the last millennium.” Geosci. Model Dev. 5: 185-191. http://pubman.mpdl.mpg.de/pubman/faces/viewItemOverviewPage.jsp?itemId=escidoc:1407662.

Shapiro, A., W. Schmutz, E. Rozanov, M. Schoell, M. Haberreiter, A. V. Shapiro, and S. Nyeki. 2011. “A new approach to the long-term reconstruction of the solar irradiance leads to large historical solar forcing.” Astronomy and Astrophysics 529 (A67). https://www.aanda.org/articles/aa/pdf/2011/05/aa16173-10.pdf.

Soon, Willie, Ronan Connolly, and Michael Connolly. 2015. “Re-evaluating the role of solar variability on Northern Hemisphere temperature trends since the 19th century.” Earth Science Reviews 150: 409-452. https://www.sciencedirect.com/science/article/pii/S0012825215300349.

September 19, 2018 10:20 am

Andy, good article showing that the Sun is being neglected without sufficient information to justify doing so. I think your central point is well supported on the available evidence.

Given the way the composites have been generated, we only have two points to work with in determining a long-term solar trend, the points are the lows of solar cycles 21-22 and 22-23.

I have something to add to this point.

Although the relationship between solar activity and its proxies is less tight than what some solar physicists would like us to believe, both the smoothed sunspot number at solar minima, and the number of spotless days per solar minimum, support a lower level of activity for the SC22-23 minimum than for the SC21-22 minimum.

http://www.sidc.be/silso/IMAGES/GRAPHICS/spotlessJJ/SC25_SCvsNumber.png

So proxies support the PMOD composite (figure 3 top, figure 4 bottom), over the ACRIM and IRMB composites. Solar activity at the minima since 1980 appears to have been decreasing.

Reply to  Javier
September 19, 2018 11:27 am

proxies support the PMOD composite

Hi Javier. PMOD versions since 2016, especially the last version are showing a decline into this minimum to a lower level now than the previous 2008/9 minimum:


I find this to be out of line with the number of x-ray flux days of “A0.0” or lower as occurred during the last minimum (true for zero sunspot days too). We are just now ahead of 2006 in “A0.0” days, with SORCE TSI tracking near 2006 too:


TSI and other solar indices being less tight than expected is because it takes time for the sun’s plasma to re-organize and diminish after new active regions emerge, grow, then decay. The plasma activity and TSI generated by it persists for several months before tapering off, and new regions will add to it.

The emergence of new regions will keep TSI kiting higher, until lower activity ensues. There is a lag at the sun between sunspot activity peaks and peak TSI from them. At the end of the cycle, where we’re at now, the emergence of very small spots will keep TSI kiting upward in small pulses, as it did this summer. Now the TSI 90day trend is falling but the level still isn’t where it was during the last minimum for times of similar sunspot number and F10.7cm. Why? The sun’s surface plasma hasn’t diminished enough yet, as evidenced by the fact we haven’t had as many zero sunspot days nor x-ray A0.0 days.

SORCE TSI should fall further as the number of x-ray A0.0 and sunspot number zero days increases into the minimum. Whatever difference in correlation to F10.7cm at this solar minimum compared to the previous minimum will define the true degradation for SORCE for me, and TSIS-1 is on the way. PMOD is over-corrected in my view. How do we know PMOD is wrong today? Because they’ll change it in the next version in a few months! There have been up to 9 changes to most years of PMOD TSI since version 1508.

Reply to  Bob Weber
September 19, 2018 2:49 pm

“PMOD versions since 2016, especially the last version are showing a decline into this minimum to a lower level now than the previous 2008/9 minimum:
I find this to be out of line with the number of x-ray flux days of “A0.0” or lower”

I agree. That doesn’t seem to be correct.

Reply to  Andy May
September 19, 2018 2:56 pm

“What is clear to me (for what that is worth) is that we don’t have accurate enough measurements for long enough, to establish a TSI trend.”

I agree. I would even go further and say that even establishing a trend from current measurements is shaky. Instruments slowly die while measuring TSI and that is not a simple problem to solve.

“Further, we can’t even be sure TSI is the whole story, other solar factors may be important.”

Or we can be pretty sure that TSI IS NOT the whole story on solar activity effect on climate.

“My wish is that the IPCC would take solar activity variations at least as seriously as they take CO2 levels.”

That is nearly impossible as the IPCC is a political body created with the only goal of blaming climate change on human factors. Unless solar variability can be blamed on humans and taxed there’s no chance.

Reply to  Javier
September 19, 2018 3:54 pm

Or we can be pretty sure that TSI IS NOT the whole story on solar activity effect on climate.

Solar wind driven geomagnetic activity does affect the polar vortex, but does it add heat?

The TSI trend has also changed over time since the Maunder minimum.

fwiw TSI is currently having the same effect on the downside of this solar cycle as during the last:


September 19, 2018 10:36 am

The solar inconstant is the inverse of the AMO.


Editor
Reply to  Ulric Lyons
September 21, 2018 6:23 am

Thanks, nice plot.

September 19, 2018 10:40 am

Half the reason the climate change models have no predictive power is their complete omission of Henry’s Law of Solubility. That’s the physical phenomenon by which the oceans regulate the concentration of CO2 in the atmosphere. It is a negative feedback that mitigates the forces that would tend to change the concentration, namely the biosphere, volcanoes, and human activity.

The other half of the model failure is its assumption of a constant albedo. Cloud cover albedo adjusts daily to the intensity of solar radiation. It’s the cloud burn-off effect, and it amplifies TSI. It is a positive feedback to solar radiation that tends to account for the now dated published reports of atmospheric amplification of TSI. It’s a phenomenon that occurs at low latitudes in the headwaters of the so-called ThermoHaline Circulation, where it is carried in the mixed layer to the poles, then to the deep ocean, to emerge at the Equator on a time scale of one millennium. Those lags appear in the history comparing Sea Surface Temperature to solar radiation.

So Anthropogenic Global Warming (AGW), the model that concludes that CO2 and not the Sun is controlling Earth’s surface temperature variations, is wrong twice, both times on the physics. It’s like a vast mobile not fastened at the ceiling.

A model with no predictive power is not a real scientific model, regardless that it might meet all three of Popper’s “Intersubjectivity” criteria of peer review, consensus support, and publication in an approved professional journal, (“even if this is limited to a circle of specialists” he added). This, along with eschewing definitions and pragmatism, comprise the steps in Popper’s deconstruction of Modern Science by which he removed objectivity from Modern Science, tossing the carcass to academics and the “publish or perish” movement.

Of course, holding academic science (i.e., Post Modern Science) to the standards of Modern Science is unfair, but it is inevitable as the promise of AGW, AKA “climate change”, falls farther and farther behind the curve.

Reply to  Jeff Glassman
September 19, 2018 11:16 am

There should be a CO2 negative feedback function with the North Atlantic being warmer during solar minima. Cloud cover though has declined with the shift to the warm AMO phase, which would imply that is also a negative feedback.
http://digital.csic.es/bitstream/10261/67041/3/Atlantic_Ocean_CO2_uptake.pdf

John Tillman
Reply to  Jeff Glassman
September 19, 2018 12:05 pm

Jeff,

The GCMs don’t do clouds. They “parameterize” them, so that their net effect can be whatever the programmer wants it to be.

IOW, GIGO.

Reply to  John Tillman
September 20, 2018 5:20 am

There are an increasing number of papers making the claim that warming has driven a decline in tropical low level cloud cover. It doesn’t seem to occur to them that the decline in low cloud cover has driven the warming. Primarily because they have a big fat ‘official’ CO2 warming signal in their minds that they have to apply to the models without fail, so it must be anthropogenic warming what done it look, what on Earth else could it be etc.

Reply to  John Tillman
September 20, 2018 10:12 am

Everything in a GCM is ultimately a programmer decision. And it doesn’t matter whether albedo is parameterized, pasteurized, or homogenized. In the Real World, albedo is dynamic, a positive feedback to and an amplifier of solar radiation. In the models albedo is static, a constant. Result: the models cannot get the effects of the Sun right.

The part of a model that counts is its predictive power, and the only accessible prediction known within those GCMs is Equilibrium Climate Sensitivity. But the models get that number, one critical to its distant projections beyond our lifetimes and experience, wrong by a huge amount plus a sign error! IPCC reports ECS >1.5C/2xCO2 (90% confidence), >3 (50%), and >4.5 (17%). Those form a straight line in log space yielding a pitiful 2.2% IPCC confidence for the 0.7 C/2xCO2 estimated by Lindzen & Choi (2011)).

The IPCC defines ECS as the rise in temperature following a (step) increase in CO2. That’s the way it works in the models, but not in the Real World. Real CO2 doesn’t lead temperature; atmospheric CO2 lags surface temperature. Lindzen & Choi assumed a lead/lag relationship. The L&C ECS would be far more accurately stated as -0.7C/2xCO2.

Real science requires both fidelity to definitions and empirical support for parameters, most especially leads and lags.

Reply to  Jeff Glassman
September 20, 2018 10:28 am

In the real world the AMO is warmer during solar minima, and drives a decline in low cloud cover. That is a negative feedback.

John Tillman
Reply to  Jeff Glassman
September 20, 2018 10:45 am

Jeff,

L&C make sense to me because, on a homeostatic water world, there is good reason to conclude that net feedbacks should indeed be negative. Hence, a lab value of 1.1-1.2 degrees C per doubling of CO2 concentration might well turn out to be 0.7 degree C in the real, complex climate system.

Add in other human effects besides GHGs, and the sign of total man-made effects on climate could be negative as well, ie net cooling. But in any case, the effect is negligible on a global basis. Locally and regionally, the effects can be more detectable.

Clyde Spencer
September 19, 2018 11:59 am

Andy,

You said, “Then after subtracting the energy reflected by the atmosphere and the surface, we find the average radiation absorbed is about 240 W/m2.”

I would submit that the stated amount absorbed is an upper-bound because NASA treats the reflectivity of water as though the sun always has a small angle of incidence, and ignores the impact of specular reflection on the limbs of the Earth.

https://wattsupwiththat.com/2016/09/12/why-albedo-is-the-wrong-measure-of-reflectivity-for-modeling-climate/

JoshC
September 19, 2018 12:32 pm

Nicholas:

Came here to say this. You did a better job than I would have.

The data is averaged for 1 AU (line 5), but if you look (line 10, iirc) it includes the actual reading. (If I can find the original dataset — might have to hunt down the SORCE raw data again and double-check; it's been a while.)

That being said, in another discussion I also brought up that using absolute temperature we see a change of 0.8 K out of 288 K, a 0.3% change in temperature. If there were a direct correlation, that would mean a 4 W/m^2 change in 1365 W/m^2 would account for all of the changes to date.

I have seen anything from 1 to 1.5, to 4, or even 6 W/m^2 quoted for that same time period, though.
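The back-of-envelope figures above can be checked in a few lines. This is only a sketch of the arithmetic as stated in the comment (linear scaling of TSI with temperature, using the comment's own numbers); the Stefan-Boltzmann line is an added caveat, since radiative balance (T ~ S^(1/4)) would demand roughly four times the forcing:

```python
# Check of the back-of-envelope figures quoted above (values from the
# comment, not authoritative measurements).
delta_t = 0.8    # claimed temperature change, K
t_mean = 288.0   # mean surface temperature, K
tsi = 1365.0     # total solar irradiance, W/m^2

frac = delta_t / t_mean                     # fractional temperature change
print(f"fractional change: {frac:.3%}")     # ~0.28%, i.e. about 0.3%

# If temperature scaled linearly with TSI, the matching TSI change is:
delta_tsi_linear = frac * tsi
print(f"linear scaling: {delta_tsi_linear:.1f} W/m^2")   # ~3.8, i.e. about 4

# Stefan-Boltzmann scaling (T ~ S^(1/4)) would instead require ~4x as much:
delta_tsi_sb = 4 * frac * tsi
print(f"Stefan-Boltzmann: {delta_tsi_sb:.1f} W/m^2")     # ~15.2
```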

September 19, 2018 12:44 pm

One can clearly see that the sun entered an inactive mode of operation in late year 2005 from a very active mode of operation prior to this time and the differences are very obvious.

If the sun stays in this inactive mode (which I think it will for many years to come) the climatic impacts (which have already started) will become more pronounced.

It is the sun and its secondary and primary effects (accumulation effects) which control the climate, moderated by the geomagnetic field.

Climate changes are here and will continue moving forward, and in the process AGW should be rendered obsolete. I say by no later than 2020.

c cook
September 19, 2018 1:12 pm

As discussed, there seem to be a number of positive feedback factors that accentuate warming during periods of high solar output and, conversely, accentuate cooling during periods of low solar output.
Some potential factors have not been mentioned, so does anyone have further information or thoughts on these:
1)
UV output from the sun contributes about 10% of TSI.
UV output seems to vary as much as 10 times that of the visible and IR spectrum (How Does the Sun’s Spectrum Vary? Judith L. Lean)
During periods of higher solar output, UV output increases more than the visible and IR. Increased UV breaks down ozone in the atmosphere, and as a consequence less UV is absorbed in the upper atmosphere, causing it to cool while warming, in particular, the oceans, which readily absorb UV.
This is potentially an additional positive feedback effect, which also reinforces cooling during periods of low solar output: reduced UV increases ozone, warming the upper atmosphere, which can more readily radiate heat to space, while reducing the UV energy absorbed by the oceans.
Would anyone have any thoughts on this process and perhaps some actual solid numbers on the magnitude of this effect if any?

2)
We have seen an 18% increase in cosmic rays between March 2018 and July 2018 (Spaceweather.com).
This is during a period of decreasing solar activity and also decreasing magnetic field strength on Earth.
As discussed by many, weakening of the solar and Earth's magnetic fields causes an increase in cosmic ray influx, which can lead to increased cloud seeding, increased albedo, and thus more reflection and cooling.
The question here: is there any mechanism that would cause the Earth's magnetic field to weaken if the sun's magnetic field is weakening?
If yes, then perhaps this is an additional positive feedback factor; perhaps there is no interaction; or perhaps the effect is opposite and the overall feedback is negative. Would anyone have any further thoughts on this?

All my best regards

September 19, 2018 2:02 pm

I think the weakening of the earth’s magnetic field at the same time the sun’s magnetic field is weakening is random. Not related.

That said it is of paramount importance as far as the climate is concerned when the two fields are in sync as they now are.

The questions are as follows:

By what magnitude will the strength of both fields decrease?

How fast, in the case of the geomagnetic field, will the weakening progress?

How far away from the geographical poles will the North and South magnetic poles migrate?

What will be the duration of the weakening magnetic field events (both solar and geo)?

What are the threshold levels of the weakening of the two combined fields that would result in major climatic changes as opposed to minor changes? That is the 64 million dollar question and I do not know what the threshold levels are, except to say that they are out there.

There is a strong case to be made that if galactic cosmic rays increase enough they are going to have climatic impacts ranging from geological activity to global cloud coverage.

Changes in the global electrical circuit/Forbush events lend support to the galactic cosmic ray global cloud cover connections. It however can be masked at times, as many of the solar/climatic connections are if threshold levels are not attained.

MarkW
September 19, 2018 2:10 pm

How constant is TSI?

Constant enough for government work.

September 19, 2018 7:05 pm

Hi Griff! Another new name, cool!

September 20, 2018 12:12 am

What really is amazing is how stable is our G2V stellar object 1 AU away.
Utterly amazing how nice our Sun is. Serious.

Every G-class star observed for a significant period has shown large flaring activity, some on the order of X70 class (the Carrington Event was an X40-class event).
If our sun did that on a regular basis, we would not be here.

Why is that?

Reply to  Joel O'Bryan
September 20, 2018 1:31 am

What really is amazing is how stable is our G2V stellar object 1 AU away.
Utterly amazing how nice our Sun is. Serious.

I don't find it amazing at all. Observer bias. Were it not extremely stable, we wouldn't be here discussing it.

The Universe has a huge number of things set in such a particular way as to make us possible. From the values of some universal constants to the curious fact that solid water has a lower density than liquid water. Only two possible explanations. Either God, or we are just in the right Universe/Solar System/Binary Planet of the nearly infinite possibilities, and we are just suffering from a very acute case of observer bias. My mind chokes when I reach that point. It wasn’t designed by evolution for that task.

The Drake equation, popularized by Carl Sagan, is absolutely useless, pure speculation. This universe is our universe because it has remained possible for us to exist until now. No guarantees from now on.

Reply to  Joel O'Bryan
September 20, 2018 7:14 am

Joel: agree. Probably the reason life and, after a very long time, intelligent life (such as it is) developed is the very stability of Old Sol. Yes, it VERY gradually increases in brightness, but so slowly that life has adapted and the Earth itself has responded to maintain it.

Geoff Sherrington
September 20, 2018 7:24 pm

The official treatment of errors is not done in the classic manner.
The work fails, and fails badly.
There has to be an explanation of why there is a departure from classic error analysis.
In this out-in-space setting, the most plausible way to measure accuracy is to sample the variable TSI with different styles of equipment, then to make an overall envelope that contains most of the data, sometimes with implausible outliers excluded.
We have several sets of instruments on different satellites. None of them has been shown capable of accurate measurement of the main variable to the accuracy that is sought for meaningful further use. The range of values, from about 1360 to 1374 W/m2 equivalent, some 14 W/m2, is well beyond the +/-0.1 W/m2 that researchers would like to be able to measure for uses such as determining the absolute variation of TSI. Two orders of magnitude worse.
In classical analysis (here roughly eyeballed, though the principles apply), the error should be expressed in the first place as 1366 +/- 6 W/m2, with the number of sigmas stated. Next, if there is a reason to reduce this error, as by an adjustment to the readings from one satellite after another, then the error envelope can be lessened, provided that (and this is a very big proviso) the errors involved in that adjustment are known and insignificant.
In the present case, ab initio, one does not know if one satellite gave more accurate results than another (or, indeed, if they are not all wrong). In the ideal case, an adjustment would not be allowed on the data from one satellite unless that adjustment can be measured in some way, as by on-ground post-event simulation of the error and its magnitude. For an imaginary example, it might be found that one satellite did not capture enough irradiation, and this could be corrected by changing the shape/size of a mask on a similar device on the ground.
Proxies are far too inaccurate to be used to adjust TSI. Mostly, they are calibrated against TSI so circular logic arises.
Please note that I have not discussed precision, being the scatter of results about a mean that hopefully has no scatter in the ideal case.

No amount of adjustment will lead to the correct physics unless the correction can be reproduced and validated, with its magnitude, sign, and intrinsic error established. In that I wish the researchers the best of luck, because, in the classical sense, one would not use these results; they are not accurate enough. Geoff.
(p.s. I have not yet read the comments of others, but I shall, so some of what I wrote might be discounted already.)

Reply to  Geoff Sherrington
September 20, 2018 8:09 pm

one does not know if one satellite gave more accurate results than another
The differences between satellites are not due to random ‘errors’, but to systematic differences stemming from different constructions of the sensors and to their sensitivity to degrading due to the harsh space environment. For the most part, those systematic differences are understood and can be corrected for. The composite record is thus much more accurate than each of the individual series that go into the composite.

Proxies are far too inaccurate to be used to adjust TSI. Mostly, they are calibrated against TSI so circular logic arises.
Both of these statements are not correct. In particular since proxies are not used to adjust TSI.

Editor
Reply to  Leif Svalgaard
September 21, 2018 6:46 am

Leif,

The composite record is thus much more accurate than each of the individual series that go into the composite.

This cannot be true.

Reply to  Andy May
September 21, 2018 6:54 am

This cannot be true
Of course it is true. When you identify and correct the systematic errors that the individual series have, the composite will be free of those known errors and thus much more accurate.
Even if you just blindly average the series without correcting for the systematic errors, you still get a composite that is more accurate. This is the standard justification for averaging: the resulting error decreases with the square root of the number of individual data points: https://en.wikipedia.org/wiki/Standard_error
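Both halves of this point can be illustrated with a toy simulation: blind averaging shrinks the random noise by roughly the square root of the number of series, but only correcting the known systematic offsets removes the residual bias. All numbers below are invented for illustration, not real satellite data.

```python
import random, statistics

random.seed(42)
TRUE_TSI = 1361.0   # assumed "true" value for this toy example, W/m^2

def instrument_series(bias, noise, n=1000):
    """One satellite's daily readings: a fixed systematic bias plus random noise."""
    return [TRUE_TSI + bias + random.gauss(0, noise) for _ in range(n)]

# Three toy instruments with different systematic offsets
biases = (+0.8, -0.3, +0.1)
series = [instrument_series(b, noise=0.5) for b in biases]

# Blind average: random noise shrinks ~1/sqrt(3), but the mean bias remains
blind = [sum(vals) / 3 for vals in zip(*series)]
print(statistics.mean(blind) - TRUE_TSI)      # ~ +0.2: the average bias survives

# Correcting the known systematic offsets first removes that residual error
corrected = [[v - b for v in s] for s, b in zip(series, biases)]
composite = [sum(vals) / 3 for vals in zip(*corrected)]
print(statistics.mean(composite) - TRUE_TSI)  # ~ 0: only shrunken noise left
```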

Reply to  Andy May
September 27, 2018 6:21 am

You clearly have no idea about this. The corrections are possible because the sensors overlap in time. There are no disagreements about this exceeding about half a Watt, which then becomes the precision of the composite.
Claus Froehlich has a good discussion of this:
https://www.leif.org/research/TSI-Uncertainties-Froehlich.pdf
“Figure 6b supports the idea that a reliable composite for the three and half solar cycles can indeed be constructed by using the corrected series instead of the original ones.”
The uncertainty

Reply to  Andy May
September 27, 2018 6:33 am

A more technical paper on this:
https://arxiv.org/pdf/1702.02341.pdf
Their Figure three shows that the uncertainty is less than 0.5 W/m2 and less than 0.2 W/m2 after 1995.

Reply to  Andy May
September 27, 2018 1:45 pm

Figure three:


Editor
Reply to  Leif Svalgaard
September 21, 2018 6:57 am

Leif,

Proxies are far too inaccurate to be used to adjust TSI. Mostly, they are calibrated against TSI so circular logic arises.
Both of these statements are not correct. In particular since proxies are not used to adjust TSI.

The statement is quite true. PMOD is often supported using trends in proxies like SSN.

In any case, we have no idea what the accuracy of the TSI composites is; they have no computed error bars, due to the way they were constructed. Likewise, we have no idea what the accuracy of the proxies is, including the SSN proxies. The NOAA attempt to compute uncertainty in TSI was simply a comparison of various composites plus the addition of the instrument error; this is simply an educated guess. And, in any case, the NOAA accuracy is inadequate for our purpose, as the post explains.

Reply to  Andy May
September 21, 2018 7:16 am

The statement is quite true. PMOD is often supported using trends in proxies like SSN.
Which is not the same as PMOD being calibrated against the proxies.
The construction of PMOD is compared with proxies but not forced to match them.

Geoff Sherrington
September 21, 2018 3:39 am

Leif,
Thank you for your comments. I think you are being overly iconoclastic.
What defence can you mount to my assertion that it cannot be shown that any of the satellite TSI measurements are adequately accurate?
In this subset field of climate science, we have several examples whereby each new instrument was promised to be better than the last, until the next instrument produced its result. Paper after paper has been written on data from earlier instruments since fallen into disfavour. Few have been retracted or even seen a correction in print.
The Argo floats were said to show previous designs inferior for ocean properties. The sampling of the atmosphere by balloons, rockets and oxygen microwave gear on satellites shows large differences, many still unreconciled. Early ocean pH is disregarded, as are early measurements of CO2 in air. This type of scientific progression is unsurprising and expected.
What is eminently disputable is the treatment of error, especially accuracy, sometimes misnamed bias or drift or other wrong terms. You have given no argument to refute my assertion that the classic estimate of TSI should start at about 1366 +/- 6 W/m2 and possibly improve cautiously from there. No way can it be justified as better than +/- 1 W/m2. Just wait until the next new instrument starts reporting.
As to your comment that I made two wrong statements earlier: (a) when I see proxy data for TSI in units of W/m2, I assume calibration against these satellite direct measurements, and (b) I am agreeing with you that proxies are too inaccurate to correct raw TSI measurements from such satellites. Geoff

Reply to  Geoff Sherrington
September 21, 2018 5:03 am

that any of the satellite TSI measurements are adequately accurate?
For the climate debate, the absolute value of TSI is not so important. What matters is the relative variation, e.g. the change over a solar cycle. That change is of the order of 1.5 W/m2. The Sun is rotating, and when there are few sunspots TSI does not change much from rotation to rotation. The values 27 days apart typically vary by less than 0.05 W/m2, which is the upper limit of the 'error' in a single-day average TSI. The differences in the absolute value of TSI between instruments are systematic differences in the construction and design of the sensors. Such differences are now understood and easily corrected for.
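The 27-day check described here is easy to sketch with synthetic data (invented quiet-Sun numbers, not real measurements): compare each daily value with the value one solar rotation later, and the typical difference bounds the single-day 'error' from above.

```python
import random, statistics

random.seed(0)
days = 365
# Synthetic quiet-Sun daily TSI: constant baseline plus small measurement noise
tsi = [1361.0 + random.gauss(0, 0.02) for _ in range(days)]

rotation = 27  # approximate solar rotation period, days
diffs = [abs(tsi[i + rotation] - tsi[i]) for i in range(days - rotation)]

# With few sunspots, successive rotations agree closely; the typical
# 27-day difference is an upper limit on the daily measurement 'error'.
print(f"median 27-day difference: {statistics.median(diffs):.3f} W/m^2")
```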

Editor
Reply to  Leif Svalgaard
September 21, 2018 7:22 am

Leif,

For the climate debate, the absolute value of TSI is not so important. 

Only true if one assumes no long-term variability. The absolute value must be very accurate to measure long-term variability as Kopp has shown.

Reply to  Andy May
September 21, 2018 7:30 am

Only true if one assumes no long-term variability. The absolute value must be very accurate to measure long-term variability as Kopp has shown.
The measured and/or derived values of the sun's magnetic field show no long-term variability, at least over the last 300 years, so it is safe to assume no long-term variability over that time. What is important is not the accuracy of the absolute value over time, but the stability of the relative values. You misread [misinterpret] what Kopp meant.

Editor
Reply to  Geoff Sherrington
September 21, 2018 7:11 am

+1

Reply to  Andy May
September 21, 2018 7:18 am

-100

September 21, 2018 4:07 am

TSI is the most unreliable data that is out there.

Geoff Sherrington
Reply to  Salvatore Del Prete
September 21, 2018 6:31 am

Sal,
It has to be amongst the worst.
The treatment of errors is formalised in publications such as those from the Paris-based Bureau of Weights and Measures. Failure to conform to these standards leads to the present situation of wishful thinking dominating scientific limitations.
Wishful-thinking dominance has now become so widespread in the climate subset of science that many authors regard it as the norm. People get away with it because of the difficulty of eliminating inaccuracies in a system where you cannot duplicate planet Earth for a cross check. History shows that all science progresses through periodic demonstration of cold reality over personal belief. That is a main conclusion also reached by Andy May. I suspect he has known of these error problems for some time. My own blog protests started, IIRC, about year 2007.
Those authors who play fast and loose with the formalism of proper error analysis ought to line up and hand in their badges. There is a long queue to get through. Geoff.

Reply to  Geoff Sherrington
September 21, 2018 8:03 am

So correct Geoff.

Editor
Reply to  Geoff Sherrington
September 26, 2018 8:07 am

+1

Pamela Gray
September 21, 2018 8:34 am

Once again the first encountered pathology is swept away for something considered to be more refined, elegant, sexy, newsworthy, and attractive to low hanging fruit. Both sides of the debate, AGW due to increasing CO2 from human sources, and minute solar changes, make the same mistake. Earth, by far, is the intrinsically variable celestial Queen and King body of change, fully equipped to throw weather patterns created from larger, variable, semi permanent oceanic and atmospheric systems out of one phase and into another. Further, these semipermanent systems themselves respond and change as a result of small and large changes to land masses as they bridge, unbridge, move, collide, and break apart into new shapes and global positions, forcing new regimes in atmospheric and oceanic weather forcing systems.

I will always slap my forehead at this nonsensical river of studies that propose the tiniest speck of an agent to be capable of sweeping the elephant out of the way so we can marvel at a tiny phenomenon apparently imbued with very powerful magic STUFF!

Pamela Gray
September 21, 2018 8:46 am

Question. In this solar thesis, what did you do to rule out the first encountered pathology in order to focus our attention on such a tiny agent of change? It is a serious error if you did not apply due diligence to this critical first step in scientific inquiry methods following an observed phenomenon. I propose this question to the proponents of the AGW thesis as well.

Reply to  Pamela Gray
September 21, 2018 9:06 am

I will answer in a short concise way.

You have your opinion and those of us who believe in solar have ours, and nobody is going to change their minds.

Pamela Gray
Reply to  Salvatore Del Prete
September 23, 2018 8:22 am

Then that would be a no? You chose not to assess whether or not an intrinsically highly and powerfully variable planet can force weather into and out of short and long term trends without needing tiny variations in solar output to provide the initial shove or keep it in a weather regime over a long period of time? If that is the case, I have no choice as a critical reader to dismiss your thesis. As I do the anthropogenic CO2 thesis.

ResourceGuy
September 21, 2018 3:11 pm

Might as well go all the way and ask how constant are ocean cycles.

Ron Obvious
September 21, 2018 5:24 pm

So I decided to do some research to see if the orbit of the Earth, which is influenced by the orbits of other bodies in our solar system, could be correlated in any way, shape, or form with global warming.

If you go to the JPL Horizons website you can access their data via website, telnet, and e-mail. I pulled the Earth-Sun distance, by year, from their earliest available time (9998-Mar-20 BC) to now, to see how much it varied. Their data show the following.

1) The closest the Earth has been to the sun is 1.015517355402501E+00 AU

2) The furthest the Earth has been from the sun is 1.019810319506244E+00 AU

3) The total range of variation is 0.00429296410374 AU, or 642,218.28891 kilometers

What I am seeing is that the Earth has been 1.0155 AU from the sun (the closest distance ever) in 1799, 1886, 1894, 1905, 1913, 1924, 1932, 1943, 1989, 2000, 2008, and 2011.

Oh, the last year the Earth was 1.0195 AU away was 7695 BC. Earth's orbit is decaying.

1.019634509765000 AU in 9000 BC
1.019411302107980 AU in 8000 BC
1.019314650436360 AU in 7000 BC
1.019299272269080 AU in 6000 BC
1.018837588321660 AU in 5000 BC
1.018891170041360 AU in 4000 BC
1.018182332875160 AU in 3000 BC
1.018579181810570 AU in 2000 BC
1.017283219501850 AU in 1000 BC
1.018192842953410 AU in 0 AD
1.016293679414170 AU in 1000 AD
1.015550417509830 AU in 2000 AD

See a trend?

https://ssd.jpl.nasa.gov/horizons.cgi
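The "trend" in the list above can be quantified with a quick least-squares fit. This is only a sketch over the values as posted; note that a shrinking aphelion distance is exactly what a slowly falling eccentricity produces, even with no change in the mean Earth-Sun distance.

```python
# Aphelion (AD) values as listed above, 9000 BC through 2000 AD
years = list(range(-9000, 3000, 1000))
ad = [1.019634509765000, 1.019411302107980, 1.019314650436360,
      1.019299272269080, 1.018837588321660, 1.018891170041360,
      1.018182332875160, 1.018579181810570, 1.017283219501850,
      1.018192842953410, 1.016293679414170, 1.015550417509830]

# Ordinary least-squares slope of AD against year
n = len(years)
xbar = sum(years) / n
ybar = sum(ad) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(years, ad))
         / sum((x - xbar) ** 2 for x in years))
print(f"least-squares slope: {slope:.3e} AU/year")  # negative: AD declining
```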

Method to replicate my findings.

1) Input the data as appropriate.

2) Generate the output.

3) Make a backup of the content.

4) Sort the data from closest to furthest:

cat export.txt | grep AD= | cut -d " " -f6 | sort | uniq > out.txt

5) Input your source unsorted data into the spreadsheet of your choice and add the years as appropriate. Hint: you will want to start with the year -9997 and increment until 2018.

6) Look at the data. Perhaps I could have used the barycenter of the solar system instead of the center of the sun, but that just didn’t seem right.

Their documentation can be found here:
https://ssd.jpl.nasa.gov/?horizons_doc

I specifically parsed for AD, the apoapsis distance (reported in AU), which is why you see it in my Linux command above.
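For anyone without the Unix tools, here is a rough Python equivalent of the pipeline, written against the "DATE: AD= value" line format shown in the excerpts in this thread (the real Horizons export is formatted differently, so the parsing would need adjusting to match):

```python
def parse_ad(lines):
    """Extract (date, apoapsis distance in AU) pairs from Horizons-style lines."""
    records = []
    for line in lines:
        if "AD=" in line:
            date, _, value = line.partition("AD=")
            records.append((date.strip().rstrip(":"), float(value.split()[0])))
    return records

sample = [
    "9915-Mar-20: AD= 1.019810319506244E+00",
    "1894-Mar-20: AD= 1.015517355402501E+00",
]
for date, au in sorted(parse_ad(sample), key=lambda r: r[1]):
    print(date, au)   # sorted closest to furthest
```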

Reply to  Ron Obvious
September 21, 2018 7:49 pm

The eccentricity of the Earth’s orbit is changing. The mean distance is not shrinking.
The closest approaches to the center of the Sun have been:
2018 AD Jan 3 0.9833 AU
1001 AD Dec 11 0.9829 AU
1001 BC Nov 21 0.9821 AU
2001 BC Nov 12 0.9818 AU
8001 BC Sep 13 0.9805 AU

The dates before 1582 AD are on the Julian Calendar.

Ron Obvious
Reply to  Leif Svalgaard
September 21, 2018 8:34 pm

I agree that Earth’s orbit is more eccentric. I’m also going to say that our orbit is decaying regardless of mean, median, or mode. Below is a portion of the data from NASA’s JPL Horizon website including the dates with their date format.

9998-Mar-20: AD= 1.019558765236404E+00 <– furthest back I could go.
9915-Mar-20: AD= 1.019810319506244E+00 <–furthest from the sun.
1894-Mar-20: AD= 1.015517355402501E+00 <– Closest to the sun.
2017-Mar-20: AD= 1.016946036524143E+00 <– This is the date when I ran the data.

1.019558765236404E+00 AU - 1.016946036524143E+00 AU = 0.002612728712261 AU of drift, or 390858.65207099862164 km of drift.

How about a graph.

Reply to  Ron Obvious
September 21, 2018 9:00 pm

including the dates with their date format.
Beware that dates before 1582 AD are in the Julian Calendar, which drifts with regard to our Gregorian Calendar. So March 20th before 1582 is not the same as now.
In addition, the distance at closest approach is less than 1 AU.

Ron Obvious
Reply to  Leif Svalgaard
September 21, 2018 9:08 pm

Yup. My data is just over a year old and does not reflect the entire data set.