A Short Summary of Soon, Connolly and Connolly, 2015: “Re-evaluating the role of solar variability on Northern Hemisphere temperature trends since the 19th Century”

Guest essay by Andy May

Soon, Connolly and Connolly (2015) is an excellent paper (paywalled; for the authors’ preprint, go here) that casts some doubt on two critical IPCC AR5 statements, quoted below:

The IPCC, 2013 report page 16:

“Equilibrium climate sensitivity is likely in the range 1.5°C to 4.5°C (high confidence), extremely unlikely less than 1°C (high confidence), and very unlikely greater than 6°C (medium confidence).”

Page 17:

“It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together.”

Soon, Connolly and Connolly (“SCC15”) make a very good case that the ECS (equilibrium climate sensitivity) to a doubling of CO2 is less than 0.44°C. In addition, their estimate of the climate sensitivity to variations in total solar irradiance (TSI) is higher than that estimated by the IPCC. Thus, using their estimates, anthropogenic greenhouse gases are not the dominant driver of climate. Supplementary information from Soon, Connolly and Connolly, including the data they used, supplementary figures and their Python scripts can be downloaded here.

It is clear from all satellite data that the Sun’s output varies with sunspot activity. The sunspot cycle averages 11 years, but varies from 8 to 14 years. As the number of sunspots goes up, total solar output goes up, and the reverse is also true. The satellite measurements agree that the peak-to-trough variation is about 2 Watts/m2. The satellites disagree on the absolute value of total solar irradiance at 1 AU (the average distance from the Earth to the Sun) by as much as 14 Watts/m2; the reason for this disagreement is unclear, but each satellite shows the same trend over a sunspot cycle (see SCC15 Figure 2 below).

Prior to 1979 we are limited to ground-based estimates of TSI, and these are all indirect measurements or “proxies.” These include solar observations (especially the character, size, shape and number of sunspots), the solar cycle length, cosmogenic isotope records from ice cores, and tree-ring C14 estimates, among others. The literature contains many TSI reconstructions from proxies; some are shown below, from SCC15 Figure 8:

Those that show higher solar variability are shown on the left; these were not used by the IPCC for their computation of man’s influence on climate. By choosing the low-variability TSI records on the right, they were able to say the Sun has little influence and that the recent warming was mostly due to man. The IPCC AR5 report, in figure SPM.5 (shown below), suggests that the total anthropogenic radiative forcing (relative to 1750) is 2.29 Watts/m2, while the total due to the Sun is 0.05 Watts/m2.

Thus, the IPCC believes that the radiative forcing from the Sun has been relatively constant since 1750. This is consistent with the low solar variability reconstructions in the right half of SCC15 Figure 8, but not with the reconstructions in the left half.

The authors of IPCC AR5, “The Physical Science Basis,” may genuinely believe that total solar irradiance (TSI) variability has been low since 1750. But this does not excuse their failure to consider other well-supported, peer-reviewed TSI reconstructions that show much more variability. In particular, they should have considered the Hoyt and Schatten, 1993 reconstruction. This reconstruction, as modified by Scafetta and Willson, 2014 (summary here), has stood the test of time quite well.

Surface Temperature

The main dataset used to study surface temperatures worldwide is the Global Historical Climatology Network (GHCN) monthly dataset. It is maintained by the National Oceanic and Atmospheric Administration (NOAA) National Climatic Data Center (NCDC). The data can currently be accessed here. There are many problems with the surface air temperature measurements over long periods of time. Rural stations may become urban, equipment or enclosures may be moved or changed, etc.

Longhurst, 2015, notes on page 77:

“…a major survey of the degree to which these [weather station] instrument housings were correctly placed and maintained in the United States was made by a group of 600-odd followers of the blog Climate Audit; … “[In] the best-sited stations, the diurnal temperature range has no century-scale trend” … the relatively small numbers of well-sited stations showed less long-term warming than the average of all US stations. …the gridded mean of all stations in the two top categories had almost no long term trend (0.032°C/decade during the 20th century) (Fall, et al., 2011).“

The GHCN data is available from the NCDC in raw form and “homogenized.” The NCDC believes that the homogenized dataset has been corrected for station biases, including the urban heat island effect, using statistical methods. Two of the authors of SCC15, Dr. Ronan Connolly and Dr. Michael Connolly, have studied the NOAA/NCDC US and global temperature records. They have computed a maximum possible temperature effect due to urbanization in the NOAA dataset, adjusted for time-of-observation bias, of 0.5°C/century (fully urban minus fully rural stations). Their analysis therefore demonstrates that NOAA’s adjustments still leave an urban bias relative to completely rural weather stations. The US dataset has good rural coverage, with 272 of the 1218 stations (22.3%) being fully rural, so using the US dataset the bias can be approximated.
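The arithmetic of that approximation is simple; here is a minimal sketch (the two series are synthetic placeholders, not the Connollys’ actual data or code):

```python
import numpy as np

# Compare the linear trend of the full network against the fully rural subset;
# the difference approximates the residual urbanization bias. Both series
# below are made-up stand-ins for the real GHCN station averages.
rng = np.random.default_rng(0)
years = np.arange(1900, 2001)
all_stations = 0.007 * (years - 1900) + rng.normal(0, 0.1, years.size)  # degC anomalies
fully_rural = 0.005 * (years - 1900) + rng.normal(0, 0.1, years.size)

trend_all = np.polyfit(years, all_stations, 1)[0] * 100    # degC per century
trend_rural = np.polyfit(years, fully_rural, 1)[0] * 100
print(f"implied urban bias: {trend_all - trend_rural:.2f} degC/century")
```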

The global dataset is more problematic. It contains 173 stations with data for 95 of the past 100 years, but only eight of these are fully rural, and only one of those is in the Southern Hemisphere. Combine this with problems due to changing instruments, personnel, siting bias, instrument moves and changing enclosures, and the accuracy of the global surface temperature record is in question. When we consider that the IPCC AR5 estimate of global warming from 1880 to 2012 is 0.85°C ±0.2°C, it is easy to see why there are doubts about how much warming has actually occurred.

Further, while the GHCN surface temperatures and the satellite-measured lower troposphere temperatures more or less agree in absolute value, they have different trends. This is particularly true of the recent Karl, et al., 2015 “Pause Buster” dataset adopted by NOAA this year. The NOAA NCEI dataset from January 2001 trends upward by 0.09°C/decade, while the satellite lower troposphere datasets (both RSS and UAH) trend downward by 0.02°C to 0.04°C/decade. Both trends are smaller than the margin of error and are, statistically speaking, zero trends. But do they trend differently because of the numerous “corrections” described in SCC15 and Connolly and Connolly, 2014? The trends are so small it is impossible to say for sure, but the large and numerous corrections made by NOAA are very suspicious. Personally, I trust the satellite measurements much more than the surface measurements, but they are shorter, going back only to 1979.
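Readers who want to check such trend claims can do so with a few lines of Python; this sketch uses synthetic monthly anomalies in place of the real NCEI or satellite series:

```python
import numpy as np

# Fit an OLS trend and its standard error; a slope smaller than ~2 standard
# errors is statistically indistinguishable from zero, which is the point
# made above about the post-2001 trends.
rng = np.random.default_rng(1)
months = np.arange(180.0)                                   # 15 years from Jan 2001
anoms = (0.09 / 120.0) * months + rng.normal(0, 0.15, months.size)  # placeholder data

X = np.column_stack([np.ones_like(months), months])
beta, ssr, *_ = np.linalg.lstsq(X, anoms, rcond=None)
sigma2 = ssr[0] / (months.size - 2)
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
print(f"trend: {beta[1] * 120:.3f} +/- {2 * se * 120:.3f} degC/decade (2-sigma)")
```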

The IPCC computation of man’s effect on climate

Bindoff, et al., 2013 built numerous climate models with four components, two natural and two anthropogenic. The two natural components were volcanic cooling and solar variability; the two anthropogenic components were greenhouse warming, due mainly to man-made CO2 and methane, and man-made aerosols, which cool the atmosphere. They used the models to hindcast global temperatures from 1860 to 2012. They found that they could get a strong match with all four components, but when the two anthropogenic components were left out, the CMIP5 multi-model mean hindcast only worked from 1860 to 1950. On the basis of this comparison, the IPCC’s 5th assessment report concluded:

“More than half of the observed increase in global mean surface temperature (GMST) from 1951 to 2010 is very likely due to the observed anthropogenic increase in greenhouse gas (GHG) concentrations.”

The evidence that the IPCC used to draw this conclusion is illustrated in their Figure 10.1, shown in part above. The top graph (a) shows the GHCN temperature record in black and the CMIP5 model ensemble mean in red; this run includes anthropogenic and natural “forcings.” The lower graph (b) uses only natural “forcings,” and it does not match very well from about 1961 until today. If we assume that their models include all or nearly all of the effects on climate, natural and man-made, then their conclusion is reasonable.

While the IPCC’s simple four component model ensemble may have matched the full GHCN record (the red line in the graphs above) well using all stations, urban and rural, it does not do so well versus only the rural stations:

Again, the CMIP5 model is shown in red. Graph (a) is the full model with natural and anthropogenic “forcings,” (b) uses natural forcings only, and (c) uses greenhouse forcings only. None of these model runs matches the rural stations, which are the least likely to be affected by urban heat or urbanization. The reader will recall that Bindoff, et al. chose to use the low solar variability TSI reconstructions shown in the right half of SCC15 Figure 8. For a more complete critique of Bindoff, et al. see here (especially section 3).

The Soon, et al. TSI versus the mostly rural temperature reconstruction

So what if one of the high variability TSI reconstructions, specifically the Hoyt and Schatten, 1993 reconstruction, updated by Scafetta and Willson, 2014, is compared to the rural station temperature record from SCC15?

This is Figure 27 from SCC15. In it, all of the rural records (China, US, Ireland and the Northern Hemisphere composite) are compared to TSI as computed by Scafetta and Willson. The comparison for all of them is very good for the twentieth century. The rural temperature records should be the best records to use, so if TSI matches them well, the influence of anthropogenic CO2 and methane would necessarily be small. The Arctic warming after 2000 seems to be amplified a bit; this may be the phenomenon referred to as “polar amplification.”

Discussion of the new model and the computation of ECS

A least-squares fit between the TSI in Figure 27 and the rural temperature record suggests that a change of 1 Watt/m2 should cause the Northern Hemisphere air temperature to change by 0.211°C (the slope of the line). Perhaps not so coincidentally, we reach a value of 0.209°C if we assume that the Sun is the dominant source of heat. That is, if the average temperature of the Earth’s atmosphere is 288K, and without the Sun it would be 4K, the difference due to the Sun is 284K. Combining this with an average TSI of 1361 Watts/m2: 1/1361 is 0.0735%, and 0.0735% of 284K is 0.209°C per Watt/m2. Pretty cool, but this does not prove that TSI dominates climate. It does, however, suggest that Bindoff et al. 2013 might have selected the wrong TSI reconstruction and, perhaps, the wrong temperature record. To me, the TSI reconstruction used in SCC15 and the rural temperature records they used are just as valid as those used by Bindoff et al. 2013. This means that the IPCC and Bindoff’s assertion that anthropogenic greenhouse gases caused more than half of the warming from 1951 to 2010 is questionable. The solar alternative proposed in SCC15 is just as plausible.
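The back-of-envelope number is easy to verify (a few lines of Python; my check, not SCC15’s script):

```python
# Verify the ~0.21 degC per W/m2 figure derived above.
T_mean = 288.0       # K, approximate mean temperature of the Earth's atmosphere
T_no_sun = 4.0       # K, assumed temperature without the Sun
TSI = 1361.0         # W/m2, average total solar irradiance at 1 AU

sensitivity = (T_mean - T_no_sun) / TSI     # K per W/m2
print(f"{sensitivity:.4f} degC per W/m2")   # ~0.2087, close to the 0.211 regression slope
```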

The SCC15 model seems to work, given the data available, so we should be able to use it to compute an estimate of the anthropogenic greenhouse warming (AGW) component. If we subtract the TSI-based model from the rural temperature reconstruction described above and evaluate the residuals (assuming they are the anthropogenic contribution to warming), we arrive at a maximum anthropogenic impact (ECS, or equilibrium climate sensitivity) of 0.44°C for a doubling of CO2. This is substantially less than the 1.5°C to 4.5°C predicted by the IPCC. Bindoff, et al., 2013 also state that it is extremely unlikely (emphasis theirs) to be less than 1°C. I think, at minimum, this paper demonstrates that the “extremely unlikely” part of that statement is problematic. SCC15’s estimate of 0.44°C is similar to the 0.4°C estimate derived by Idso, 1998. There are numerous recent papers that compute ECS values at the very low end of the IPCC range and even lower; fourteen of these papers are listed here. They include the recent landmark paper by Lewis and Curry, and the classic Lindzen and Choi, 2011.
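In outline, the residual calculation looks something like the sketch below (placeholder arrays; SCC15’s actual Python scripts are in their supplementary information):

```python
import numpy as np

# Residual-based ECS estimate: whatever the TSI model fails to explain is
# treated as the (maximum) anthropogenic contribution, regressed against
# CO2 doublings. All three input series here are synthetic placeholders.
rng = np.random.default_rng(2)
years = np.arange(1881, 2013)
co2 = np.linspace(291.0, 394.0, years.size)      # ppm, rough historical range
temp_obs = rng.normal(0.0, 0.1, years.size)      # placeholder rural anomalies
temp_tsi_model = np.zeros(years.size)            # placeholder TSI-based fit

residuals = temp_obs - temp_tsi_model            # assumed anthropogenic part
doublings = np.log2(co2 / co2[0])                # CO2 forcing is ~logarithmic
ecs = np.polyfit(doublings, residuals, 1)[0]     # degC per CO2 doubling
print(f"implied maximum ECS: {ecs:.2f} degC")    # SCC15 report 0.44 degC with the real data
```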

Is CO2 dominant or TSI?

SCC15 then does an interesting thought experiment. What if CO2 is the dominant driver of warming? Let’s assume that and compute the ECS. When they do this, they extract an ECS of 2.52°C, which is in the lower half of the range given by the IPCC of 1.5°C to 4.5°C. However, using this methodology results in model-data residuals that still contain a lot of “structure” or information. In other words, this model does not explain the data. Compare the two residual plots below:

The top plot shows the residuals from the model that assumes anthropogenic CO2 is the dominant factor in temperature change. The bottom plot shows the residuals from comparing TSI (and nothing else) to temperature change. A considerable amount of “structure,” or information, is left in the top plot, suggesting that the model has explained very little of the variability. The second plot has a little structure left, and some of this may be due to CO2, but the effect is very small. This is compelling qualitative evidence that TSI is the dominant influence on temperature and that CO2 has a small influence.

The CAGW (catastrophic anthropogenic global warming) advocates are quite adept at shifting the burden of proof to the skeptical community. The hypothesis that man is causing most of the warming from 1951 to 2010 is the supposition that needs to be proven. The traditional and established assumption is that climate change is natural. These hypotheses are plotted below.

This is Figure 31 from SCC15. The top plot (a) shows the Northern Hemisphere temperature reconstruction (in blue) from SCC15 compared to the atmospheric CO2 concentration (in red); this fit is very poor. The second (b) fits the CO2 concentration to the temperature record and then the residuals to TSI; the fit here is also poor. The third plot (c) fits the temperatures to TSI only, and the fit is much better. Finally, the fourth plot (d) fits the TSI to the temperature record and the residuals to CO2, and the fit is the best.
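Structurally, models (c) and (d) are nested least-squares fits, roughly like this sketch (placeholder series; the correlation coefficients quoted below are SCC15’s, not this code’s):

```python
import numpy as np

# Model 3: regress temperature on TSI alone.
# Model 4: regress temperature on TSI, then regress the residuals on CO2.
rng = np.random.default_rng(3)
n = 132
tsi = rng.normal(1361.0, 0.5, n)                          # placeholder TSI, W/m2
co2 = np.linspace(291.0, 394.0, n)                        # placeholder CO2, ppm
temp = 0.21 * (tsi - tsi.mean()) + rng.normal(0, 0.1, n)  # synthetic anomalies

fit3 = np.polyval(np.polyfit(tsi, temp, 1), tsi)                 # model 3 prediction
fit4 = fit3 + np.polyval(np.polyfit(co2, temp - fit3, 1), co2)   # model 4 prediction

r3 = np.corrcoef(fit3, temp)[0, 1]
r4 = np.corrcoef(fit4, temp)[0, 1]
print(f"model 3 r = {r3:.2f}, model 4 r = {r4:.2f}")      # SCC15: ~0.48 vs ~0.50
```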

Following is the discussion of the plots from SCC15:

“This suggests that, since at least 1881, Northern Hemisphere temperature trends have been primarily influenced by changes in Total Solar Irradiance, as opposed to atmospheric carbon dioxide concentrations. Note, however, that this result does not rule out a secondary contribution from atmospheric carbon dioxide. Indeed, the correlation coefficient for Model 4[d] is slightly better than Model 3[c] (i.e., ~0.50 vs. ~0.48). However, as we discussed above, this model (Model 4[d]) suggests that changes in atmospheric carbon dioxide are responsible for a warming of at most ~0.12°C [out of a total of 0.85°C] over the 1880-2012 period, i.e., it has so far only had at most a modest influence on Northern Hemisphere temperature trends.”

This is the last paragraph of SCC15:

“When we compared our new [surface temperature] composite to one of the high solar variability reconstructions of Total Solar Irradiance which was not considered by the CMIP5 hindcasts (i.e., the Hoyt & Schatten reconstruction), we found a remarkably close fit. If the Hoyt & Schatten reconstruction and our new Northern Hemisphere temperature trend estimates are accurate, then it seems that most of the temperature trends since at least 1881 can be explained in terms of solar variability, with atmospheric greenhouse gas concentrations providing at most a minor contribution. This contradicts the claim by the latest Intergovernmental Panel on Climate Change (IPCC) reports that most of the temperature trends since the 1950s are due to changes in atmospheric greenhouse gas concentrations (Bindoff et al., 2013).”

Conclusions

So, SCC15 suggests that the maximum ECS for a doubling of CO2 is 0.44°C. They also suggest that, of the 0.85°C warming since the late 19th century, only 0.12°C is due to anthropogenic effects, at least in the Northern Hemisphere, where we have the best data. This is also a maximum anthropogenic effect, since we are ignoring many other factors such as varying albedo (cloud cover, ice cover, etc.) and ocean heat transport cycles.

While the correlation between SCC15’s new temperature reconstruction and the Hoyt and Schatten TSI reconstruction is very good, the exact mechanism by which TSI variations affect the Earth’s climate is not known. SCC15 discusses two options: one is the ocean circulation of heat, and the other is the transport of heat between the troposphere and the stratosphere. Probably both mechanisms play some role in our changing climate.

The Hoyt and Schatten TSI reconstruction was developed over 20 years ago and it still seems to work, which is something that cannot be said of any IPCC climate model.

Constructing a surface temperature record is very difficult because this is where the atmosphere, land and oceans meet. It is a space that usually has the highest temperature gradients in the whole system, for example the ocean/atmosphere “skin” effect. Do you measure the “surface” temperature on the ground? One meter above the ground? One inch above the ocean water, at the ocean surface or one inch below the ocean surface in the warm layer? All of these temperatures will always be significantly different at the scales we are talking about, a few tenths of a degree C. The surface of the Earth is never in temperature equilibrium.

From Essex, et al., 2007:

“While… [global mean surface temperature] is nothing more than an average over temperatures, it is regarded as the temperature, as if an average over temperatures is actually a temperature itself, and as if the out-of-equilibrium climate system has only one temperature. But an average of temperature data sampled from a non-equilibrium field is not a temperature. Moreover, it hardly needs stating that the Earth does not have just one temperature. It is not in global thermodynamic equilibrium — neither within itself nor with its surroundings.”

From Longhurst, 2015:

“One fundamental flaw in the use of this number is the assumption that small changes in surface air temperature must represent the accumulation or loss of heat by the planet because of the presence of greenhouse gases in the atmosphere and, with some reservations, this is a reasonable assumption on land. But at sea, and so over >70% of the Earth’s surface, change in the temperature of the air a few metres above the surface may reflect nothing more than changing vertical motion in the ocean in response to changing wind stress on the surface; consequently, changes in sea surface temperature (and in air a few metres above) do not necessarily represent significant changes in global heat content although this is the assumption customarily made.”

However, satellite records only go back to 1979 and global surface temperature measurements go back to 1880, or even earlier. The period from 1979 to today is too short to draw any meaningful conclusions given the length of both the solar cycles and the ocean heat distribution cycles. Even the period from 1880 to today is pretty short. We do not have the data needed to draw any firm conclusions. The science is definitely not settled.

167 Comments
Geckko
October 8, 2015 8:52 am

Those confidence statements, taken together, imply that the IPCC thinks it more likely that the climate sensitivity is greater than 6 degrees than that it is less than 1 degree.
The skew that implies is just ludicrous.

RWturner
Reply to  Geckko
October 8, 2015 2:01 pm

And furthermore, there has only been 0.8 degrees of warming observed with approximately a 42% increase in CO2. That would mean that the remaining 0.7-3.7 degrees (high confidence) or 5.2 degrees (medium confidence) of warming would need to occur as a result of the remaining 58% increase in CO2. This, of course, is exactly the opposite of where most of the forcing from greenhouse gases takes place, since there are strongly diminishing returns. Their “science” is so bad it’s criminal.
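As a rough check of that arithmetic, assuming the standard logarithmic relation between CO2 concentration and temperature response (sensitivity S per doubling):

$$
\Delta T = S \log_2\frac{C}{C_0}
\quad\Rightarrow\quad
S \approx \frac{0.8\,^{\circ}\mathrm{C}}{\log_2 1.42} \approx \frac{0.8}{0.506} \approx 1.6\,^{\circ}\mathrm{C}\ \text{per doubling}
$$

In other words, if all of the observed 0.8 degrees were attributed to the 42% CO2 increase, the implied sensitivity would sit near the bottom of the IPCC’s 1.5°C to 4.5°C range, not near the top.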

Salvatore Del Prete
October 8, 2015 8:58 am

Time will tell. I am in a wait and see mode.

francis Mangels
Reply to  Salvatore Del Prete
October 8, 2015 11:09 pm

In the meantime, I notice the record heat temperatures by the NWS; the melting glaciers from when I was a Ranger in GNP, not to mention worldwide; NASA reports that there is more particulate in the stratosphere than the troposphere; the lack of significant snow for 3 years; the Union of Concerned Scientists’ opinion and theory; the lab confirmation at Berkeley, CA; the jet aerosol spraying proven to take place at geoengineeringwatch.org; and my own data. Regardless of Willie Soon’s work, somebody needs to explain the obvious signs and data of warming! Just hollering it ain’t so in Chinese does not change the data that proves something bad is going on…
And the military spends $25 billion of your taxes to spray for SOME reason….

latecommer2014
Reply to  francis Mangels
October 9, 2015 8:47 am

Francis if you do a short study of the advance and retreat of glaciers over geological times some of your fears may be controlled. Using your lifetime to determine climate trends is always risky since cycles are often measured in hundreds or thousands of years.

rishrac
Reply to  francis Mangels
October 15, 2015 4:11 am

The obvious signs and data of warming…. OK, I’ll explain it to you. It isn’t the relationship between CO2 and temperature; the math is simply wrong. Cherry-picking dates or not, the fact is that current temps are below the lowest levels projected by the IPCC. Since they’ve cut off debate about past cooling and warming, like the MWP or the LIA, my concern isn’t warming, it’s cooling. If AGW works the way it’s supposed to, then real temps are in fact falling. As stated above, there has been a 0.8 degree increase, half of which the IPCC has attributed to natural forces, which leaves the math even more wrong. If AGW works the way the IPCC says it does, we are in deep doo-doo, because temps are falling, and fast. If the temps were supposed to go up by even 1 degree, and you only have a 0.4 increase, another drop of 0.6 would put us in negative real numbers. In view of the amount of CO2 that we’ve put in the atmosphere, in this case, the drop is significant.
So far the only graph that is pretty close is the harmonic one, and if warming occurs next year due to El Nino, then it will be exactly on track, or at least within the ranges. BTW, it too has the highest level of the wave below the IPCC’s lowest projection.

Catcracking
Reply to  Salvatore Del Prete
October 9, 2015 10:41 am

Wait and see?
How old are you? When do you expect a doubling to occur?
I don’t expect to be here and can’t live with this nonsense.

tomwys1
October 8, 2015 9:04 am

It is likely that ECS turns out to be inconsequential, as the major “terror rationale” is Sea-Level Rise that “threatens” coastal regions throughout the world.
Trouble is that a methodical slight unwavering rise due mostly to thermal expansion has not accelerated AT ALL as measured by tide gauges in tectonically inert (neither rising nor subsiding) areas around the world.
Why is this lack of acceleration significant? Because CO2 has risen 38% over the past 130 years and NO acceleration is visible. If a 38% increase cannot affect Sea-Level acceleration, then decreasing CO2 won’t either!!!
Spend the $billions$ elsewhere, electrify the world, and light up humankind.
Thank Soon, Connolly & Connolly for turning on the light switch!!!

lsvalgaard
October 8, 2015 9:10 am

If the Hoyt & Schatten reconstruction and our new Northern Hemisphere temperature trend estimates are accurate
This is the money quote. The Hoyt & Schatten TSI reconstruction is NOT accurate, so any conclusions drawn from assuming that it is are suspect.

ristvan
Reply to  lsvalgaard
October 8, 2015 9:36 am

Any conclusion that ECS is less than 1.2 is also suspect, because the dominant water vapor feedback is positive. The question is only by how much. All the recent observational energy budget studies suggest something between 1.45 and 1.65, NOT 0.44.

A C Osborn
Reply to  ristvan
October 8, 2015 9:54 am

” because the dominant water vapor feedback is positive.”
A very unscientific statement, you obviously have not been following Willis’ Thunderstorm Posts.
All forms of water are a Stabalising influence not a “Positive Feedback”.

A C Osborn
Reply to  ristvan
October 8, 2015 9:55 am

Stabilising even.

TonyL
Reply to  ristvan
October 8, 2015 10:49 am

Careful – Thin Ice Alert
Water vapor has the same spectroscopic problem as CO2: the absorption bands are saturated, and to a far greater extent. To have a temperature feedback effect, you need to increase the lower-atmosphere Absolute Humidity (RH does not cut it), and that does not seem to be happening. The TAO buoy array in the Pacific has some great data, which Willis did a great workup on.
Clouds – Say no more.
At the end of it all, water will be shown not to participate in any “forcings” or “feedbacks”. I think there is absolutely no evidence that water vapor is driven by CO2 in any way.

Reply to  ristvan
October 8, 2015 3:04 pm

ristvan writes

because the dominant water vapor feedback is positive. The question is only by how much.

So the IPCC and AGW enthusiasts say. But what is this claim really based on?

Michael Palmer
Reply to  lsvalgaard
October 8, 2015 9:48 am

Which reconstruction do you consider most accurate?

billw1984
Reply to  Michael Palmer
October 8, 2015 3:11 pm

His own!

HAS
Reply to  lsvalgaard
October 9, 2015 5:46 pm

“The Hoyt & Schatten TSI reconstruction is NOT accurate”
Out of curiosity, where does it fail, and which (if any) is your preferred one? Does that one show a similarly better relationship with the temp series than CO2 does?

Pamela Gray
Reply to  HAS
October 10, 2015 9:28 am

As far as I can see from the paper, Hoyt and Schatten’s go-to graph is a statistically manipulated temperature data set smoothed with an 11 year running average, compared to a model of TSI changes likely also manipulated with an 11 year running average or bin equation.
In my opinion, this graph very likely demonstrates auto-correlation between two data sets, one being an 11-year modeled TSI output, not observations, and the other being 11-year smoothed temperature data to match an 11-year solar cycle. In other words, they appear to have been manipulated with obviously similar re-calculations forcing a match, which renders the graph useless. Except as a warning to others about autocorrelation.
Things brings me to one of my long-held beliefs, having done research and statistical analysis. We should not let folks near research unless we have independent statistical audits as part of peer review. My work had such an audit.
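The smoothing point is easy to demonstrate: an 11-year running mean applied to pure white noise yields a strongly autocorrelated series, so two independently smoothed series can look deceptively similar. A quick sketch, not tied to any real dataset:

```python
import numpy as np

# An 11-point running mean applied to white noise yields output with lag-1
# autocorrelation near 10/11 ~ 0.91.
rng = np.random.default_rng(4)
noise = rng.normal(size=2000)
smoothed = np.convolve(noise, np.ones(11) / 11, mode="valid")

def lag1(x):
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

print(f"raw noise lag-1 autocorrelation: {lag1(noise):.2f}")     # ~0.0
print(f"smoothed lag-1 autocorrelation:  {lag1(smoothed):.2f}")  # ~0.9
```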

Pamela Gray
Reply to  HAS
October 10, 2015 9:33 am

Damn. Head thinks of one word, fingers type another. Replace “Things” with “This”.

James
October 8, 2015 9:23 am

“This is also a maximum anthropogenic effect since we are ignoring many other factors such as varying albedo (cloud cover, ice cover, etc.) and ocean heat transport cycles.”
This is not true. If the anthropogenic effect is identified as a residual, then unmodeled factors can have a positive or negative effect on the residual.

R2Dtoo
Reply to  James
October 8, 2015 5:25 pm

Which makes the science totally “unsettled”.

kokoda
October 8, 2015 9:23 am

{The authors of IPCC AR5, “The Physical Science Basis” may genuinely believe that total solar irradiance (TSI) variability is low since 1750. But, this does not excuse them from considering other well supported, peer reviewed TSI reconstructions that show much more variability. In particular they should have considered the Hoyt and Schatten, 1993 reconstruction. This reconstruction, as modified by Scafetta and Willson, 2014 (summary here), has stood the test of time quite well.}
Why should the IPCC, a government-sponsored agency, consider any science that doesn’t support their agenda? Some of the commenters and authors seem to think that just because a human is a ‘scientist’, that human is pure, ethical, moral, honest, etc., and ALWAYS performs in the expected manner of a holy ‘scientist’. IMO, you have no conception of human behavior, regardless of profession.

Gerry, England
Reply to  kokoda
October 8, 2015 3:56 pm

The IPCC remit is to prove that human activity causes global warming so will deliberately ignore any contrary information.

rgbatduke
October 8, 2015 9:35 am

Yeah, this drives me nuts too. We can’t build a light meter accurate to within one lousy percent? And we call ourselves “scientists”… Pshaw.
The same problem exists for CERES EBAF. It’s like, “why bother”. We can’t even measure the radiative balance, or lack thereof, at the top of the atmosphere. What prayer do we have, then, of actually being able to answer any of the questions above?
There is this general “consensus” among climate scientists that if anomalies were good enough for Grampaw, they are good enough for me. Sadly, all of the physics involved in detailed balance of an open system deal with absolute quantities including absolute temperature, not anomalies in degrees centigrade!
And not even NASA GISS is willing to claim that we can “measure” (model compute from poorly sampled, sparse data) the actual global average temperature, today, to within one whole degree absolute. The error on absolute temperature is greater than the entire cumulated anomaly from 1850, although the acknowledged range of the error there by HadCRUT4 is 0.3 + 0.1 = 0.4 C, roughly half of the entire supposed increase.
But it really makes me sad that even satellites sent up in the last fifteen or twenty years have such lousy instrumentation and a lack of recalibration in situ that we cannot resolve TSI with CERES within 2% or upwelling LWIR within 1%, and hence cannot come close to answering the very first question we should ever have asked when considering global warming — is the incoming solar energy out of balance with the outgoing solar energy?
This is not a stupid question. It takes decades for a tiny imbalance to have an effect on temperature. The models (IIRC) are asserting an imbalance of around 0.5 W/m^2 at the TOA to produce the slow general warming observed if one ignores hiatuses and the like over a single stretch of 20 years out of the last 75. Surely we should be able to easily resolve an imbalance of this scale, but we cannot even get close, off by a double overlapping order of magnitude.
It’s enough to make one throw up one’s hand in disgust, and it is the reason that I cannot quite consider the article above a slam dunk. Out of necessity, it is dealing with correlations of anomalies, not correlations of absolutes. I could wax poetic about this, but I don’t have time and besides, no one listens.
I’ll conclude with some perspective. The entire global anomaly from 1850 to today is smaller than the difference in measured absolute average temperatures observed at specific “official” sites located well within a 100 km radius of my seat at this moment. It is literally indistinguishable from spatial noise averaged over the entire interval. Clearly, this is an ongoing catastrophe!
I sometimes despair of my species.
rgb

Bob Ryan
Reply to  rgbatduke
October 8, 2015 9:54 am

Don’t despair. I listen, Robert, and I agree – 100%.

Reply to  rgbatduke
October 8, 2015 10:58 am

Yes, of course it would be nice to have high-precision measurements of incoming and outgoing irradiation and be able to measure the greenhouse effect unambiguously. However, I do think that the limited precision data have been put to good use by Spencer and by Lindzen, who have correlated fluctuations in the radiation budget with changes of bottom of the atmosphere temperatures to obtain (low) estimates of climate sensitivity. Do you see anything wrong with their work?

vukcevic
Reply to  rgbatduke
October 8, 2015 12:11 pm

I could be considered a specimen of the species that Dr. Brown often despairs of.
Having that in mind, I compare three variables here
http://www.vukcevic.talktalk.net/CSAT.gif
It can be seen that there is a partial correlation between all three.
It is said that solar activity doesn’t change enough to explain the rise in temperature, but the graph suggests the opposite, despite the correlation being ‘transitory’.
It is almost certain that the CET cannot drive the far North Atlantic tectonics, and yet there is partial correlation there too. However, the opposite is not only possible but even likely.
But to the dismay of those who are a bit more advanced specimens, there is also partial correlation between solar activity and the tectonics; even worse
http://www.vukcevic.talktalk.net/Arctic.gif
Since we know that Arctic atmospheric pressure is the north component of the NAO, which is considered to control the polar jet stream’s meandering, a less advanced specimen of the species may conclude:
Solar periodic activity and tectonics are related through common parentage (but as usual, two siblings are not identical), tectonics drives change in the Arctic atm. pressure, and the jet stream governs the temperature changes in the N. Hemisphere.
I’m done for now.

richard verney
Reply to  vukcevic
October 9, 2015 12:46 am

I have a problem with partial correlation.
Either something correlates or it does not. It is an all-or-nothing matter; if something only partially correlates, it is akin to being partially pregnant. Something that only partially correlates does not correlate – period.
If something does not correlate, but it is claimed that there is partial correlation, I prefer using the expression ‘there are some similarities’ or some such equivocal expression.
Many people, at least on a subconscious level, equate correlation with causation. Of course, correlation is not causation, but because of that subconscious connection, using expressions like ‘partial correlation’ tends to overweight the conclusion to be drawn from the data presentation.
PS. I always like reading your comments. They always give me something to think about, even if, at times, they may be no more than curve fitting, or coincidence.

Reply to  vukcevic
October 9, 2015 8:39 am

The earth’s magnetic field exerts tectonic and meteorological influences that are independent of the thermal energy imparted by electromagnetic radiation and cannot be measured in watts per square meter.
Leif will be happy to agree that the sun influences, and possibly even creates, the earth’s magnetic field.

vukcevic
Reply to  vukcevic
October 9, 2015 8:54 am

Hi Mr Verney
You need to read Dr. Brown (rgb@duke) more thoroughly.
One of his important statements is:
Earth climate change is a multi-variant process.
Pregnancy tends to have (perhaps except in vitro) an invariant and unique cause, a one-off shot, so to say. As a WUWT experienced and valued commentator, you are well aware that the CAGW crowd has a ‘one shot idea’; only the persistence of the current pause is forcing them to think of natural variability.
Natural variability is one part of rgb’s multivariance, with other knowns or unknowns.
Nature has its many cycles, solar, lunar-tidal, and possibly some related to internal geo-dynamics, etc. Some go up, others go down, and so on, ad infinitum.
Climate (temperatures) may respond to the whole bunch of them with different degrees of phase and intensity change.
In my graph’s partial correlations, that you appear to have a problem with, only two external variables are considered, solar and tectonic activity. If I had continuous and not partial correlations, I not only would be greatly surprised but would reject them as ‘no good’.
I happen to like results with partial correlations (you are in good company; Dr. Svalgaard always instantly rejects any as ‘nonsense’) because that is how multi-variant causes drive the natural world.
Despite your ‘problem’ with it, I will carry on as before; equally, I will regularly read your comments, since most of the time they are a valuable contribution, as are those from Dr. Svalgaard and Dr. Brown, which, more often than not, tell me that I am totally wrong.
All the best. m.v.

richard verney
Reply to  vukcevic
October 9, 2015 11:04 am

Thanks for your comment.
I am well aware that there is variability due to natural processes, and that there is interaction between these natural processes which can at times cancel each other out (in whole or in part) or amplify/add to the effect (in whole or in part). But this does not alter my point.
When anyone proffers a theory based upon the actions of X, claims that X does Y, and then produces data with a plot showing the relationship between X and Y, then as soon as X does not do Y in that plot, an explanation is needed as to why, during the period of discrepancy, X has not done Y. There may be a valid explanation consistent with the underlying proposition that X does Y, but if no identifiable reason (backed up with appropriate data) is put forward, then the claim that X does Y is undermined.
In global warming the claim is that CO2 is a ghg, such that an increase in CO2 will cause temperatures to rise. CO2 is not sometimes a ghg and at other times not a ghg; it will not sometimes warm the atmosphere and at other times cause no warming of the atmosphere but instead warm the oceans. Whatever the properties of CO2 are, they are and always will be, until the effects of CO2 are saturated.
So each year that CO2 increases there must be a corresponding increase in temperature, unless some other process is at work nullifying the effect of the increase in CO2. This means that each year when CO2 increases and there is not a corresponding increase in temperature consistent with whatever climate sensitivity is claimed for CO2, an explanation is required as to why the theoretically predicted increase in temperature did not occur. This may be that there was a volcano, or that it was a particularly cloudy year, or a particularly snowy year, or the snow melted later than usual, or that it was a La Nina year, or that oceanic cycles led to cooling, or delayed oceanic heat sinks, or whatever explains why there was no increase in temperature that particular year or period. It may be that no explanation can be offered and the proponent is left to fall back on ‘it was natural variation that cancelled out the warming, or even exceeded the warming from CO2, leading to cooling’. But natural variation is the last refuge of the theorist, since it merely confirms that we do not know what is going on or why.
I do not like partial correlation because it raises questions; if there is partial correlation, then an explanation is required for every period where there is no correlation, and this explanation must be consistent with the underlying theory being proposed. That explanation may open a can of worms, since if it holds good for the periods where there is no primary (first order) correlation, what effect does that explanation have on periods where there is correlation? It may be that the explanation given for periods without correlation would, if applied uniformly, move periods where there is correlation into periods where there should be no correlation.
I do not want you to do things differently; I always enjoy and consider your posts. Obviously Dr Svalgaard and Dr Brown rarely raise points that lack merit, but given that so little is known or understood about the Earth’s climate and what drives it, it does not necessarily follow that points/issues raised that have obvious merit are in fact correct. Only time, and with it better knowledge and understanding, will tell.
This ‘science’ is problematic since it is based upon the over-extrapolation of poor quality data, often accompanied by dodgy statistical analysis and/or a blinkered imagination that there must always be some linear fit. The fact is that quality data, fit for purpose, is thin on the ground, and that is why we are debating these issues. If the data were good, I am sure that there would be little or no genuine debate.

Reply to  vukcevic
October 9, 2015 11:48 am

I will quote Donald Rumsfeld:
“There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.”
As long as our knowledge is such, I am entirely content with partial correlations, but for anyone else it is matter of a personal choice.

Richard of NZ
Reply to  rgbatduke
October 8, 2015 12:11 pm

Don’t worry Robert. You are obviously of the species Homo Sapiens Sapiens. The species of the ones you worry about is in doubt but the sapiens sapiens bit is definitely missing.

rogerknights
Reply to  Richard of NZ
October 8, 2015 6:29 pm

Richard of NZ October 8, 2015 at 12:11 pm
Don’t worry Robert. You are obviously of the species Homo Sapiens Sapiens. The species of the ones you worry about is in doubt ….

Homo Simp.
(Not mine–I read it somewhere. (Mencken?) I suspect that “Homer Simpson” was inspired by “Homo Simp.”)

u.k.(us)
Reply to  rgbatduke
October 8, 2015 1:55 pm

“I could wax poetic about this, but I don’t have time and besides, no one listens.”
=============
Nah, we listen, sometimes even comprehend.
Then again, there are those pesky squirrels that need to be dealt with.
My weak ass BB gun will teach them a stinging lesson they might never forget. 🙂

DD More
Reply to  rgbatduke
October 8, 2015 2:04 pm

And then they also try to equate temperature to heat, and seem to forget to include the water content.
Notice that water vapor, once generated, also requires more heat than dry air to raise its temperature further: 1.84 kJ/kg.C against about 1 kJ/kg.C for dry air.
The enthalpy of moist air, in kJ/kg, is therefore:
h = (1.007*t – 0.026) + g*(2501 + 1.84*t)
g is the water content (specific humidity) in kg/kg of dry air

http://www.mhi-inc.com/PG2/physical_properties_for_moist_air.html
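Taking the quoted formula at face value, a quick worked example shows how much heat content a temperature reading alone misses (illustrative numbers, not from the comment above):

```python
# Moist-air enthalpy from the formula quoted above:
# h in kJ/kg, t in degC, g = specific humidity in kg per kg of dry air.
def moist_air_enthalpy(t, g):
    return (1.007 * t - 0.026) + g * (2501.0 + 1.84 * t)

# Same 30 degC air, dry versus humid (g = 0.020 is a muggy tropical value):
print(f"dry:   {moist_air_enthalpy(30.0, 0.000):.1f} kJ/kg")  # ~30.2
print(f"humid: {moist_air_enthalpy(30.0, 0.020):.1f} kJ/kg")  # ~81.3, mostly latent heat
```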

rishrac
Reply to  DD More
October 8, 2015 2:21 pm

One wonders how it could ever rain, much less snow. All that retained heat continually heating up the atmosphere.

bit chilly
Reply to  DD More
October 9, 2015 2:37 am

rishrac, remember snow is a thing of the past, rain will soon follow /

Editor
Reply to  rgbatduke
October 8, 2015 2:37 pm

Thank you, well said.

Alf
Reply to  rgbatduke
October 8, 2015 4:24 pm

Rgbatduke; I am a simple layperson, but I have developed a great respect for your opinion. Not only do you have a way with words, your ideas, as far as I can gather, make a lot of sense. You need a higher public profile. You get my vote for president, assuming your common sense is characteristic of all your ideas.

rgbatduke
Reply to  Alf
October 9, 2015 8:19 am

“If nominated, I will not run. If elected, I will not serve.”
I would much rather remove my own appendix with a rusty razor and dental floss than be POTUS.
Y’know the kids in high school who participated in everything, who were everybody’s friend, who ran for student “office” like class president, who have neatly coiffed hair (male or female alike) and a broad expanse of smiling perfect teeth in their yearbook photo? The ones who remembered everybody’s name (even the ones they really didn’t like but smiled at anyway)?
I’m not one of those kids. I haven’t quite forgotten my own kids’ names, but I do have to work to get to all of my nieces and nephews (whom I actually do like) and my cousins are pretty much out of the question. Like honey badger, I just don’t give a s**t.
Besides, I’m busy.
rgb

StefanL
Reply to  rgbatduke
October 9, 2015 1:06 am

rgb,
The 14 Watts/m2 disagreement among the satellite measurements of TSI is quite alarming.
If such a fundamental quantity has this 1% uncertainty, then I suggest that all climate scientists (and skeptics) should be more cautious with their claims.

Reply to  StefanL
October 9, 2015 3:29 pm

It’s a calibration problem. If your 1 kg scale weight or metric tape measure is suspect, you can go to the International Bureau of Weights and Measures in Paris and check it out; if you are uncertain about the accuracy of your wrist watch, you can go to Greenwich (UK) and wait at noon (in the winter, or 1 pm in summer) for the copper ball to drop down the pole.
I have no idea where you could go to calibrate satellite instrumentation.

Brett Keane
Reply to  rgbatduke
October 10, 2015 5:31 am

RGB, thank you, dead right you are that the data is not good enough. Wish we could all admit this and stop havering!

Joe
October 8, 2015 9:37 am

The entire AGW argument is invalidated by the IPCC’s own statement: ‘more than half of the observed increase … from 1951 to 2010’.
There was never any argument that the direct impact of CO2 doubling in the 300-600 ppm range was large enough to be of significance. The whole argument for a large impact relied on feedback. To argue that feedback will amplify the effect of CO2 into large warming also required arguing that there is no natural variation; otherwise the same feedback mechanisms would amplify the natural signal as well (this is why the hockey stick paper was their holy grail).
Stating that half, and not the entire, 1951-2010 warming is anthropogenic forcing (CO2 and other) also means that there is natural variation, so they cannot claim that feedback amplifies the CO2 signal but not the NV signal.

Bernard Lodge
Reply to  Joe
October 8, 2015 12:49 pm

Joe says:
“There was never any argument that the direct impact of CO2 doubling in the 300-600 ppm range was large enough to be of significance.”
How can you say that – there is a massive argument – which is simple to explain and understand ….
CO2 is a radiative gas that emits LWIR in all directions, including upwards into space. Forget the CAGW obsession with downward emissions for a moment – if you increase the atmospheric CO2, you obviously increase upward emissions into space which are lost forever. This means the planet as a whole is COOLED by increasing atmospheric CO2, not warmed!
It really is that simple.

John Finn
Reply to  Bernard Lodge
October 8, 2015 4:34 pm

if you increase the atmospheric CO2, you obviously increase upward emissions into space which are lost forever.

If there were no ghgs in the atmosphere upward emissions would emanate directly from the surface – and still be lost forever. It is the altitude at which energy is emitted which governs the amount of warming from the greenhouse effect.
I’m sceptical of CAGW but the lack of understanding regarding basic greenhouse theory among a lot of sceptics is worrying.

Reply to  Bernard Lodge
October 9, 2015 1:20 am

John Finn,
Some claim that w/out GHGs the air would be heated by direct contact with the surface. As w/out GHGs there would be no means for the heat to escape except through contact with the surface at a cooler place, the air and the surface would heat up considerably.

Roger Clague
Reply to  Bernard Lodge
October 9, 2015 9:08 am

There are always some molecules with 3 or more atoms (IR-active, GHGs) in our or any atmosphere.

rgbatduke
Reply to  Bernard Lodge
October 9, 2015 9:48 am

CO2 is a radiative gas that emits LWIR in all directions, including upwards into space. Forget the CAGW obsession with downward emissions for a moment – if you increase the atmospheric CO2, you obviously increase upward emissions into space which are lost forever. This means the planet as a whole is COOLED by increasing atmospheric CO2, not warmed!
It really is that simple.

Well, no, it really isn’t that simple. If you add CO2 to the atmosphere you raise the scale height where the emissions occur. Since it is colder up there, you radiate less. The atmosphere is already saturated with CO2 and is completely opaque to LWIR in its absorptive band after a distance of a few meters, a distance that gradually increases as one goes up the atmospheric column to lower overall density and pressure and temperature until LWIR photons have an even chance of getting out to space. The emissions at that height are at intensities characteristic of that height.
If you care about being correct, as opposed to parroting something that is quite simply wrong, I can recommend either an AJP review paper by Wilson and Gea-Banacloche (AJP 80, p 306, 2012) or the lovely spectrographs in Grant Petty’s book “A First Course in Atmospheric Radiation” (well worth the investment if you know enough physics to be able to follow it, that is if you are/were at least a physics minor and have a clue about quantum mechanics) or you can read Ira Glickstein’s review article on the subject on WUWT, that summarizes Petty (for the most part) for a lay audience. But in the end, it comes down to what I described above. LWIR from the surface is immediately absorbed by CO2 on its way to infinity. It is immediately transferred by collisions to O2 and N2, nearly always before the CO2 re-emits the same quantum of energy it absorbed, effectively maintaining the temperature of the air to correspond with that of the ground so that it stays close to in balance near the surface (close to but usually lagging or leading a bit as things heat or cool during the day).
In the bulk of the atmospheric column it has almost no net effect — each parcel of air is radiating to its neighbors on all sides and absorbing almost the same amount from its neighbors on all sides, with a small temperature effect that makes radiation up slightly unbalanced relative to radiation down (but only a bit). Again, it just helps promote “local” equilibrium in the atmosphere as it causes slightly warmer parcels to lose energy to slightly cooler parcels and reduce the thermal gradient between them. So we can pretty much ignore the greater part of the atmospheric column as a zero sum game; even the slight upward diffusion of energy is just one part of vertical energy transport, and often not the most important one.
The really important second factor in the atmospheric radiative effect (misnamed “greenhouse effect”) is convection. Parcels of air in the atmosphere are almost always moving, moving up, moving down, moving sideways. Since conduction in air is poor, since radiation by parcels of air to their neighbors is mostly balanced, as a parcel rises or falls it does so “approximately” without exchanging energy with its neighbors. This is called an adiabatic process in physics. A gas that adiabatically expands cools as it does so, just as a gas that is adiabatically compressed warms as it does so. This is all well understood and taught in intro physics courses, at least to majors — not so much to non-physics majors any more. The turbulent mixing as the atmosphere warms at the base, rises, cools (by losing its energy to space, not adiabatically) and falls creates a thermal profile that approximates the adiabatic temperature profile expected on the basis of the pressure/density profile. The temperature drop per meter is called the Adiabatic Lapse Rate and while it is not constant (it changes with humidity, for example), it is always in an understandable range. It is why mountain temperatures are colder than sea level temperatures, and why even in the tropics, pilots in WWII had to wear warm clothes when they flew in open aircraft at 15,000+ feet.
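For reference, the dry adiabatic lapse rate follows directly from the first law applied to a rising parcel; with standard textbook values (my numbers, not from this thread):

$$
\Gamma_d = \frac{g}{c_p} = \frac{9.81\ \mathrm{m\,s^{-2}}}{1004\ \mathrm{J\,kg^{-1}\,K^{-1}}} \approx 9.8\ \mathrm{K\,km^{-1}}
$$

The moist (saturated) rate is lower, typically around 6.5 K/km, because condensing water vapor releases latent heat as the parcel rises.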
The ALR exists from ground level to the top of the troposphere, a layer called the tropopause. Above the tropopause is the stratosphere, where temperatures level because there is no driving bottom heating any more, air flow tends to be lateral, and temperatures start to slowly rise with height but in air so thin that the “temperature” of the air is all but meaningless as far as human perceptions are concerned. Also it starts out at the temperature of the tropopause, which is very, very cold relative to the ground.
CO2 doesn’t radiate away to space (as opposed to other air) until one gets well up close to the top of the troposphere. There, it radiates energy that is net transferred to the CO2 by collisions with the surrounding O2 and N2 away to infinity, and thereby cools the air. The air’s density increases, it falls as part of the turbulent cycle that maintains the ALR and energy has been transported from the ground to space.
The trouble is that the power lost to space in the absorption band, radiated from near the cold tropopause, is much, much smaller than the power radiated from the Earth’s surface that was originally absorbed. It is left as an exercise in common sense to remember that energy is conserved, the Earth is a finite system, and that on average input energy from the sun must be exactly balanced by radiative losses to infinity (detailed balance). When it is NOT in detailed balance, it either warms until it is (as outgoing radiation increases with temperature) or cools until it is (as outgoing radiation decreases with temperature). It doesn’t even really matter where this warming of the system occurs (atmosphere or land/ocean surface), since the ALR maintains the same gradient of temperatures in the atmosphere to the surface: warming (on average) anywhere in the atmosphere ultimately means surface warming and vice versa. Fortunately, the surface loses energy in bands not blocked by CO2, so as it warms it quickly restores detailed balance.
So, the presence of CO2 in the atmosphere takes a bite out of the outgoing power in one band of the spectrum, all things being equal. So, test time (to see if you’ve been paying attention or have understood a word I’ve said).
Since energy in from Mr. Sun is independent of the increase of CO2, we expect the Earth to be:
a) Warmer when there is CO2 in the atmosphere relative to an atmosphere consisting only of non-radiating (in LWIR) O2 and N2.
b) Cooler when there is CO2 in the atmosphere relative to an atmosphere consisting only of non-radiating (in LWIR) O2 and N2.
c) Not change temperature when there is CO2 in the atmosphere relative to an atmosphere consisting only of non-radiating (in LWIR) O2 and N2.
I already described the expected marginal response to increasing CO2 concentrations above. Note well that they are not linear — the atmosphere is already very, very opaque to CO2 so adding a bit more “trace gas” won’t make it significantly MORE opaque anywhere near the ground. Nor does it produce much of a change in its broadening of spectral lines — pressure broadening isn’t strongly dependent on partial pressure, only on collision rates without much caring about the species. All it does is raise the level at which the mean free path of in band LWIR photons reaches “infinity” in the direction of space, and hence decrease the temperature and radiated power a bit.
Now there is considerable uncertainty in the strength of this effect. We expect it to be logarithmic in CO2 concentration, but computing the multiplier of the log a priori precisely is impossible, all the more so when one considers that even expressing it as a single term means one is averaging over a staggering array of “stuff” — variations in the ALR, the effects of mountains, variations in land surface, the ocean, ice — all of this shifts the balance around to the point where at the poles one can see the ALR inverted with the air warmer than the surface as one goes up, to where latent heat from the oceans and radiation from water vapor cooling to form high albedo clouds is added to the “simple” mental picture of radiated heat entering a turbulent conveyor belt to be given off at height. And there are more greenhouse gases, and CO2 isn’t even the most important one (water vapor is, by a substantial amount). So its marginal contribution, and feedbacks, and mixes of natural variation, are all extremely difficult to compute or even estimate, and there is little reason to give too much credence to any attempts to do so, as when I say “extremely difficult” I mean that it is a Grand Challenge Problem, one that we simply cannot solve and that nobody can see a way clear TO solve with our existing theories and resources. We don’t even have the data needed to drive a solution if we could solve it, or to check a solution accurately if a magician tapped a computer with a magic wand and started any of the existing computations with the absolutely precise values for the starting grid.
Most computations of the CO2 only, no lag, no feedback warming are expressed as “total/equilibrium climate sensitivity”, the amount of warming expected if CO2 is doubled (logarithm, remember). The number obtained is around 1 C, which is not catastrophic according to anybody’s sane estimation, even the IPCC’s. They only extrapolate to “catastrophe” by adding positive feedback warming, mostly from water vapor.
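In equation form, using the standard logarithmic approximation (the 5.35 W/m2 coefficient is the commonly cited Myhre et al., 1998 value, not a number from this comment):

$$
\Delta F = 5.35 \ln\frac{C}{C_0}\ \mathrm{W\,m^{-2}}
\;\Rightarrow\;
\Delta F_{2\times} \approx 3.7\ \mathrm{W\,m^{-2}},
\qquad
\Delta T_{\mathrm{no\ feedback}} \approx 0.3\ \mathrm{K\,W^{-1}m^{2}} \times 3.7\ \mathrm{W\,m^{-2}} \approx 1.1\ \mathrm{K}
$$

which is the “around 1 C” figure above; everything beyond that comes from the assumed feedbacks.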
The trouble is, nobody really understands the water cycle. We are still learning about it. Furthermore, the granularity of the GCMs is so coarse that they cannot resolve things like thunderstorms, which transport large amounts of energy vertically very rapidly (think about the temperature drop that accompanies a thunderstorm!). There are many other places where parameters are simply guessed at, or fit to reproduce the warming in the 1980s and 1990s. The problem with a guess is obvious (and these parameters are often set differently in different GCMs, a clear indication of our uncertainty as to their true values). The problem with fitting the 1980s and 1990s is that this is the only 20-year stretch of rapid warming that occurred in the last 75 years!
Now, I’m a wee bit of a statistician and predictive modeler — I’ve founded two companies that did commercial predictive modeling using tools that I wrote (learning in the process that building excellent models, difficult as it is, is still easier than building a successful business, but that’s another story:-). I cannot begin to point out the problems with doing this. It is literally a childish mistake, and the IPCC is paying the price for it as their models diverge from reality. Of course they are. They were built by complete idiots who already “knew” the answer that the models were going to give them and didn’t hesitate to fit the one stretch of the data that described that answer.
At this point, however, the whole thing has passed beyond the realm of scientific research. The climate science community has bet the trust and reputation of scientists in all disciplines on the shoddy presentations of the assessment reports, specifically the summary for policy makers, where assertions of “confidence” have been made that are utterly without foundation in the theory of statistics. They have not infrequently been made either in contradiction to the actual text in the WG chapters or over the objections of contributing authors in those groups, minimizing uncertainty and emphasizing doom. As a consequence, the entire world economy has been bent towards the elimination of the abundant natural energy resource that has literally built civilization, at a horrific cost. But now entire political careers are at risk. The objectivity of an entire scientific discipline is at risk of being called into question and exposed to a scrutiny that it surely will not survive if their expensive predictions turn out to be not only false, but in some sense passively falsified, if only as an accessory after the fact.
We already know enough to reject the results of most of the GCMs as being anything like a good representation of the Earth’s climate trajectory. They fail on almost every objective measure. They are not suitable for the purpose they have been put to, and the ones that are working best are not even given an increased weight in what has been presented to policy makers, as the purveyors of the computations desperately hope that a warming burst will catch everything up to their predictions. A measure of the gravity of the problem can be seen in the recent “pause busting” adjustments to ocean data that have (unsurprisingly) tried to maintain the illusion of a rapidly warming Earth for just a bit longer to try to give the models more time to be right, and to give the massively invested political and economic parties involved one last chance to make their draconian measures into some sort of international law and guarantee wealth and prosperity for the people on that bandwagon for another decade or more.
But nature doesn’t care. I don’t know if the models are right or wrong — nobody does. What we do know is that they aren’t working so far, and that we aren’t using the ones that are working best so far and throwing away the ones that aren’t working at all. Maybe in a decade, warming will be back “on track” to a 4x CO2-only warming, high-feedback catastrophe. Maybe in a decade, the Earth will be cooling in the teeth of increasing CO2 because solar hypotheses or other neglected physics turn out to be right, or just because the no-feedback climate sensitivity turns out to be less than 1°C and net feedbacks turn out to be negative, so that a round of natural cooling from any incomputable chaotic cause is enough to overwhelm the weak CO2-driven warming. None of these is positively ruled out by the pitiful data we have in hand, and most of these possibilities are within the parametric reach of GCMs if they were retuned to fit all 75 years of post-1950 data accurately, and not just the 20 years of rapid warming.
And then, three different reputable groups have claimed that they have solved the fusion problem and will build working prototypes within the next 1-5 years, in some cases large scale working prototypes. I know of one other group that is funded on a similar claim, and I’m willing to bet that there are a half dozen more out there as there are literally garage-scale approaches that can now be tried as we have apparently mastered the physics of plasmas, which turned out to be nearly a Grand Challenge Problem all by itself.
If they succeed, it will be the greatest achievement of the human race, so far. It will be the true tipping point, the separator of the age, the beginning of the next Yuga. Fusion energy is literally inexhaustible within any conceivable timeframe that the human species will survive in. It will transition civilization itself from its current precarious position of relying on burning stuff for energy, stuff that we will need unburned later, and by “later” I mean the next thousands of years, not the next decade. It is silly to burn oil and coal, as they are valuable feedstock for many industrial processes — it is just that we have no choice (yet), as no alternatives but perhaps nuclear fission are up to the task of providing an uninterruptible energy base for civilization.
Won’t the catastrophic global warming enthusiasts feel silly if they do? Within a decade, assuming that the Bern model is correct, CO2 will start to level off aggressively and will likely asymptotically approach a value around 450 to 500 ppm at most before slowly starting to recede. Even pessimistic models can’t force more than half a degree C (roughly 1°F) more warming out of that, which is the kind of “climate change” one can experience by driving fifty miles in any direction. We will have spent hundreds of billions if not trillions of dollars, enriched a host of “entrepreneurs” selling tax-subsidized alternative energy systems that will at that point have a shelf life of zero as conventional energy prices plummet, and condemned tens to hundreds of millions of the world’s poorest people to death and misery by artificially inflating the price of energy and delaying its delivery to the third world, preserving their quaint 17th to 19th century lifestyles for a few more decades.
rgb

Hugs
Reply to  Bernard Lodge
October 9, 2015 12:26 pm

I applaud Professor Brown’s comment above.
If only there were more people like rgbatduke, the climate debate would make some sense.
And rgbatduke, don’t think nobody learns anything. You actually explained things that I’d been wondering about even today, related to the physics of CO2 in the stratosphere. That is a pretty extraordinary thing. You really did something that the mainstream media hasn’t and would never do: explain in technical terms how GHGs work.
I’d like to return to your comment later on, but I’m afraid it is buried here in the n’th thread in a random blog entry.

Reply to  Bernard Lodge
October 9, 2015 5:26 pm

rgbatduke writes

the ALR maintains the same gradient of temperatures in the atmosphere to the surface

No it doesn’t, and you even mentioned it yourself. It varies with the GHG water vapour. So if increased CO2 in the atmosphere resulted in a very slightly decreased lapse rate (because we already know that the moist lapse rate is lower than the dry lapse rate), then it’s entirely possible that the ERL may be at a greater altitude but at the same temperature, and therefore there is no need for the surface to warm.
My intuitive belief is that the ERL will increase a bit (as you say) and the lapse rate will decrease a bit so that the new ERL is a little warmer than it started out and therefore radiates more energy. As well as the surface warming a little bit so the non-GHG captured radiation increases a little bit. And clouds will reflect a little bit more with the slightly increased water vapour in the atmosphere. And the water cycle will increase a little to help support the first point I made. And storms (as per Willis’ pet love) will do the same.
And all these feedbacks will be negative on the CO2 forcing. Or, put simply, the IPCC is wrong with its “highly likely” prediction that feedbacks are positive and will result in a forcing greater than the CO2 forcing alone.
I’ll even go a bit further with this and say the IPCC’s prediction is based on “recent measurements” where we have overshot the equilibrium and that has been seen by them as an undershot with more warming in the pipeline.
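The ERL/lapse-rate trade-off being debated here can be illustrated with a toy calculation; the emission-level temperature, heights, and lapse rates below are assumed illustrative values, not numbers from either comment:

```python
# Pin the effective radiating level (ERL) temperature by energy balance,
# then compute the surface temperature from lapse rate x ERL height.
T_ERL = 255.0   # K, set by radiative balance (assumed fixed here)

def surface_temp(erl_height_km, lapse_rate_k_per_km):
    return T_ERL + lapse_rate_k_per_km * erl_height_km

print(surface_temp(5.0, 6.5))   # 287.5 K: baseline
print(surface_temp(5.5, 6.5))   # 290.75 K: higher ERL, same lapse rate
print(surface_temp(5.5, 5.9))   # 287.45 K: higher ERL, weaker lapse rate
# A higher ERL with an unchanged lapse rate forces surface warming; a
# sufficiently weakened (moister) lapse rate can offset it, which is
# exactly the trade-off debated above.
```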

October 8, 2015 9:38 am

I had no idea that there was anything like a 14 Watts/m2 unknown variance in insolation measurements. That’s about 3°C at these intensities.

Trebla
October 8, 2015 9:41 am

“However, as we discussed above, this model (Model 4[d]) suggests that changes in atmospheric carbon dioxide are responsible for a warming of at most ~0.12°C [out of a total of 0.85°C] over the 1880-2012 period”. 0.12 degrees out of 0.85 degrees, or 14%. This compares nicely with the conclusion reached in Mike Jonas’ essay (The Mathematics of CO2) where he found that when he used the IPCC models to hindcast temperatures in the geological past, only 12% of global warming could be attributed to CO2.

Editor
Reply to  Trebla
October 8, 2015 2:45 pm

🙂

Latitude
October 8, 2015 9:47 am

There are many problems with the surface air temperature measurements over long periods of time. Rural stations may become urban, equipment or enclosures may be moved or changed, etc.
====
and every time they enter the most recent set of numbers…..the past is automatically adjusted
You can never do real science when your data change before you finish…
eye off ball alert……seems that the science has changed too
Now CO2 is the only driver…
….originally it was the small increase in temp from CO2 would create runaway global humidity
the humidity is what would cause global warming..

Dawtgtomis
Reply to  Latitude
October 8, 2015 11:04 am

IIRC, I think Gore was the first guy to claim permadrought in the breadbasket and increasing deserts as part of his movie debut scare package.

October 8, 2015 9:48 am

Combining this with an average TSI of 1361 Watts/m2 means that 1/1361 is 0.0735% and 0.0735% of 284 is 0.209°C per Watt/m2. Pretty cool, but this does not prove that TSI dominates climate.
Pretty cool? Seems to me you’ve applied a linear calculation to a relationship that is known not to be linear (Stefan Boltzmann Law) and come up with a number that seems significant only by pure coincidence. If you can supply a logical explanation for this math in context of known physics (SB Law) I’d be interested to hear it.

Reply to  davidmhoffer
October 8, 2015 9:56 am

applied a linear calculation to a relationship that is known not to be linear
When changes are small everything is linear. But the percentage change should be divided by four:
dT% = d(TSI)% / 4

Reply to  lsvalgaard
October 8, 2015 10:20 am

Going from 0 to 1361 isn’t “small”; it is the whole range, and my thought was that calculating over that range to come up with a number that applies to a small change at one end of the range doesn’t make sense to me.
AND it needs to be divided by 4, thanks for adding that.

Reply to  davidmhoffer
October 8, 2015 10:31 am

Going from 0 to 1361 isn’t “small”
But the variation of TSI is not from zero to 1361, but from 1361 to 1362. The derivation goes like this:
TSI = S = aT^4, so dS = 4aT^3 dT = 4(S/T) dT, i.e. dS/S = 4 dT/T, or dT/T = (dS/S)/4, i.e. dT% = dS%/4

Reply to  lsvalgaard
October 8, 2015 1:00 pm

But the variation of TSI is not from zero to 1361, but from 1361 to 1362.
Well of course. But if (for sake of argument) TSI was 650, and the variation was from 650 to 651, the change would still be small, still be roughly linear for practical purposes, but a completely different dT for the same value of dS (1). Would it not?

Reply to  davidmhoffer
October 8, 2015 1:56 pm

Since S is only half, dS/S% would be twice the 0.0735%, i.e. 0.15%, and dT would be 0.15/4 = 0.0375% of 288K ≈ 0.11K
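A quick numerical check of this linearization (a sketch; the 288 K surface temperature is assumed purely for scale):

```python
# Compare the linearized relation dT = T * (dS/S) / 4 with the exact
# T ~ S**0.25 scaling for a 1 W/m^2 change, at the two values of S
# debated above.
def dT_linear(S, dS, T):
    return T * (dS / S) / 4.0

def dT_exact(S, dS, T):
    return T * (((S + dS) / S) ** 0.25 - 1.0)

T = 288.0  # K, assumed for scale
for S in (1361.0, 650.0):
    print(f"S = {S}: linear {dT_linear(S, 1.0, T):.4f} K, "
          f"exact {dT_exact(S, 1.0, T):.4f} K")
# 1361 -> ~0.053 K per W/m^2; 650 -> ~0.111 K, matching the 0.11 K above.
```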

stevek
Reply to  lsvalgaard
October 8, 2015 2:35 pm

Well not everything. But differentiable functions yes.

Hugs
Reply to  lsvalgaard
October 9, 2015 12:53 pm

Well not everything. But differentiable functions yes.

Nice to see there are nerds around to fix obvious logical errors. 🙂
I’d like so much to go on flamebaiting about how I have a little bit of trouble believing in א0, let alone א1. Like, you know, every layman like me is so intelligent that he can toss out 50% of known mathematics just by asserting how stupid it is. And besides, Brouwer was right, Hilbert wrong. /end stupid assertions
(well in fact I think Brouwer was weird – as weird as Hilbert or weirder – I’m not sure who in the philosophy of mathematics I could agree with, but surely there must be someone).

DD More
Reply to  davidmhoffer
October 8, 2015 2:29 pm

But at what wavelengths, and what effect do they have?
Solar irradiance variations show a strong wavelength dependence. Whereas the total (integrated over all wavelengths) solar irradiance (TSI) changes by about 0.1% over the course of the solar cycle, the irradiance in the UV part of the solar spectrum varies by up to 10% in the 150–300 nm range and by more than 50% at shorter wavelengths, including the Ly-a emission line near 121.6 nm [e.g., Floyd et al., 2003a]. On the whole, more than 60% of the TSI variations over the solar cycle are produced at wavelengths below 400 nm [Krivova et al., 2006; cf. Harder et al., 2009].
These variations may have a significant impact on the Earth’s climate system. Ly-a, the strongest line in the solar UV spectrum, which is formed in the transition region and the chromosphere, takes an active part in governing the chemistry of the Earth’s upper stratosphere and mesosphere, for example, by ionizing nitric oxide, which affects the electron density distribution, or by stimulating dissociation of water vapor and producing chemically active HO(x) that destroy ozone [e.g., Frederick, 1977; Brasseur and Simon, 1981; Huang and Brasseur, 1993; Fleming et al., 1995; Egorova et al., 2004; Langematz et al., 2005a]. Also, radiation in the Herzberg oxygen continuum (200–240 nm) and the Schumann-Runge bands of oxygen (180–200 nm) is important for photochemical ozone production [e.g., Haigh, 1994, 2007 (accessed March 2009); Egorova et al., 2004; Langematz et al., 2005b; Rozanov et al., 2006; Austin et al., 2008]. UV radiation in the wavelength range 200–350 nm, i.e., around the Herzberg oxygen continuum and the Hartley-Huggins ozone bands, is the main heat source in the stratosphere and mesosphere [Haigh, 1999, 2007; Rozanov et al., 2004, 2006].

Considering UV will heat water to “meters” of depth and gets absorbed by ozone (having to later be radiated out of the system), the 60% change is more of a driver than just TSI.

Reply to  DD More
October 8, 2015 3:29 pm

You are confused between absolute and relative changes. The loose change in my pocket varies enormously from hour to hour, while my savings account does not. The variation of my loose change is not a good indicator of the variation of my total assets.

Reply to  DD More
October 8, 2015 2:42 pm

more than 60% of the TSI variations over the solar cycle are produced at wavelengths below 400 nm
This is a common misconception. The high variability of the UV is a relative variability, but since the part of TSI from UV is small, the variation in terms of energy is actually very small:
If you consider that the TSI varies by 0.1%, you can infer that the spectral band short of 400 nm, about 10% of TSI, cannot vary by more than 1% – or it would account for the entirety of the TSI variation. Likewise the band short of 300 nm, about 1% of TSI, cannot vary more than 10%, etc.
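This bounding argument is simple enough to verify in a couple of lines; the sketch below just encodes the 0.1% cycle variation quoted above:

```python
# If TSI as a whole varies by 0.1%, a band carrying fraction f of TSI
# cannot vary by more than 0.1%/f in relative terms, or that band alone
# would exceed the entire observed TSI variation.
TSI_VARIATION = 0.001   # 0.1% over the solar cycle

def max_band_variation(band_fraction):
    return TSI_VARIATION / band_fraction

print(f"{max_band_variation(0.10):.0%}")  # below 400 nm (~10% of TSI): <= 1%
print(f"{max_band_variation(0.01):.0%}")  # below 300 nm (~1% of TSI): <= 10%
```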

Reply to  DD More
October 8, 2015 11:30 pm

Playing Devil’s advocate…
I know that the Maximum Usable Frequency (MUF) for ionospheric propagation via the F2 layer can vary by a factor of two between a sunspot minimum and a sunspot maximum. Whether or not enough of the EUV that ionizes the F, E and D layers makes it down to where it can affect climate is a bit of an open question. An equally open question is the mechanism by which EUV affects the climate, assuming that it does indeed affect the climate.
BTW, I did download your paper on reconstructing the SSNs, made one pass through it, and am planning to make at least another pass before sending off comments.

richard verney
Reply to  DD More
October 9, 2015 1:16 am

In my view this is the money observation: “Considering UV will heat water to ‘meters’ of depth and gets absorbed by ozone (having to later be radiated out of the system), the 60% change is more of a driver than just TSI.” I have frequently made this point.
It is not simply a matter of the extent to which TSI as a whole varies, but also of how TSI varies across its spectrum, since the absorption of energy by the oceans depends upon the wavelength of the radiation. Subtle variations in the distribution of wavelengths could be significant.
In a 3-dimensional world, a watt is not necessarily just a watt. Maybe not all watts are born equal, since precisely where a watt is deposited in a 3-dimensional world could be very important.
The bulk of the Earth is oceans, and they have orders of magnitude more heat capacity than the atmosphere, so considering the behaviour of the oceans is paramount. The energy that the oceans radiate is lost from the surface, but the energy absorbed by the oceans is not absorbed at the surface. The oceans absorb energy from the surface down to 100 metres (or even more), albeit that the bulk of the energy is absorbed within the 50 cm to 5 metre range. The K & T energy budget cartoon assumes that all the incoming energy (including that back-radiated) is absorbed at the surface, but that is not so as far as the oceans are concerned. (And DWLWIR cannot effectively be absorbed: given its omnidirectional nature it cannot penetrate more than about 4 microns, and no one seems to put forward a mechanism by which the energy absorbed in those 4 microns can be sequestered and dissipated to depth at a rate faster than the energy in that small layer would simply drive evaporation.)
But how long does it take energy that is absorbed at 5 metres to resurface? We do not know. What if that energy is not being absorbed at 5 metres but due to changes in wavelength is being absorbed at 6 metres? How long would it then take for energy absorbed at 6 metres to resurface? We do not know. We do not know whether such subtleties could have an impact upon multidecadal temperature trends.
Personally, I consider it more complex than just TSI.
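To make the wavelength-dependence point concrete, here is a minimal Beer-Lambert sketch; the absorption coefficients are rough, assumed order-of-magnitude values for seawater, not numbers from the comment:

```python
import math

# Exponential (Beer-Lambert) attenuation: I(z) = I0 * exp(-k * z), where
# the absorption coefficient k depends strongly on wavelength.
ABSORPTION_PER_M = {
    "UV (~350 nm)":   0.05,    # penetrates tens of metres
    "blue (~450 nm)": 0.02,    # clearest window, deepest penetration
    "red (~700 nm)":  0.5,     # gone within a few metres
    "LWIR (~10 um)":  2.5e5,   # absorbed within ~4 microns
}

for band, k in ABSORPTION_PER_M.items():
    e_folding_m = 1.0 / k                # depth where intensity drops to 1/e
    frac_at_5m = math.exp(-k * 5.0)      # fraction surviving to 5 m depth
    print(f"{band}: 1/e depth {e_folding_m:.1e} m, {frac_at_5m:.1%} reaches 5 m")
```

Shifting energy between these bands moves the depth at which a given watt is deposited, which is the point richard verney is making about watts not all being born equal.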

Gary H
October 8, 2015 9:49 am

“This is Figure 27 from SCC15. ”
Where oh where is Figure 27 hiding?
Thanks

Gary H
Reply to  Gary H
October 8, 2015 9:58 am

Never mind – sorry – it must be me. . . finally it loaded . . refreshed . . and it didn’t load. Rebooted – all is fine. Weird.

Peter Sable
October 8, 2015 9:50 am

There’s a very nice long back and forth between one of the authors and Willis starting at this comment:
http://wattsupwiththat.com/2015/09/22/23-new-papers/#comment-2040808
(also note raw data link is in that comment, unlike above)
My high level summary of this discussion is that Willis claims about the SCC paper:
(1) estimated TSI based on sunspot numbers is wrong if you didn’t take into account sunspot counting methodology changes done in 2014 (SCC paper didn’t)
(2) it’s too easy to create a spurious correlation by tweaking both the TSI AND the temperature record. You often end up getting what you are looking for, even if you didn’t think you are trying to. The authors counterclaim they were just messing with the temperature records first (“a priori”)
(3) autocorrelation, Bonferroni, or Monte Carlo were not considered. These three factors and (2) dramatically change the claim of significance.
There’s a pretty good back and forth, if you want the details go read it. Not sure it will get repeated here or not.
My take is that if you can’t make a good case with a “standard” temperature history estimate and some other standard TSI history estimate, then you are well within the measurement error limits of the estimates and you are just correlating noise. Both the estimated temperature history and estimated TSI history have a large quantity of noise, there’s simply no real signal to correlate to.
Also, I don’t buy the “a priori” order-of-development argument. Human brains are instinctive pattern matchers; you could have just seen something out of the corner of your eye and cherry-picked without realizing it. That’s why we have double-blind studies.
The same argument applies to the warmists, of course. Except that there’s not nearly as much noise in the CO2 record… (which is irrelevant; there’s plenty in the temperature record, and anyway correlation is not causation… but I digress)
Peter
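The autocorrelation point in (3) is easy to demonstrate by Monte Carlo. The following self-contained sketch (with an assumed persistence of phi = 0.9, not a value from the thread) shows that two independent but persistent series routinely produce correlations that a naive significance test would flag:

```python
import random
import statistics

# Two INDEPENDENT AR(1) series correlate far more strongly, by chance,
# than independent white noise of the same length.
def ar1_series(n, phi, rng):
    """Generate n points of an AR(1) process with persistence phi."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def pearson_r(a, b):
    """Plain Pearson correlation coefficient."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a)
           * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

rng = random.Random(42)
n, trials = 120, 2000               # ~120 "annual" points per series
for label, phi in (("white noise", 0.0), ("AR(1), phi = 0.9", 0.9)):
    rs = [abs(pearson_r(ar1_series(n, phi, rng), ar1_series(n, phi, rng)))
          for _ in range(trials)]
    print(label, "-> median |r| =", round(statistics.median(rs), 3))
# Persistent series routinely reach |r| values that a naive test
# (assuming 120 independent points) would declare highly significant.
```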

Editor
Reply to  Peter Sable
October 8, 2015 6:28 pm

Peter, Willis and Ronan are getting a bit in the weeds. The data, relative to the error in measurement, is not adequate to explain a 0.85 degree shift in any context or over any period of time. The “standard” temperature reconstruction (GHCN) is no better supported on a worldwide basis than Ronan’s. I liked the paper because it showed a very plausible alternative to AR5 that could not be refuted, except by arguments that also refute AR5.

Peter Sable
Reply to  Andy May
October 8, 2015 11:10 pm

I liked the paper because it showed a very plausible alternative to AR5 that could not be refuted, except by arguments that also refute AR5.

Well they should have spent more time refuting their own hypothesis and then using the same techniques to refute AR5.
Now THAT would be an interesting paper. Propose 4-5 reasonable-sounding hypotheses, refute them with sound statistics and signal processing techniques, and then go back and apply those same techniques to AR5 as the finishing touch. Show that there’s no useful signal in any of the data*
Peter
* Other than ENSO. I believe it’s been shown that there’s a definitive, statistically sound ENSO signal in ESST

Reply to  Peter Sable
October 9, 2015 5:23 am

Hi Peter,
I was busy with work on Tues and Wed and didn’t get a chance to reply to Willis’ last comment until yesterday morning, but when I went to post my response (which addresses most of the issues you mention), I found comments were closed! This was presumably because that thread is now several weeks old.
In case you (or any of the others) are interested, I will repost my response to Willis below. But, it is quite a long response (with a lot of links – meaning it’ll probably go into moderation), and is quite specific to the discussion Willis and I were having on that thread. So, maybe one of the mods could move it to the original “23! New! Papers!” thread?
P.S. Yes, Feynman’s approach to science is definitely an influence on mine!

Reply to  Ronan Connolly
October 9, 2015 5:24 am

Mods: As mentioned above, this was a reply to Willis’s Tuesday comment, that I had meant to post on this thread (http://wattsupwiththat.com/2015/09/22/23-new-papers/#comment-2040808) yesterday, but the comments had been closed. Is it possible to move my reply there?
[no, it isn’t, there is no “move” feature in wordpress -mod]
——
Hi Willis,

Ronan, you claimed you didn’t use them because inter alia they “implied slightly different trends”. Therefore, you are looking at one of your proposed new reconstruction’s OUTCOMES (the trend of the reconstruction) to determine which stations to include.
And that is indeed data snooping. You are looking, not at the metadata for the station (e.g. rural, length of record, etc.) to choose your stations. You are looking at the trend of the station data in order to decide whether to use it … sorry, but you can’t do that.

Ah, no, I wasn’t claiming that, but I can see how you might have thought that! Sorry for the misunderstanding! It probably would help to read the Urbanization bias III paper where we carried out that analysis. When we found that only 8 stations met our ex ante conditions, we concluded that this was simply too small a sample size to construct a “global” (or even “hemispheric”) temperature reconstruction. So, that was the end of our attempt – the sample size meeting our ex ante conditions was too small.
…However, we then speculated that maybe some might argue, “well, why not use them anyway, if that’s all you’ve got?”. So, we decided to carry out a brief assessment of those stations, to see if there was any useful information we could extract from them. The 8 station records are shown below:
http://s2.postimg.org/5lnpfu3m1/8_rural.jpg
We concluded that, not only was the sample size too small for constructing a “global temperature” reconstruction, but the lack of consistency of the trends between all 8 stations and the possibility of non-climatic biases (non urban-related!) meant that, without obtaining further information on the individual stations (e.g., detailed station histories), it was difficult to determine which trends in which stations were genuinely climatic.
But, in any case, our proposed reconstruction had already failed and been abandoned before we “looked at the trends” due to sample size issues.
So, from our 2014 analysis we concluded that without further information the data is probably too problematic for constructing a reliable long-term “global temperature estimate” by simply “throwing everything into the pot”. Of course, you could do it anyway – that’s essentially how the ‘standard temperature datasets’ are constructed, after all. But, we concluded that these simplistic analyses are significantly affected by urbanization bias. When your data is problematic, going with the “simple solution” isn’t necessarily a good plan:

There is always an easy solution to every human problem – neat, plausible and wrong – Henry Louis Mencken, 1917

By the way, if anyone is interested in reading our 2014 Urbanization bias papers, below are the links.
“Urbanization bias I. Is it a negligible problem for global temperature estimates?”
Paper: http://oprj.net/articles/climate-science/28
Data: http://dx.doi.org/10.6084/m9.figshare.1005090
“Urbanization bias II. An assessment of the NASA GISS urbanization adjustment method”
Paper: http://oprj.net/articles/climate-science/31
Data: http://dx.doi.org/10.6084/m9.figshare.977967
“Urbanization bias III. Estimating the extent of bias in the Historical Climatology Network datasets”
Paper: http://oprj.net/articles/climate-science/34
Data: http://dx.doi.org/10.6084/m9.figshare.1004125
Our analysis of the Surfacestations results might also be of relevance.
“Has Poor Station Quality Biased U.S. Temperature Trend Estimates?”
Paper: http://oprj.net/articles/climate-science/11
Data: http://dx.doi.org/10.6084/m9.figshare.1004025
And for anybody who is interested in our new Earth-Science Reviews paper we’ve been discussing here, there’s a pre-print here: http://globalwarmingsolved.com/data_files/SCC2015_preprint.pdf and a copy of the SI here: http://globalwarmingsolved.com/data_files/SCC2015-SI.zip.
Which brings me back to this new paper.
We had concluded from our 2014 analysis that the data is probably too problematic for a simple global temperature reconstruction. However, because the average climatic trends within a given region tend to be fairly similar (e.g., see rural Ireland stations below), for this new paper, we suggested that maybe, by focusing on individual regions and carefully assessing the data for some of the regions with the highest densities of rural records, we might be able to achieve the less ambitious goal of constructing some new regional temperature reconstructions.
http://s13.postimg.org/3qkdu9ron/Fig_16_Ireland_trends.jpg
We already had a rural U.S. regional estimate (although we updated it for this paper), since this region has a high density of fully rural stations with fairly long and complete records (thanks to the COOP project and the construction of the USHCN dataset). As discussed above, there are no other regions which come close to matching this…
However, looking at the distribution of fully rural stations, two regions with a relatively high density are China and the Arctic – these are also regions that Willie has done some research on. So, we decided to try and see what we could do for those two regions.
As we discuss in the paper, we had to take a number of assumptions with both of these regions, and we stressed that they need to be treated cautiously. We made several recommendations on how they could be improved, e.g., compilation of station histories, updating those station records which stop in 1990 in the GHCN dataset, but which are still in operation.
We also were able to obtain station histories and parallel measurements for one of the 8 fully rural stations identified in our 2014 analysis, i.e., Valentia Observatory, and we were able to use this to correct for non-climatic biases in the station record, giving us a handle on rural Irish trends.
Initially, when we started working on these new regional estimates, we were just considering writing a couple of short letters with Willie. But, when we had compiled those four regional estimates, we noticed that they were actually quite well distributed throughout the Northern Hemisphere:
http://s23.postimg.org/yypdlhpzv/Fig_19_map_NH_composite.jpg
We also noted that with these 4 regions we were already using nearly half of the fully rural GHCN stations in the Northern Hemisphere (and nearly 2/3 of those with data for the early 20th century).
For these reasons, we suggested that if we used an area-weighted mean of these four regional estimates, this could provide us with a new mostly-rural Northern Hemisphere temperature reconstruction.
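Mechanically, an area-weighted mean of regional series is straightforward; here is a hedged sketch in which the region names follow the paper but the weights and anomaly values are made-up placeholders, not SCC15’s numbers:

```python
# Area-weighted combination of regional anomaly series.
def area_weighted_mean(anomalies, weights):
    """Combine regional anomalies using fractional area weights."""
    total = sum(weights.values())
    return sum(anomalies[region] * w for region, w in weights.items()) / total

# Placeholder weights and single-year anomalies (NOT SCC15's values):
weights = {"US": 0.25, "China": 0.30, "Arctic": 0.25, "Ireland": 0.20}
anomalies = {"US": 0.40, "China": 0.10, "Arctic": 0.60, "Ireland": 0.20}
print(round(area_weighted_mean(anomalies, weights), 2))   # 0.32
```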
It’s not that we set out saying, “the way to construct a Northern Hemisphere reconstruction is to pick ‘our favourite regions’ and stitch them together” (though in hindsight, I can see why you got that impression!). Rather it was that we had tried to create as many regional estimates as we felt we could from the data (which was four), and then realised, “hey, we can use these to construct a mostly-rural Northern Hemisphere estimate!”.
We certainly agree that, with more work, and ideally the collection of station histories and/or other station information, there are probably other regions in the Northern Hemisphere that could be constructed. In our paper, we suggest that tracking down station histories for some of the European fully rural stations could be a good place to start. We also suggest that obtaining similar regional estimates for the tropics should be a high priority.
… However, when you look at the data that is actually available, it is surprisingly limited. For fully rural stations, most of the records are either short (only covering the 1951-1990 period) or else have several huge unexplained gaps in them. Moreover, they tend to be fairly isolated geographically, meaning that there aren’t nearby fully rural stations to compare trends with, making it even harder to identify non-climatic trends.
For instance, here are all the fully rural stations for India. There are 8 – the white diamonds on the map, i.e., the ones labelled “Still rural” (this map was taken from our Urbanization bias II paper and was considering a separate issue)
http://s3.postimg.org/v9evrxm2b/India_stations.jpg
http://s24.postimg.org/t2j32ydet/India_rural_trends.jpg
We are of the opinion that – without compiling further information on these stations and their records (station histories, etc.) – there is really not much that we can say about rural Indian temperature trends. What about you?
By the way, I should stress that when I say the records don’t seem suitable, I’m not commenting on whether the trends are “right” or “wrong” – that would be data snooping, as you correctly point out! It’s just that we don’t feel we are in a position to judge which parts of the various Indian fully rural records are “climatic” and which are “non-climatic”.

So I fear that the distinction between the uses of “statistically significant” and “significant” is nowhere near as clear as you seem to think. In your shoes, I’d restrict the use of the claim that something is “significant” to mean “statistically significant”, and use some other word when you don’t mean that. Otherwise, we have no clue what you mean.

I appreciate there is the potential for some confusion over the use of the term “significant”.
However, when the term “significant” is being used to describe an amount, this is not quite the same as describing a result as “significant”.
For instance, if I say, “there is a significant amount of money on the table”, this is different from saying, “there is a significant chance that there is money on the table”.
When describing the latter, the main priority would usually be establishing what the chance that there is money on the table is, e.g., by applying appropriate statistical tests. However, for the former, it is already known that there is an amount of money. The main priority would usually be in quantifying exactly how much money there is, e.g., by counting it.
In the former case, the term “significant” is an indication that it is not a trivial amount, but doesn’t specify exactly how much.
In our 2014 papers we established that urbanization bias has introduced a “significant” warming bias into current global temperature estimates, i.e., it is not a small or negligible problem (as several groups had previously argued). However, as we discussed in those papers, quantifying the exact magnitude of this bias is surprisingly challenging.
So, in that context, the use of the word “significant” indicates that the magnitude of the effect is not exactly known, but is not trivial. It is not being used to indicate whether or not there is an effect – there is!
Specifically, in Paper 1 we showed that urbanization bias is not a “negligible”/trivial problem for global temperature estimates (as several studies had claimed). In Papers 2 and 3, we showed that the current automated homogenization algorithms used for “removing” urbanization bias don’t work correctly, and often introduce as much bias as they remove! In Paper 3, we also showed that, in the US, urbanization bias had introduced a warming bias of ~0.5°C/century into the fully urban stations (10% of stations) relative to the fully rural stations (22% of stations), and that for the fully rural stations temperatures in the 1930s/1940s were roughly comparable to the present:
http://s11.postimg.org/3kiuuksgz/US_TOBS_UHI.jpg
We also showed that the majority of stations with reasonably long and complete records are not fully rural, but are urbanized to some extent:
http://s7.postimg.org/uay0chswr/x_GHCN_urban_ratio.jpg
Because the 20th century was accompanied by a dramatic increase in urbanization, this means that relying on urbanized stations for studying long-term (e.g., greater than 50 years) temperature trends will introduce urbanization bias into your estimates.
http://s8.postimg.org/a8w35i185/schematic_UHI.jpg
The main problem in quantifying the exact magnitude of urbanization bias in the standard estimates is that there are very few cases where apples vs. apples comparisons can be made. Because rural stations are by their very nature quite far from civilisation, they are much harder to staff and maintain over a long, continuous period than urban stations. With the invention of automated weather stations, this is no longer as big a problem, but these only started being installed in the late 20th century.
In this new paper, we were able to provide another apples vs. apples comparison for China during the 1951-1990 period as there are more than 400 GHCN stations during this period, and they are split reasonably evenly between fully rural, intermediate and fully urban:
http://s28.postimg.org/6fglzr0l9/China_map_all_stations.png
http://s8.postimg.org/moafga111/China_UHI_study.png
However, it is still not clear exactly how much urbanization bias has affected the standard estimates. In Section 4.1, we compared a simple gridded average of all Northern Hemisphere GHCN stations (either homogenized or unhomogenized) with all the standard Northern Hemisphere, land-only estimates, and as you had noted earlier (and has been widely noted), they all imply basically the same trends:
http://s18.postimg.org/yzg0za26h/Fig_20_other_NH_estimates.jpg
When we compared our new mostly-rural Northern Hemisphere land-only reconstruction to the red curve in the above graph, we obtained the following:
http://s11.postimg.org/kdj63v0v7/Fig_21_comparison_of_ours_vs_all_GHCN.jpg
Note that in your original comment on our paper your comparison seems to have been with HadCRUT4 (global?). Our new reconstruction is for Northern Hemisphere land surface air temperatures, so a more appropriate comparison would have been with one of the Northern Hemisphere land-only estimates, e.g., CRUTEM4 (N. Hem.)
It is possible that most of the differences between these two estimates are “urbanization bias”, since our new estimate should be relatively unaffected by urbanization bias, but there are other factors contributing to the differences, e.g., the extra warming introduced by NOAA’s automated homogenization, and the poor station coverage in the tropics in our reconstruction. So, I used the term “significant urbanization bias” – as in a non-trivial amount – as I do not think we have determined the exact amount.
In some of the other examples you quoted, the term would have been used in a similar context, e.g.,

I assumed that when you say something like
A close inspection of the individual satellite measurements in Figure 2 reveals that there are subtle, but significant differences in the trends between cycles.
that you have done more than squinted at them from across the room …

Here’s Figure 2:
http://s10.postimg.org/extg5whop/Fig_02_satellite_era_raw.jpg
The full paragraph you quoted from was,
With this in mind, some more limitations of the current data are apparent from Fig. 2. A close inspection of the individual satellite measurements in Fig. 2 reveals that there are subtle, but significant differences in the trends between cycles. For instance, while the ACRIM3 and TIM satellites both are currently reporting similar values for Total Solar Irradiance, when TIM was first launched, it was recording a lower Total Solar Irradiance than ACRIM3. When we combine this with the fact that each of the satellites implies a different absolute Total Solar Irradiance value it is unclear exactly how to stitch the different satellite results together.

The point we were making here was to simply draw the reader’s attention to the fact that each of the satellites implies slightly different trends and the final composite you get will depend on which datasets you use. In other words, there is considerable subjectivity in how the TSI composites are constructed. We felt this was an important point as many people who are unfamiliar with how the TSI composites are constructed typically assume it’s a straightforward process and that it’s just a matter of putting the satellite up there and pressing “record”!
We felt that a mere visual inspection of the ACRIM3 vs. TIM example would be sufficient for most readers to illustrate that point. Again, in this context, “subtle, but significant” means the differences are “non-trivial” and this has significance for the construction of a TSI composite, and for users of those composites.

And when you say:
In an earlier study, Chapman et al. (2001) found the facular-to- spot area ratio varied significantly over their period of analysis with the ratio increasing during cycle 22 and decreasing in the first part of cycle 23.
I assume that Chapman has done more than squinted at the graphs.
And I assume the same for the following:
Livingston et al. (2012) suggested that the reason for this change in the relationship could be that the average magnetic field strength had significantly decreased during Solar Cycle 23 (Penn & Livingston, 2006).
When I read that, I assume that they have tested the change in field strength and found it to be significant.

Finally, consider this claim:
When Scafetta & Willson updated the Hoyt & Schatten reconstruction using the ACRIM composite (Scafetta & Willson, 2014 – see their Appendix B), they noted an apparently strong correlation with Manley’s Central England Temperature (CET) dataset (Manley, 1974), as updated by Parker & Horton (2005).
Again I say, you are merely parroting these claims of significance and “strong correlation” without checking them. I suggested that you re-examine Scafetta’s claim. Have you done so?

Section 2 was a literature review. We were summarising the key claims and findings of the main papers of relevance. I definitely agree that it could be useful to the community if researchers with a statistical background went back and re-examined many of these claims of significance and strong correlation. But, the purpose of our literature review was to let the reader know what is being said in the literature, rather than just reviewing the bits we “liked”… 😉
My own feeling is that many of the studies both purporting to have found and purporting to have disproved a strong solar influence on climate are based on poor statistical analysis…
As you point out in your October 5, 2015 at 4:01 pm comment,

Ronan, let me offer an illustration of how wrong standard statistical analysis is in the world of climate.

I agree with you. (And that is a very clear & illustrative example, by the way!)
Willis Eschenbach October 5, 2015 at 11:02 pm

This is particularly clear when you say:
Still, if the proposed link between cosmic rays and cloud cover transpires to be insignificant, this does not preclude the possibility of other mechanisms by which cosmic rays could significantly influence the climate.
Are we to assume that you plan to just look at the link and declare it “insignificant”? Because (perhaps wrongly) I assume that you would have TESTED the cosmic ray/cloud cover relationship to determine it is “insignificant”.

I think Willie did some studies looking at the cosmic ray/cloud cover theory at the time Svensmark first proposed it. But, in my own opinion, the alleged evidence in support of this theory seems at best ambiguous. It’s quite like your comments about studies looking for a strong relationship to the ~11 year solar cycle. If there is a link, it doesn’t seem to be an obvious one. Having read the literature and looked briefly at the data, it’s not a research avenue that seems to me like it would be particularly fruitful, especially given the limited data. So, at the moment, this is not a high research priority for me. But, apparently there are quite a few researchers who are very interested in it. So, we felt it was important to briefly summarise the ongoing debate.

Finally, take a look at your Figure 29, or your Figure 30 … where is the adjustment for autocorrelation? You keep saying you understand it … so why, oh why, don’t you USE IT?

We were writing this paper with a general scientific audience in mind, and therefore tried to minimise the use of more technical statistics for the benefit of those readers without much statistical background. In the above case, we felt that the more visually-based discussion (in terms of residuals) was already sufficient to establish that the “upper bound” for a contribution from CO2 was much smaller than had previously been assumed. Yes, of course, correcting for autocorrelation would reduce that upper bound even further! But the visual analysis based on residuals already established that for Model 4 (i.e., Figure 29), CO2 could only explain at most ~0.12°C of the warming over the 1881-2014 period, and implied a “climate sensitivity” to a doubling of CO2 of at most ~0.44°C. We felt this would already be quite a shocking result for many readers.
For the benefit of the others who haven’t looked at the paper yet, the two figures Willis is referring to are when we tried to:
(a) see if increases in CO2 (or GHG) could explain any of the residuals after we accounted for solar variability (Figure 29):
http://s28.postimg.org/qllj8anjx/Fig_29_TSI_residuals_fitting.jpg
(b) see if we could explain our new Northern Hemisphere reconstruction trends in terms of increases in CO2 (Figure 30):
http://s4.postimg.org/72hnal1il/Fig_30_CO2_fitting.jpg
Yes, as Willis points out, correcting for the autocorrelation problem would make the (already poor) fit of the residuals to CO2 concentrations even worse!
Compare this to the case where we only attempted to fit our reconstruction in terms of the updated Hoyt & Schatten TSI reconstruction:
http://s10.postimg.org/lwqi8f2x5/Fig_28_TSI_fitting.jpg
Below are our four attempts at fitting our Northern Hemisphere temperature reconstruction in terms of CO2 and solar (using Hoyt & Schatten TSI):
http://s27.postimg.org/cbqaqe7r7/Fig_31_model_comparison.jpg
In our opinion, the “TSI-only” model actually seems to explain much of the trends (residuals are relatively small). CO2 doesn’t seem to majorly improve the models, and as Willis points out, much of the apparent “fit” to CO2 could be just a consequence of autocorrelation.
But, since many people strongly believe that CO2 has played a major role on Northern Hemisphere temperature trends since the 19th century, we tried to estimate an upper bound for the potential role of CO2. This was already quite low as it was (e.g., compared to the IPCC reports), without correcting for autocorrelation!
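For readers wondering what “correcting for autocorrelation” does in practice, here is a sketch of the standard AR(1) effective-sample-size adjustment; the lag-1 autocorrelation of 0.8 is an assumed placeholder, not a value from the paper:

```python
import math

# Quenouille-style effective sample size for a series with lag-1
# autocorrelation r1: the information content of N correlated points
# is roughly that of N * (1 - r1) / (1 + r1) independent points.
def n_effective(n, r1):
    return n * (1.0 - r1) / (1.0 + r1)

n, r1 = 134, 0.8       # ~1881-2014 annual points; r1 = 0.8 is assumed
n_eff = n_effective(n, r1)
print(f"N = {n}, N_eff ~ {n_eff:.0f}")                             # ~15
print(f"standard errors inflate by ~{math.sqrt(n / n_eff):.1f}x")  # ~3x
```

With standard errors roughly tripled, a fit that looked marginal becomes clearly insignificant, which is why the correction only strengthens the low upper bound on CO2 discussed above.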
Of course, it should be stressed that the four models above assume that the updated Hoyt & Schatten TSI reconstruction is an accurate reconstruction. If you read Section 2 of our paper, you’ll probably agree with us that this has not yet been satisfactorily resolved. However, as we discuss in Section 5.1, there is considerable justification for arguing that the Hoyt & Schatten TSI reconstruction is at least as plausible as the Wang et al., 2005 reconstruction that the IPCC relied on for their latest reports.

Editor
Reply to  Ronan Connolly
October 9, 2015 2:25 pm

+1, thanks Ronan.

richard verney
Reply to  Ronan Connolly
October 10, 2015 3:15 am

Ronan
There is a lot to consider, thanks for posting this. Hopefully, Willis will revert with some further comments, but perhaps he is no longer following this particular thread.
Recently (the last 2 or 3 months) there have been many interesting articles on temperature data sets, problems with measurement, corrections, models etc. Whilst this would involve a lot of work for the moderators, I consider that it would be useful if all of these were put in some reference folder (for ease of reference) such as the folders on sea ice. Perhaps, they could be a sub folder of the global temperature folder on the reference page.
It seems to me that your recent post could easily be lost amongst the noise of the vast amount of comments that recent articles have generated. Perhaps it should be re-submitted as a thread on its own so that it can be discussed and commented upon as warranted by the further detail that you have provided. Just a thought.
I need to think about it before commenting on the merits of the information that you have provided.

rishrac
October 8, 2015 9:56 am

What happens to the math that states ALL the warming is due to CO2 alone? Isn’t that the reason for the alarm? A very high confidence level? What happens to all the explanations of the science of how all that heat is retained? Was that science or guesswork? Now it’s only slightly more than half? So is that a 0.25°C increase in temperature from CO2 since 1950, or before that? Add in all the other factors: does CAGW have any idea what they are talking about?

Pete J.
October 8, 2015 9:59 am

A very persuasive summary that clearly shows the deficiencies in the “consensus” models. The differences would be even more apparent if all the graph ordinates were aligned so that the anomaly data were all zeroed at 1950. That way it would be even more apparent precisely how hindcasts compare with “projections.”

Peter Sable
October 8, 2015 10:10 am

This just turned into one of my favorite sendups on spurious correlation (or the lack thereof):
http://wattsupwiththat.com/2015/10/08/solar-update-october-2015/#comment-2044770

Hugs
Reply to  Antero Järvinen
October 9, 2015 1:16 pm

We believe that the greatest changes in temperature are due to the change in the proportionality coefficient G, i.e. the relation between cloudiness and water vapor concentration. The temperature increase of the last century can be explained almost completely by the increase of solar activity and the decrease of cosmic ray flux together (0.47 K [31]), the destruction of rainforests (about 0.3 K), the increase of greenhouse gas concentration (about 0.1 K) and the increase of aerosol (about –0.06 K) [1]. The sum of these contributions is 0.81 K, which is close to the observed temperature increase [34]. In the end, we conclude that maybe one reason for the long history of life on our globe is the negative feedback of the climate for the global temperature.

Well, Mr Järvinen has found a sufficiently sceptical presentation, though I assume the citation count has so far been quite low.

richard verney
Reply to  Antero Järvinen
October 9, 2015 1:19 pm

Interesting; it appears well worth a read. Many have posited that small changes in cloudiness, patterns of cloudiness and/or humidity could explain the 20th century temperature record, so seeing a study on this is useful.

Rex Forcer
October 8, 2015 10:18 am

Having read the paper, I think there are a couple of errors in the article above.
The sensitivity to +400 ppmv of CO2 is calculated as 0.44 deg C or 2.52 deg C (depending upon whether the starting assumption is that TSI or CO2 is the dominant forcing).
However, for a doubling of CO2 from pre-industrial times (+280 ppmv), the paper states that the result is either 0.31 or 1.76 deg C. It would be helpful if you were to correct the article to include these numbers, not the numbers for +400 ppmv.
The significance of this is that both numbers are below the magic 2 deg C “safe limit” asserted by the IPCC. This renders the Paris summit irrelevant; we don’t need a global agreement; we can stop spending on silly, intermittent renewables, cut climate research budgets dramatically, and allocate that money to more productive uses.
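The two pairs of numbers behave as if the paper’s fitted response were linear in added CO2 concentration; a quick consistency check (a sketch inferring the scaling from the quoted values alone):

```python
# The quoted pairs scale as 280/400, i.e. linearly in added ppmv:
for dT_plus_400 in (0.44, 2.52):    # solar-dominant / CO2-dominant fits
    dT_plus_280 = dT_plus_400 * 280.0 / 400.0
    print(f"{dT_plus_400:.2f} C per +400 ppmv -> "
          f"{dT_plus_280:.2f} C per +280 ppmv")
# Prints 0.31 and 1.76, matching the doubling-from-280-ppmv numbers above.
```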

John Whitman
October 8, 2015 10:42 am

In the lead post Andy May wrote,
“Conclusions
So, SCC15 suggests a maximum ECS for a doubling of CO2 is 0.44°C. They also suggest that of the 0.85°C warming since the late-19th Century only 0.12°C is due to anthropogenic effects, at least in the Northern Hemisphere where we have the best data. This is also a maximum anthropogenic effect since we are ignoring many other factors such as varying albedo (cloud cover, ice cover, etc.) and ocean heat transport cycles.
While the correlation between SCC15’s new temperature reconstruction and the Hoyt and Schatten TSI reconstruction is very good, the exact mechanism of how TSI variations affect the Earth’s climate is not known. SCC15 discusses two options, one is ocean circulation of heat and the other is the transport of heat between the Troposphere and the Stratosphere. Probably both of these mechanisms play some role in our changing climate.”

If one has an ‘a priori’ premise that burning fossil fuels causes significant and dangerous warming then this SCC15 paper is anathema.
Where is Naomi Oreskes screaming frantically that the very idea of a 0.44°C max ECS for a doubling of CO2 must be evilly constructed by shills for the fossil fuel (oil, coal, gas) companies?
John

Marcus
Reply to  John Whitman
October 8, 2015 2:30 pm

I’m still waiting for my FF money…did you get yours yet ????

John Whitman
Reply to  John Whitman
October 8, 2015 2:38 pm

Marcus on October 8, 2015 at 2:30 pm
– – – – – –
Parody time . . . maybe the FF industry will send me some money that will arrive in black stealth helicopters at midnight on the new moon to avoid detection by Naomi Oreskes . . . end of parody time.
John

climatologist
October 8, 2015 10:45 am

The sad thing about global mean temperature is that over large parts of the Southern Hemisphere there were no, *no*, reliable observations until just before the IGY. Consequently one cannot say anything about the global mean temperature before that date.

October 8, 2015 11:53 am

Naomi’s moronic theory works thusly:
IF
Evidence against AGW,
THEN
Evidence must be from evil anti-AGW companies.

JJB MKI
October 8, 2015 12:05 pm

Figure 27 is a bit of a blinder! I would like to have some more information on how TSI is modelled / reconstructed in SCC15 though. Excuse my ignorance of the methods, but if they rely on proxies that are themselves influenced by surface temperature then could we be looking at some circular reasoning here?

willhaas
October 8, 2015 12:13 pm

There is no real evidence that CO2 has any effect on climate. There is no such evidence in the paleoclimate record. There is evidence that warmer temperatures cause more CO2 to enter the atmosphere, but there is no evidence that this additional CO2 causes any more warming. If additional greenhouse gases caused additional warming then the primary culprit would have to be H2O, which depends upon the warming of just the surfaces of bodies of water and not their volume, but such is not part of the AGW conjecture.
The AGW theory is that adding CO2 to the atmosphere causes an increase in its radiant thermal insulation properties, causing restrictions in heat flow which in turn cause warming at the Earth’s surface and the lower atmosphere. In itself the effect is small because we are talking about small changes in the CO2 content of the atmosphere, and CO2 comprises only about 0.04% of the dry atmosphere; and the atmosphere is not dry. Actually H2O, which averages around 2%, is the primary greenhouse gas. The AGW conjecture is that the warming causes more H2O to enter the atmosphere, which further increases the radiant thermal insulation properties of the atmosphere and by doing so amplifies the effect of CO2 on climate. At first this sounds very plausible. This is where the AGW conjecture ends, but that is not all that must happen if CO2 actually causes any warming at all.
Besides being a greenhouse gas, H2O is also a primary coolant in the Earth’s atmosphere, transferring heat energy from the Earth’s surface, which is mostly H2O, to where clouds form via the heat of vaporization. More heat energy is moved by H2O via phase change than by both convection and LWIR absorption band radiation combined. More H2O means that more heat energy gets moved, which provides a negative feedback to any CO2-based warming that might occur. Then there is the issue of clouds. More H2O means more clouds. Clouds not only reflect incoming solar radiation but they radiate to space much more efficiently than the clear atmosphere they replace. Clouds provide another negative feedback. Then there is the issue of the upper atmosphere, which cools rather than warms. The cooling reduces the amount of H2O up there, which decreases any greenhouse gas effects that CO2 might have up there. In total H2O provides negative feedbacks, which must be the case because negative feedback systems are inherently stable, as the Earth’s climate has been for at least the past 500 million years, long enough for life to evolve. We are here. The wet lapse rate being smaller than the dry lapse rate is further evidence of H2O’s cooling effects.
A real greenhouse does not stay warm because of the heat trapping effects of greenhouse gases. A real greenhouse stays warm because the glass reduces cooling by convection. This is a convective greenhouse effect. So too on Earth..The surface of the Earth is 33 degrees C warmer than it would be without an atmosphere because gravity limits cooling by convection. This convective greenhouse effect is observed on all planets in the solar system with thick atmospheres and it has nothing to do with the LWIR absorption properties of greenhouse gases. the convective greenhouse effect is calculated from first principals and it accounts for all 33 degrees C. There is no room for an additional radiant greenhouse effect. Our sister planet Venus with an atmosphere that is more than 90 times more massive then Earth’s and which is more than 96% CO2 shows no evidence of an additional radiant greenhouse effect. The high temperatures on the surface of Venus can all be explained by the planet’s proximity to the sun and its very dense atmosphere. The radiant greenhouse effect of the AGW conjecture has never been observed. If CO2 did affect climate then one would expect that the increase in CO2 over the past 30 years would have caused an increase in the natural lapse rate in the troposphere but that has not happened. Considering how the natural lapse rate has changed as a function of an increase in CO2, the climate sensitivity of CO2 must equal 0.0.
The AGW conjecture talks about CO2 absorbing IR photons and then re radiating them out in all directions. According to this, then CO2 does not retain any of the IR heat energy it absorbs so it cannot be heat trapping. What the AGW conjecture fails to mention is that typically between the time of absorption and radiation that same CO2 molecule, in the lower troposphere, undergoes roughly a billion physical interactions with other molecules, sharing heat related energy with interaction. Heat transfer by conduction and convection dominates over heat transfer by LWIR absorption band radiation in the troposphere which further renders CO2’s radiant greenhouse effect as a piece of fiction. Above the troposphere more CO2 enhances the efficiency of LWIR absorption band radiation to space so more CO2 must have a cooling effect.

Bernard Lodge
Reply to  willhaas
October 8, 2015 1:36 pm

Willhaas,
+1000
In your second paragraph, you state …
“The AGW theory is that adding CO2 to the atmosphere causes an increase in its radiant thermal insulation properties causing restrictions in heat flow which in turn cause warming at the Earth’s surface and the lower atmosphere.”
You and I both know that “radiant thermal insulation properties” is an oxymoron. If something is radiating, it is not insulating!
CO2 is a radiative gas that emits LWIR in all directions, including upwards into space. Forget the CAGW obsession with downward emissions for a moment – if you increase the atmospheric CO2, you obviously increase upward emissions into space – which are lost forever. This means, as you say in your final sentence, the planet as a whole is COOLED by increasing atmospheric CO2, not warmed!
It really is that simple.

willhaas
Reply to  Bernard Lodge
October 8, 2015 9:12 pm

CO2 is not an energy source and it is not a reflective agent, so the only way it can affect climate is to act as some sort of insulator. A good absorber is also a good radiator, so whatever LWIR absorption band radiation it absorbs it will radiate away, if it can hold on to that energy long enough. The time between absorbing an LWIR absorption band photon and radiating that same photon of energy away is about 0.2 seconds for an undisturbed CO2 molecule. However, in the lower troposphere that same CO2 molecule will be disturbed. It will interact with neighboring molecules on the order of a billion times, sharing energy with each interaction. In the troposphere, conduction and convection dominate, so much so that the LWIR absorption and radiation properties of CO2 are insignificant. Because of its heat capacity, adding CO2 actually increases the lapse rate, which renders a cooling effect. And yes, in the stratosphere, where radiation dominates, adding CO2 causes the Earth to radiate more efficiently to space, which is a cooling effect. In terms of the back radiation concept, the effect of CO2’s LWIR absorption band radiation is insignificant in the troposphere and, according to the laws of thermodynamics, a cooler object cannot cause a warmer object to increase in temperature. At best it can only affect the rate of cooling, but the effect of the atmosphere on cooling is dominated by heat transfer via conduction, convection, and phase change of H2O via the heat of vaporization. The AGW conjecture is based on the idea that energy moves off the surface of the Earth only by LWIR absorption band radiation, which is clearly false.

Hugs
Reply to  Bernard Lodge
October 9, 2015 1:23 pm

rgb says

LWIR from the surface is immediately absorbed by CO2 on its way to infinity. It is immediately transferred by collisions to O2 and N2, nearly always before the CO2 re-emits the same quantum of energy it absorbed, effectively maintaining the temperature of the air to correspond with that of the ground so that it stays close to in balance near the surface

I’m not worthy of agreeing, but I must say it pays to listen to him.

rishrac
Reply to  willhaas
October 8, 2015 2:43 pm

Both Venus and Mars had oceans of water which are now gone. Neither planet has a global magnetic field; Mars does have patches of magnetic field, but nothing planet-wide.
As part of the ongoing effort to prove AGW is real, a study was done comparing the incoming watts per square metre to the outgoing watts per square metre. The results showed a net retention. Over the last 20 years that is an enormous amount of heat. Has any study been done to refute that? No, I haven’t seen one. Something is wrong: either there is just an incredible amount of heat somewhere, or the AGW people lied. Take into consideration that the amount retained should be increasing year after year as more CO2 is added. Anybody remember the tipping point where runaway global warming occurs? Not happening, is it?

willhaas
Reply to  rishrac
October 8, 2015 9:29 pm

Over at least the past 500 million years, the Earth’s climate has been stable enough for life to evolve. We are here. In all that time CO2 has at times been more than 10 times what it is today, yet no tipping point was ever encountered. In terms of the radiant greenhouse effect, the primary greenhouse gas in the Earth’s atmosphere is H2O, and it provides negative feedbacks to changes in other greenhouse gases so as to mitigate any effect they might have on climate. The feedbacks have to be negative for our climate to have been stable over at least the past 500 million years.
The temperature at one bar in the atmosphere of Venus, compared to the temperature at one bar in the atmosphere of the Earth, can be explained entirely by each planet’s proximity to the sun. The more than 96% CO2 on Venus, compared to Earth’s 0.04%, does not provide a measurable radiant greenhouse effect. This is consistent with the idea that the climate sensitivity of CO2 equals 0.0.

rishrac
Reply to  willhaas
October 9, 2015 6:38 pm

When this debate, if you want to call it that, started, I thought CO2 levels were approaching 8, 9, 10%. Then I found out it was 0.039%. Are you kidding me? I asked a chemist friend of mine about this; his reaction was “we’re stuffing the atmosphere with CO2”. Then I asked him, “how much?” He didn’t know. I told him 0.039%. The stunned look and silence spoke more than words. At these levels climate sensitivity is effectively 0.

Reply to  rishrac
October 9, 2015 9:06 pm

rishrac,
You’d better get back to your chemist friend and tell him that you misled him with old information.
It’s no longer 0.039%, it’s now increased all the way to 0.04%. 🙂

October 8, 2015 12:20 pm

Pretty cool, but this does not prove that TSI dominates climate. It does, however, suggest that Bindoff et al. 2013 might have selected the wrong TSI reconstruction and, perhaps, the wrong temperature record.
I like the modesty of your conclusion. “Might”, “maybe”, “perhaps” and such are my favorite words in this debate. With competing models on file we have competing anticipations (predictions, scenarios, expectations, calculated results, etc) of the future, and we ought to have a better idea 20 years from now of whether any of them are trustworthy for the rest of the century.

Tom in Florida
October 8, 2015 12:43 pm

It really doesn’t matter any more. The Russians slipped missiles into Iran under the cover that they were going to Syria. Iran will use them against Israel within 3 years tipped with nukes that the Obama backed deal let them have. So who cares whether the “average global temperature” varies a few degrees whatever the cause. Now where did I leave my Dad’s 1950’s bomb shelter plans?

Richard Mallett
Reply to  Tom in Florida
October 8, 2015 1:51 pm

… and if Iran uses their nuclear missiles against Israel, Israel will use their nuclear missiles at Dimona against Iran; and in Florida, you can sit back and watch it all on TV.

u.k.(us)
Reply to  Tom in Florida
October 8, 2015 2:00 pm

Nukes ??
Come on, the retaliation kinda kills all the fun.

u.k.(us)
Reply to  u.k.(us)
October 8, 2015 2:32 pm

Can you imagine spending 20 years in a missile silo, or on a submarine, just waiting for the “go” message.
Somebody’s gotta do it.

Marcus
Reply to  u.k.(us)
October 8, 2015 2:36 pm

You’re talking about a cult that WANTS to die !!!!! ” Mutually Assured Destruction ” means nothing to them !!!! 72 year old virgins are better than goats ya know

richard verney
Reply to  u.k.(us)
October 9, 2015 1:46 am

But you fail to take into account that it is the religion of peace, or so our political masters keep telling us. Do you think that empirical, observational, ground-based evidence tells a different story? Does that sound like a familiar story in the cAGW debate?

Hugs
Reply to  u.k.(us)
October 9, 2015 1:32 pm

You’re talking about a cult that WANTS to die !!!!! ” Mutually Assured Destruction ” means nothing to them !!!! 72 year old virgins are better than goats ya know

I’m sure this is on topic, but I think Iran is a different bird than the ISIS beard boys.
And, every time somebody worries over Israel being nuked, I wonder what the biggest lesson of WWII was. It was that the more Jews Hitler killed, the more arms Jews collected after the war. You can’t beat Israel in preparedness for war. Now if there is somebody who could end up being nuked, it is not Israel, I tell you.
(Godwin)

John Whitman
October 8, 2015 2:32 pm

Following quoted from SCC15
“Acknowledgements,
. . .
All of the research in this collaborative paper was carried out during the authors’ free time and at their own expense, and none of the authors received any funding for this research.”

NSF, NOAA and NASA gave Shukla’s IGES tens of millions of dollars in gov’t funds for a minimal amount of minimal research. Yet NSF, NOAA and NASA gave zero for Soon, Connolly and Connolly to do some unique solar-focused research.
John

Marcus
Reply to  John Whitman
October 8, 2015 2:39 pm

Can you imagine if the funding was …GASP… equal !!!! This discussion would have ended years ago and we’d be worrying about ” Global Cooling ” !!!!

Reply to  Marcus
October 8, 2015 3:12 pm

Marcus,
Or, we would be worrying about things that might really be a problem:
http://www.express.co.uk/news/science/610777/NASA-asteroid-warning-86666-near-miss-planet-Earth-48-hours

Editor
Reply to  John Whitman
October 8, 2015 3:50 pm

!!!

Peter Sable
October 8, 2015 3:00 pm

BTW the original paper is definitely worth reading. The first half of the paper is full of information about all the problems with historical temperature estimates and historical TSI estimates. There is a really good discussion of this in the paper; you get a very good sense that we just don’t have enough quality data to make firm conclusions about anything.
Unfortunately they should have stopped there… the human instinct to always find a cause, whether one exists or not, was apparently irresistible.
Peter

bit chilly
Reply to  Peter Sable
October 9, 2015 2:51 am

I agree, Peter. The actual discussion within the paper is as good a commentary on the problems surrounding the science as can be seen anywhere. As for the conclusion, I am quite happy for people to have their own opinions, as long as the UN and national governments do not waste trillions of taxpayer dollars as a result.

John Whitman
October 8, 2015 3:47 pm

Let’s take a vote. Who agrees or disagrees with my guess below?
Based on her book ‘Merchants of Doubt’ I can guess what a wildly frantic Naomi Oreskes would say about SCC15 being published without any funding outside of the authors’ own funds. She would say: but but but, years ago one of them got some funding from a source she disapproved of to do a previous research paper, and since they didn’t disclose that previous funding in this paper, the paper and its authors are immoral and evil.
Well . . . .
John

Reply to  John Whitman
October 8, 2015 3:52 pm

John,
I don’t think the funding matters, only the validity of the science.

Evan Jones
Editor
Reply to  lsvalgaard
October 9, 2015 4:22 am

Bingo, Dr. Svalgaard. (For Anthony’s study on climate stations, we rely on elbow grease.)

u.k.(us)
Reply to  John Whitman
October 8, 2015 3:53 pm

Evil

u.k.(us)
Reply to  u.k.(us)
October 8, 2015 3:55 pm

Just kidding.

John Whitman
Reply to  John Whitman
October 8, 2015 4:07 pm

lsvalgaard on October 8, 2015 at 3:52 pm
John,
I don’t think the funding matters, only the validity of the science.

Leif,
You are right.
And I am still deeply concerned that, to people like Naomi Oreskes, Michael Mann and the ‘team’ of IPCC AR3/AR4/climategate(s) fame, it deeply matters who funded any research/paper significantly critical of the warming hypothesis/theory.
John

Editor
Reply to  John Whitman
October 8, 2015 4:38 pm

+1

October 8, 2015 11:49 pm

Allan MacRae wrote:
National Post – reader’s comment
5 October 2010
Excerpt from my 2010 post below:
“Climate sensitivity (to CO2) will be demonstrated to actually be an insignificant 0.3-0.5C for a hypothetical doubling of atmospheric CO2.”
I suggest 0.44C is between 0.3C and 0.5C.
My guess today is the correct value of ECS is even lower, if ECS exists at all in the practical sense.
As we wrote with confidence in 2002:
“Climate science does not support the theory of catastrophic human-made global warming – the alleged warming crisis does not exist.”
Regards, Allan
http://www.pressreader.com/canada/national-post-latest-edition/20101005/282694748507765/TextView
I have researched the field of climate science and manmade global warming since 1985.
I have found NO EVIDENCE to support the hypothesis of dangerous manmade global warming, and credible evidence that any such manmade warming is insignificant*, and the modest global warming that has occurred since 1975 is almost entirely natural and cyclical.
I believe Earth is now ending a 30-year warming half-cycle, probably associated with the Pacific (multi-) Decadal Oscillation, and is now entering a 30-year cooling cycle.
We’ll see who is correct.
Notes:
* Climate sensitivity (to CO2) will be demonstrated to actually be an insignificant 0.3-0.5C for a hypothetical doubling of atmospheric CO2.
IPCC computer models typically assume a value about ten times higher to produce their very-scary global warming scenarios.
How can this be?
Doubling of CO2 would theoretically cause global warming of 1 degree C, in the absence of feedback amplifiers. The IPCC assumes, without any evidence, that these feedbacks are strongly positive. There is significant evidence that these feedbacks are, in fact, significantly negative. See work by Richard Lindzen of MIT, Roy Spencer, and others.
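To make the feedback arithmetic explicit: the standard linear-feedback relation is ECS = dT0 / (1 - f), where dT0 is the no-feedback warming per doubling (~1 C) and f is the net feedback fraction. A minimal sketch in Python; the function name and the two example values of f are illustrative assumptions, not figures from SCC15 or the IPCC:

# Standard linear-feedback relation: ECS = dT0 / (1 - f), where dT0 is
# the no-feedback warming per CO2 doubling and f is the net feedback
# fraction (positive amplifies, negative damps). Illustrative only.
def ecs_with_feedback(dT0=1.0, f=0.0):
    """Equilibrium warming per CO2 doubling for a feedback fraction f (f < 1)."""
    return dT0 / (1.0 - f)

print(ecs_with_feedback(f=0.65))   # ~2.9 C: strongly positive net feedback
print(ecs_with_feedback(f=-1.3))   # ~0.43 C: net-negative feedback, near SCC15's 0.44 C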
PPS:
There is a significant warming bias of just under 0.1C per decade in the Surface Temperature data of both Hadley CRU and NASA GISS, and this data cannot be used for any serious scientific research.
The ClimateGate emails further reveal the shoddy quality of the CRU dataset, and the lack of competence and ethics of those who are involved in its preparation and dissemination.
The NASA GISS dataset, if anything, is less credible than CRU.

Reply to  Allan MacRae
October 10, 2015 6:42 pm

To be clear:
There is increasing evidence that the sensitivity of climate to increased atmospheric CO2 is so low as to be practically inconsequential.
Regards, Allan

ren
October 9, 2015 12:22 am

Cosmic rays are high.
http://oi60.tinypic.com/n5kgt3.jpg
Location of the polar vortex is bad for northeastern North America.
http://oi57.tinypic.com/2nhko3r.jpg

Hugs
Reply to  ren
October 9, 2015 1:40 pm

Beautiful Auroras photographed by amateurs (well, pretty professional they look).
More here (look at pictures at Oct 8)
https://www.taivaanvahti.fi/observations/browse/pics/1042017/observation_start_time

October 9, 2015 2:58 am

This suggests the effect of CO2 doubling is less than 0.03 deg C rather than the ridiculous 6 deg C.
http://wattsupwiththat.com/2010/03/08/the-logarithmic-effect-of-carbon-dioxide/
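For anyone who wants to check this kind of arithmetic, the logarithmic relation gives the warming implied by any concentration change once a per-doubling sensitivity is assumed. A minimal sketch (the 0.44 C sensitivity is SCC15’s estimate; the 270 ppm baseline and 400 ppm current value are the round numbers used in the reply below):

import math

def warming(C, C0, ecs_per_doubling):
    """Temperature change for a CO2 rise from C0 to C (ppm), assuming
    the forcing scales with log2 of the concentration ratio."""
    return ecs_per_doubling * math.log2(C / C0)

# SCC15's 0.44 C per doubling, applied to a 270 -> 400 ppm rise:
print(warming(400.0, 270.0, 0.44))   # ~0.25 C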

rishrac
Reply to  Tony
October 9, 2015 7:09 am

That would be about right. I’ve calculated that CO2 adds about 3% of the total warming increase. Since the IPCC uses 270 ppm as the base, we’ve added about 48% more CO2, which now stands at roughly 400 ppm. Given the increase that they say is about 0.5 C, that comes out to a 0.015 C increase in temperature, which is roughly half of 0.03 C. My calculations were not based on the log of CO2, but on dismissing positive feedback. In my mind, latent heat from water vapor doesn’t make its way back into the atmosphere; there is nothing for that heat to bounce off to be retained. It’s cold up there to start with, and there is more space per cubic metre; there is also something I consider an impedance mismatch, like cars on a freeway going from 2 lanes to 8. It violates the laws of thermodynamics. It comes down to whether more heat is retained or lost in the upper atmosphere. That’s why I keep bringing up the study about the incoming and outgoing watts per square metre. It wasn’t in balance when they published it (it was at least 100 W/m^2, times however many square metres there are on Earth, times the number of days, times the number of years), and it should have been increasing every year since they published it. That’s where they got the tipping point. So much CO2, so much retention, voilà! Tipping point! You can see it in the graphs and projections the IPCC makes. It is an enormous amount of heat. A little 0.01 C increase in temperature is ridiculous in light of that factor, hottest year or not. If AGW were true, nobody in their right mind would be questioning it; it would be a fact. It isn’t, and it didn’t happen. Other factors are certainly at work in the recent run-up. Scientifically, AGW is a dead theory; politically it’s a zombie theory that just won’t die.

Patrick
October 9, 2015 4:38 am

So, the fundamental question still remains, does the sun affect “climate” on this rock?

Tom in Florida
Reply to  Patrick
October 9, 2015 6:06 am

No the question is “do CHANGES in the Sun affect climate on this rock?”

Peter Sable
Reply to  Tom in Florida
October 9, 2015 4:12 pm

No the question is “do CHANGES in the Sun affect climate on this rock?”

More accurately, “do changes in the Sun affect the climate on this rock in a way that is distinguishable from dozens of other causes, within the time frame and accuracy of our current data set?”
Because if not all of those conditions are met, then you can’t see it statistically. Even if there is a physical effect, it’s not currently visible. This is where the data all seem to point: it’s not currently visible, at least without a lot of dubious post-hoc tweaking of the data, which this paper both demonstrates exists in the existing literature and then, unfortunately, goes on to do itself…
Peter

October 9, 2015 5:20 am

Can I say something?
Medium confidence that CO2 sensitivity is not above 6 K… Medium…
6 K is approximately what the entire atmospheric CO2 content contributes to the greenhouse effect on Earth
(6 K = 20% of the total Earth greenhouse effect of around 30 K).
How nice that the IPCC is “medium confident” that ONE CO2 doubling does not give more greenhouse effect than the entire atmospheric CO2 content 🙂
( PS: From models we know that the entire CO2 column should give around 9 times as much forcing as one CO2 doubling )

Mike McMillan
October 9, 2015 5:48 am

“…a major survey of the degree to which these [weather station] instrument housings were correctly placed and maintained in the United States was made by a group of 600-odd followers of the blog Climate Audit;
I didn’t know Climate Audit was duplicating our Surface Stations project.

October 9, 2015 6:00 am

Thanks Andy for writing this positive summary of our collaborative paper with Willie Soon which was recently published in ESR!
I’m just on lunch break and so don’t have time to respond to the various comments above at the moment. But, I will try and respond in more detail this evening (Greenwich Mean Time) and/or over the weekend.
One point that is probably worth mentioning (and has already been alluded to by several commenters) is that our analysis in Section 5 of the paper specifically focuses on the updated Hoyt & Schatten TSI reconstruction. This was one of the TSI reconstructions that the IPCC neglected for their latest CMIP5 hindcasts – mostly they just used Wang et al., 2005. So, since we had already compared our new mostly-rural Northern Hemisphere temperature reconstruction to the CMIP5 hindcasts in Section 4.5, we felt it would be useful to also compare it to the Hoyt & Schatten TSI reconstruction.
Obviously, this comparison with the Hoyt & Schatten reconstruction (which Andy summarised in this post) will appeal to many readers of this blog who have been arguing that solar variability is the primary driver of climate (and not CO2!). However, if you read Section 2 of our paper, you’ll probably agree with us that the debate over which (if any!) of the TSI reconstructions is most reliable has not yet been satisfactorily resolved. However, as we discuss in Section 5.1, there is considerable justification for arguing that the Hoyt & Schatten TSI reconstruction is at least as plausible as the Wang et al., 2005 reconstruction that the IPCC relied on for their latest reports.

Reply to  Ronan Connolly
October 9, 2015 6:10 am

there is considerable justification for arguing that the Hoyt & Schatten TSI reconstruction is at least as plausible as the Wang et al., 2005 reconstruction that the IPCC relied on for their latest reports
Not even my old colleague Ken Schatten believes their old reconstruction is valid.

John Whitman
Reply to  lsvalgaard
October 9, 2015 8:15 am

Leif,
I’ve not run across Ken Schatten statements on their old reconstruction, which isn’t surprising since I’m not a solar scientist.
Do you have some pointers about where Ken Schatten has made statements in the literature or at conferences or on blogs about their old reconstruction or has he publically said anything about any updates /commentary made of works either supporting him or critiquing him?
John

Reply to  John Whitman
October 9, 2015 9:19 am

Ken does not go around saying things like that. In discussions with me, he has said that the H&S TSI was a mistake. You will have to take my word for that. But, perhaps the fact that Ken is a coauthor of http://www.leif.org/research/Reconstruction-of-Group-Number-1610-2015.pdf lends some weight to his lack of clinging to old mistakes.

Reply to  lsvalgaard
October 9, 2015 9:47 am

John, it may be of interest to actually read how H&S constructed their TSI series:
http://www.leif.org/EOS/H-and-S-TSI-Paper.pdf
then you can judge for yourself how much credence you would give them.

John Whitman
Reply to  Ronan Connolly
October 9, 2015 11:18 am

lsvalgaard on October 9, 2015 at 9:19 am
&
lsvalgaard on October 9, 2015 at 9:47 am

Leif,
I think one of the highlighted papers focused on by SCC15 is the Scafetta & Willson, 2014 update to the Hoyt & Schatten 1993 dataset. So, in Schatten’s private discussions with you that you previously mentioned, did Schatten refer to his original Hoyt & Schatten 1993, or to the Scafetta & Willson 2014 update of H&S1993?
As to Schatten (of H&S1993) collaborating with you on the recent Reconstruction of Group Number project, doesn’t each piece of research stand on its own reality-based merits, independent of the other? That a former work associated with Schatten differs from a later work associated with him doesn’t necessarily mean the latter has more reality-based merit. At least I wouldn’t think so; the reality-based merits of each one would seem to be the only thing of importance.
John

Pamela Gray
Reply to  John Whitman
October 10, 2015 9:45 am

I disagree. Papers remain in journals even if, at a later date, they are considered wrong. The only time papers are removed is if malfeasance is later shown. Research is a long-term endeavor. Mistaken theories from the past are generously represented in the literature. And there is no reason to believe that current articles are any less represented by theories that one day will also be shown to be mistaken. Which is why anyone engaging in scientific debate had better be schooled in research critique methods, else the phrase uneducated fool comes to mind.

John Whitman
Reply to  John Whitman
October 13, 2015 3:41 pm

Pamela,
The papers should stand on their own forever.
Papers thought wrong can still be resurrected if any subsequent knowledge shows reality supports them. That is science.
There is no statute of limitations on a paper thought wrong, it remains in the corpus forever.
John

Reply to  John Whitman
October 13, 2015 4:31 pm

The Scafetta and Willson ‘update’ of H&S is just adding data since 1998 so is not relevant for the long-term change. I think the H&S TSI is simply dead by now and should not be used for anything, unless, of course, you wish to use it to bolster a pet theory.

Pamela Gray
Reply to  John Whitman
October 13, 2015 6:56 pm

Leif, as an armchair amateur, I also think the H&S reconstruction, new and old, is dead in the water no matter how many more decades of data are appended to it. Yet it should remain in the literature as a paper trail of the scientific method, whereby new paradigms or data analyses replace older ones. You and the team developed a new reconstruction BECAUSE there was an archived hard-copy (original?) direct-observation paper trail (which you included in your early papers and presentations regarding this subject; loved it).
As an aside, unfortunately, temperature record paper trails will be a lot harder to find and examine. To the detriment of the science behind climate change.

Editor
Reply to  Ronan Connolly
October 9, 2015 2:53 pm

Ronan, I totally agree. No one has created a sound TSI reconstruction to date. All of them can be picked apart. Plus the data we have now is of dubious quality. But, for AR5 to claim high confidence when they ignored so many plausible high TSI variability reconstructions in their study is inexcusable. Hoyt and Schatten is only one of many as you point out in the paper.

Roger Clague
October 9, 2015 10:47 am

For the gas laws to apply, the temperature (T), pressure (p) and volume (V) must each have a single value at any one time.
The T and p of the atmosphere do not have a single value at any one time; they have a range of values at all times.
Therefore the gas laws cannot be applied to the atmosphere as a whole.

KTM
October 9, 2015 1:04 pm

They can claim “high certainty” all they want, but they are still trying (and failing badly) to predict the future of temperatures based on CO2 as the global control knob.
This spectacular failure of the models to predict temperatures seems to be the “new normal” for climate and weather.
http://realclimatescience.com/2015/10/final-joaquin-scorecard/
When climate and weather experts use a cone of prediction that sends a major hurricane slamming into the Eastern seaboard, the true confidence of that prediction should be reflected in the size of the cone. To miss so badly is an indelible stain on their credibility.

ren
October 9, 2015 11:46 pm
ulriclyons
October 10, 2015 8:52 am

This is all nonsense; surface temperatures do not follow the solar forcing. Cooling in the 1970s was due to a stronger solar signal increasing positive NAO/AO, which then cools the AMO (and Arctic) and increases La Nina, both of which increase continental-interior rainfall and cause further surface cooling. Much of the surface warming from 1995-2005 is the corollary of that, with declining solar forcing increasing negative NAO/AO, warming the AMO (and Arctic), and reducing continental-interior precipitation, further adding to the global surface mean temperature.
http://snag.gy/HxdKY.jpg

Pamela Gray
Reply to  ulriclyons
October 10, 2015 10:02 am

Ulric, come on.
First, let us hope you have not committed plagiarism. Your graph is either your own reconstruction, or it contains data from some other source/person without citation.
Second, if solar parameters force a trend in NAO/AO, it must be beyond the error bars of natural NAO/AO variation. Is it? And which solar parameter(s) are you talking about? Whose theory is it, and where are your links and citations? As it stands, CO2 forcing appears more plausible than your input to this discussion.
I do not just pick on you. I call to task anyone who speaks of a theory other than natural intrinsic variation, including Earth’s weather and climate pattern extremes that trend with Earth’s orbital mechanics around the Sun (because the Sun isn’t the thing that wobbles around us…duh).

ulriclyons
Reply to  Pamela Gray
October 10, 2015 10:43 am

“..if solar parameters forces a trend in NAO/AO, it must be beyond the error bars of natural NAO/AO variation.”
No, the natural variability of the NAO/AO is solar forced down to daily scales. The trend is just a construct.
http://cmatc.cma.gov.cn/www/ckeditor_upload/20141015/dd7087e3c34043e4b8970fc438b4c7c8.pdf
http://onlinelibrary.wiley.com/doi/10.1029/2003GL017360/abstract
http://www.leif.org/EOS/Tin_rev.pdf
Solar plasma data source: http://omniweb.gsfc.nasa.gov/form/dx1.html

ulriclyons
Reply to  Pamela Gray
October 10, 2015 10:48 am

“As it stands, CO2 forcing appears more plausible than your input to this discussion.”
I can’t buy that, because increased CO2 forcing will increase positive NAO/AO conditions, but the NAO/AO became increasingly negative from the mid 1990’s.
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch10s10-3-5-6.html

ulriclyons
Reply to  Pamela Gray
October 10, 2015 11:08 am

In the replies to the third comment here, you can read my forecast for NAO/AO variability through to next Spring:
http://blog.metoffice.gov.uk/2015/04/13/more-warm-weather-this-week-but-whats-in-store-for-the-summer/#comments

October 11, 2015 4:16 pm

Hi all,
I haven’t time to respond in detail to all of the comments on our paper which Andy reviewed in this post. But, below are some comments on the debate over the Total Solar Irradiance (TSI) reconstructions which most of the discussion seems to be currently focusing on. For anyone who hasn’t had the chance to read the paper yet, I would recommend reading Section 2 as most of the following is discussed in more detail there.
Several people have already noted with some dismay the fact that, even within the satellite era, the various estimates of mean TSI at 1 Astronomical Unit (AU) vary by up to 14 W/m2:
http://s10.postimg.org/extg5whop/Fig_02_satellite_era_raw.jpg
So, it is probably worth noting that even these estimates involve a certain amount of adjustment of the actual raw measurements. When researchers are looking at quantifying solar variability, it is important not to conflate the variations in the irradiance leaving the Sun (“solar variability”) and the variations in the irradiance reaching the Earth due to the non-circular orbit of the Earth around the Sun. During the Earth’s annual orbit of the Sun, the Earth-Sun distance gradually increases and decreases in a periodic manner (since the Earth’s orbit is elliptical). The TSI reaching the Earth (and hence our TSI satellites) will increase and decrease throughout the year accordingly. Therefore, a lot of the variation in the raw TSI measurements throughout the year doesn’t reflect actual solar variability, but is a consequence of our elliptical orbit:
http://s17.postimg.org/p28oh0jfz/Fig_01_daily_TOA.jpg
Mean daily variability of TSI at the top of the Earth’s atmosphere over the annual cycle. Averaged from the SORCE satellite mission (2003-2015).
For this reason, when the satellite groups report the “1AU” data, all of the measurements have been adjusted from what was actually measured by the satellites to what they would have measured if they had remained at a fixed distance from the Sun, i.e., 1 Astronomical Unit (AU) – the average distance of the Earth from the Sun. However, if we are studying the effects of solar variability on the Earth’s climate then it is probably the unadjusted measurements which are more relevant.
If you compare the two figures above you’ll probably notice that most of the “trends” in the first figure would be swamped by the seasonal 88 W/m2 cycle. Fortunately, over timescales less than a few thousand years, the Earth’s orbit is very constant and predictable (although people interested in longer timescales will probably note the significance of the “Milankovitch cycles”). More importantly, if we are only looking at solar variability from year-to-year, much of the seasonal variability should average out over the annual cycle. This is why the various satellite groups correct all their data to 1AU and many of the TSI reconstructions are discussed in terms of annual trends. But, if any of you are wondering why most of the TSI reconstructions focus on the TSI at 1AU distance and mostly ignore the actual TSI reaching the Earth, then I’d agree this is a good question…
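For the curious, here is a minimal sketch of the 1 AU adjustment just described, using the inverse-square law and a standard first-order approximation of the Earth’s elliptical orbit (eccentricity ~0.0167, perihelion near January 3). This is an illustration, not the satellite groups’ actual processing:

import math

def earth_sun_distance_au(day_of_year):
    """Approximate Earth-Sun distance in AU (first-order ellipse)."""
    return 1.0 - 0.0167 * math.cos(2.0 * math.pi * (day_of_year - 3) / 365.25)

def tsi_at_earth(tsi_1au, day_of_year):
    """Convert a reported 1 AU value back to the TSI actually reaching
    Earth on a given day, via the inverse-square law."""
    return tsi_1au / earth_sun_distance_au(day_of_year) ** 2

print(tsi_at_earth(1361.0, 3))     # perihelion (early January): ~1407 W/m2
print(tsi_at_earth(1361.0, 185))   # aphelion (early July): ~1317 W/m2

The ~90 W/m2 swing between those two numbers is the seasonal cycle referred to above, which dwarfs the satellite-to-satellite trend differences.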
Still, for now, let’s wave our hands and mutter something like, “we’re only interested in the annually averaged trends…”, and get onto the next problem: how do we stitch together the various measurements from each of the individual TSI satellites to construct a reasonably accurate estimate of TSI trends over the entire satellite era (1978-present)?
As we discuss in the paper, this is a very tricky problem and the various attempts which have been made to do so have each involved a number of subjective decisions and assumptions. Ideally, we might hope that the TSI community would be open and frank about the uncertainties associated with these attempts and warn the users of the estimates to treat them cautiously. However, instead, the debate over the various rival composites has become highly emotive and opinionated.
In particular, the rivalry seems to be particularly bitter between the PMOD group (who claim that TSI has generally decreased over the entire satellite era) and the ACRIM group (who claim that TSI generally increased from 1980-2000, but has since been decreasing). There is also a third composite, RMIB, which suggests that TSI has been fairly constant over the entire period (except for the ~11 year cycle).
The PMOD composite has an obvious appeal to those arguing that recent global warming is due to CO2 and not solar variability, while the ACRIM composite would obviously appeal to those arguing for a strong solar influence on recent temperature trends – especially those arguing that there has been a “hiatus” in warming since ~2000. But, nature doesn’t care what we think it should be doing! So, when we are assessing which (if any!) of the composites are most reliable, we should do our best to avoid letting our own prior expectations bias our assessment. We should also recognise the possibility that (on this highly politicised topic), the groups might also have their own prior expectations, and confirmation bias may have influenced their subjective decisions and assumptions.
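To see why the stitching is so contentious, consider the simplest conceivable approach: shift each successive satellite record so that its mean matches its predecessor’s over their overlap. A minimal sketch with made-up numbers; the real composites differ precisely in how they go beyond this naive offset correction (degradation trends, gap handling, which record to trust):

import numpy as np

def splice(series_a, series_b, overlap):
    """Naively splice record B onto record A by matching their means
    over `overlap` shared samples; this ignores instrument drift and
    gaps, which is where PMOD, ACRIM and RMIB actually diverge."""
    offset = np.mean(series_a[-overlap:]) - np.mean(series_b[:overlap])
    return np.concatenate([series_a, series_b[overlap:] + offset])

# Two hypothetical records disagreeing by ~5 W/m2 in absolute level:
a = 1366.0 + 0.3 * np.random.randn(500)
b = 1361.0 + 0.3 * np.random.randn(500)
composite = splice(a, b, overlap=100)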
Here’s some interesting quotes from some of the researchers involved (references given in our paper):

The fact that some people could use [the upward TSI trend of the ACRIM composite] as an excuse to do nothing about greenhouse gas emissions is one reason we felt we needed to look at the data ourselves. – Judith Lean (of the PMOD group), August 2003

It would be just as wrong to take this one result and use it as a justification for doing nothing as it is wrong to force costly and difficult changes for greenhouse gas reductions per the Kyoto Accords, whose justification using the Intergovernmental Panel on Climate Change reports was more political science than real science. – Richard Willson (of the ACRIM group), August 2003

Here’s a perspective from another researcher who has recently joined the debate:

A conclusive TSI time series is not only desirable from the perspective of the scientific community, but also when considering the rising interest of the public in questions related to climate change issues, thus preventing climate sceptics from taking advantage of these discrepancies within the TSI community by, e.g., putting forth a presumed solar effect as an excuse for inaction on anthropogenic warming. – Pia Zacharias, 2014

We would recommend treating all three of the current TSI composites (PMOD, ACRIM and RMIB) cautiously.

Before the satellite era, we have to rely on indirect proxies for TSI. Below are some of the main solar proxies we found.
Solar proxies with data pre-1800:
http://s16.postimg.org/nmifoqn6d/fig_4_long_solar_proxies.jpg
Solar proxies which don’t have data before the 19th century:
http://s3.postimg.org/t3u5fe2f7/fig_05_short_solar_proxies.jpg
By far the most popular of the above proxies are the “Sunspot Numbers” (SSN) and the cosmogenic nuclides. The cosmogenic nuclides are very indirect proxies of solar variability – as solar activity increases, so does the strength of the solar wind, reducing the amount of cosmic rays reaching the lower atmosphere, and hence altering the rate of cosmogenic nuclide formation. However, they’re basically our main proxy for solar activity before the 17th century.
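Since the chain of reasoning is inverse (more solar activity, stronger solar wind, fewer cosmic rays, fewer nuclides), reconstructions effectively invert and rescale the nuclide record. A minimal sketch of that standardise-and-invert step, on made-up numbers:

import numpy as np

def activity_index_from_nuclides(nuclide_record):
    """Standardise a 10Be/14C-style record and flip its sign, since
    high solar activity suppresses cosmogenic nuclide production."""
    x = np.asarray(nuclide_record, dtype=float)
    return -(x - x.mean()) / x.std()

# Made-up nuclide concentrations; low values imply high inferred activity:
print(activity_index_from_nuclides([1.2, 1.0, 0.8, 1.1, 0.7]))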
The popularity of the Sunspot Numbers is understandable. Aside from the cosmogenic nuclides, they’re basically the only solar proxy with (almost!) continuous records back to 1700 (or earlier). Also, unlike the cosmogenic nuclides, they are direct measurements of an aspect of solar variability, i.e., sunspots.
Moreover, when we compare the sunspot cycles to the TSI measurements during the satellite era, they seem to be closely related:
http://s22.postimg.org/aetxu4gpt/fig_07_SSN_vs_satellite_TSI_with_SCs.jpg
If the PMOD composite is reliable, then the relationship between SSN and TSI is very close, i.e., almost an exact match for Solar Cycles 21 and 22, and a reasonable match for Solar Cycles 23 and 24. On the other hand, if the ACRIM composite is reliable, then the relationship between SSN and TSI is good, but not perfect.
So, many of the groups which rely on the PMOD composite (or one of the semi-empirical TSI models like SATIRE which imply similar trends to PMOD) have basically assumed that the changes in TSI during the pre-satellite era are almost exactly proportional to SSN.
Indeed, although the Wang et al., 2005 reconstruction initially sounds very complex and technical, you can see from below that it is basically the same as a rescaled version of Hoyt & Schatten’s SSN dataset:
http://s27.postimg.org/8fe2dlucz/Fig_09_evolution_of_Lean_et_al.jpg
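The “rescaling” being described amounts to an ordinary least-squares fit of TSI against SSN over the satellite era, extrapolated back in time. A minimal sketch; the arrays below are synthetic placeholders for whichever SSN dataset and TSI composite one prefers:

import numpy as np

# Placeholder annual means for the calibration (satellite) era:
ssn = np.array([15., 50., 110., 150., 140., 90., 40., 20., 10., 35., 95., 145.])
tsi = 1360.5 + 0.007 * ssn + 0.05 * np.random.randn(ssn.size)  # synthetic "composite"

# Fit TSI = a + b * SSN by least squares over the calibration period...
b, a = np.polyfit(ssn, tsi, 1)

# ...then "reconstruct" TSI for any earlier period from its SSN alone:
def tsi_from_ssn(ssn_historical):
    return a + b * np.asarray(ssn_historical)

print(tsi_from_ssn([0.0, 100.0, 200.0]))  # e.g. a Maunder-Minimum year maps to ~a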
As far as I remember (?) from previous WUWT threads, Leif Svalgaard reckons our best estimate of TSI is a simple rescaling of SSN… provided we use his version of the SSN, i.e., the new Clette et al., 2014 SSN dataset. Is that correct, Leif?

As a digression, it might be helpful to briefly mention something about some of the researchers we’ve been talking about above.
Doug Hoyt was the Principal Investigator in charge of the first of the TSI satellites, i.e., NIMBUS7/ERB. In 1993, after NIMBUS7 had finished, he and Ken Schatten (who Leif mentioned earlier) used this data to calibrate their Hoyt & Schatten TSI reconstruction (back to 1700), which we have been discussing.
Like Leif, they have also been concerned with some potential problems with the conventional SSN dataset, i.e., the Wolf numbers, a.k.a., Zurich numbers, a.k.a., International SSN. A few years later, Hoyt & Schatten also developed a new SSN dataset – Hoyt & Schatten, 1998, a.k.a., the “Group Sunspot Number” (GSN).
The Hoyt & Schatten GSN dataset has been the main SSN dataset used by the TSI community since then, and most of the TSI reconstructions have relied on this for their SSN.
Recently, Leif and several colleagues have tracked down some further information on some of the sunspot observers, which they have used to develop a third SSN dataset (Clette et al., 2014). In the figure above (labelled “Solar proxies with data pre-1800:”) we compare and contrast all three of the SSN datasets, i.e., plots (a)-(d). In my opinion, all three datasets are broadly similar (particularly for the post-1800 period we were interested in), but there are definitely some noticeable differences between them in the relative magnitudes of different peaks, e.g., the Clette et al., 2014 dataset has slightly reduced sunspot numbers for the late 20th century.
Dick Willson (quoted above) was and still is the Principal Investigator in charge of the ACRIM satellites (1980-present), and is also the main developer of the ACRIM TSI composite. Although the IPCC seem to hate the ACRIM TSI composite and to prefer the PMOD composite instead, all three of the TSI composites are heavily based on the underlying ACRIM satellite data. As an aside, it seems kind of ironic to me that PMOD (and their supporters) are so dismissive of the ACRIM group’s composite yet are happy to use the ACRIM group’s satellite data for their analysis.
Nicola Scafetta joined the ACRIM group a few years back, and in Scafetta & Willson, 2014 (see their appendix), they updated the Hoyt & Schatten, 1993 TSI reconstruction using the ACRIM composite instead of Hoyt’s NIMBUS7. This updated TSI reconstruction is the one we consider in Section 5 of the paper.
Judith Lean (also quoted above) together with Claus Froehlich developed the PMOD TSI composite. She also was one of the main developers of the Wang et al., 2005 TSI reconstruction which the CMIP5 climate modellers were recommended to use.

By the way, Leif mentioned above:
lsvalgaard October 9, 2015 at 6:10am and 9:19 am

Not even my old colleague Ken Schatten believes their old reconstruction is valid.

Ken does not go around saying things like that. In discussions with me, he has said the the H&S TSI was a mistake. You will have to take my words for that. But, perhaps the fact that Ken is a coauthor of http://www.leif.org/research/Reconstruction-of-Group-Number-1610-2015.pdf lends some weight to his lack of clinging to old mistakes.

We were in touch with Doug Hoyt (and also, separately, with Dick Willson and Nicola Scafetta) before and after finishing the paper, and they still seem to feel it was a useful reconstruction – although, if you read their papers (and also Hoyt & Schatten’s 1997 book), they were (and still are) quite upfront about the uncertainties and limitations of the available data and challenges in deriving a reliable TSI reconstruction with the limited available data.
But, I haven’t contacted Ken Schatten, so I don’t know how he currently feels about the Hoyt & Schatten, 1993 reconstruction. Leif, were there particular aspects of the reconstruction that he now regards as problematic, or is it the whole thing? After all, unlike most of the TSI reconstructions (which are primarily derived from SSN and/or cosmogenic nuclides), they used multiple solar proxies.

Getting back to the solar proxies, if we rely on the PMOD composite (or the SATIRE model), then on the basis of Solar Cycles 21, 22 and 24 (and to a lesser extent, 23) it is tempting to assume sunspot numbers are an almost exact proxy for TSI. In that case, simply calibrating SSN to the TSI measurements during the satellite era would give us a reasonable TSI reconstruction for the entire period from 1700 (or 1610 if you use Hoyt & Schatten’s GSN).
If so, then the main priority in assessing the TSI reconstructions (post-1700) would probably be in improving the reliability of the SSN dataset. Leif has argued on other threads that his new dataset (Clette et al., 2014) is the best of the three, and that this should be the new basis for TSI reconstructions.
A corollary of that assumption would be that TSI reconstructions which substantially differ from a simple rescaling of SSN are “unreliable”. This would apply to the updated Hoyt & Schatten reconstruction, since – unlike most of the TSI reconstructions – they used multiple solar proxies and not just SSN. Unlike the heavily SSN-based reconstructions, the Hoyt & Schatten, 1993 reconstruction implies the mid-20th century solar peak occurred in the 1940s rather than the late 1950s. It also implies a more pronounced decline in solar activity from the 1950s-1970s than the SSN datasets. Finally, Scafetta & Willson’s 2014 update to it used their ACRIM composite for the satellite era, instead of the PMOD composite.
But, is SSN actually an exact proxy for TSI? While several groups are quite adamant it is, and while there is a lot of literature supporting this assumption, we also found a lot of literature which suggests problems with the assumption – for a detailed discussion, see Section 2.2.
One problem is that sunspots are actually believed to decrease the TSI, since they are “dark spots” on the sun. Yet, we saw from the above figure that when sunspot numbers increase, TSI increases! The standard explanation is that when sunspot activity increases, this coincides with an increase in the number and area of faculae (“bright spots”) which in turn leads to an increase in TSI. So, when sunspot activity increases, the extra sunspots decrease TSI, but the extra faculae increase TSI. During the satellite era, the facular increase seems to have outweighed the sunspot decrease – leading to the (initially counterintuitive) result of increases in SSN being correlated to increases in TSI.
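A toy model of those two competing terms may help (the coefficients are arbitrary illustrative numbers, chosen only so that the facular term outweighs the spot term, as observed during the satellite era):

def tsi_anomaly(spot_area, facular_area, k_spot=1.0, k_fac=0.4):
    """Net TSI change from 'dark' sunspots and 'bright' faculae, in
    arbitrary units: faculae add irradiance, spots subtract it."""
    return k_fac * facular_area - k_spot * spot_area

# If faculae scale ~3x with spot area, rising activity raises TSI:
print(tsi_anomaly(spot_area=100, facular_area=300))   # +20: net brightening
# If the faculae/spot ratio were lower in some earlier era, the sign flips:
print(tsi_anomaly(spot_area=100, facular_area=200))   # -20: net dimming

The second case is hypothetical; whether it ever occurred depends on the ratio assumption discussed next.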
The theory that SSN is an accurate proxy for TSI therefore assumes that the ratio between sunspot and facular areas has remained relatively constant over the entire sunspot record. Unfortunately, although the existence of faculae has been known since near the beginning of sunspot observations, the “bright spots” were harder for the earlier observers to record than the “dark spots”, and so we don’t have many systematic facular measurements until quite recently (e.g., the San Fernando Observatory’s measurements which began in 1988).
The Royal Greenwich Observatory (RGO) did maintain a continuous record of faculae from 1874 to 1976 – see plot (b) in the post-1800 solar proxies figure above. Interestingly this suggests that the mid-20th century solar peak occurred in the 1940s (as Hoyt & Schatten, 1993 suggested) rather than the late-1950s (as SSN suggests), and that solar activity decreased between 1940-1970 (again agreeing with Hoyt & Schatten, 1993, but not SSN). e.g., see Peter Foukal’s recent pre-print on arXiv.
This would indicate that the ratio between faculae and sunspots did not remain constant over the sunspot record, meaning that the key assumption behind SSN as a TSI proxy wouldn’t hold. But, the RGO observations were quite limited as they were based on white light measurements. So, some researchers have argued that they are unreliable. If so, then maybe the facular-sunspot ratio did remain constant after all… But, is that just wishful thinking? We don’t know, because if we discard the RGO observations then we’re basically back to relying on observations during the satellite era.
One possible argument in favour of the assumption that the faculae-sunspot ratio remains fairly constant is that there is a good match between F10.7 cm radio flux measurements (F10.7) and sunspot numbers over Solar Cycles 19-22. F10.7 is believed to originate from the coronal plasma in the upper atmosphere of the Sun – a different part of the Sun than the region where sunspots and faculae occur. So, if F10.7 is highly correlated to sunspot activity then this suggests a strong unity between multiple different solar activity components. This would be consistent with SSN being a strong proxy for TSI.
On the other hand, while F10.7 seems to have been closely correlated to sunspot numbers for Solar Cycles 19-22, the relationship was not as good for Solar Cycle 23. Having said that, Leif has done some interesting research on this apparent breakdown, and along with Livingston et al., he has come up with a plausible explanation as to why the relationship broke down for Solar Cycle 23 (references in Section 2.2.1). But, unfortunately, the F10.7 records only began in 1947, so for now we are limited to 5 complete solar cycles for our F10.7:sunspots comparisons…
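The comparison described here is essentially a per-cycle correlation between the two series. A minimal sketch on synthetic data, with approximate cycle start years; with the real F10.7 and SSN records, a breakdown in Solar Cycle 23 would show up as a lower coefficient for that interval:

import numpy as np

def per_cycle_correlation(years, ssn, f107, cycle_bounds):
    """Pearson correlation of SSN vs F10.7 within each solar cycle."""
    years, ssn, f107 = map(np.asarray, (years, ssn, f107))
    results = {}
    for label, (start, end) in cycle_bounds.items():
        mask = (years >= start) & (years < end)
        results[label] = np.corrcoef(ssn[mask], f107[mask])[0, 1]
    return results

# Synthetic stand-ins for the real annual-mean records:
years = np.arange(1976, 2008)
ssn = 80.0 + 70.0 * np.sin(2.0 * np.pi * (years - 1979) / 10.7)
f107 = 60.0 + 0.6 * ssn + 5.0 * np.random.randn(years.size)
cycle_bounds = {"SC21": (1976, 1986), "SC22": (1986, 1996), "SC23": (1996, 2008)}
print(per_cycle_correlation(years, ssn, f107, cycle_bounds))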
Hopefully, this will give some of you a flavour of the complexities of the debate over TSI reconstruction, but if you haven’t already, I’d recommend reading Section 2 of our paper where we provide a more detailed discussion and review.

We think the updated Hoyt & Schatten TSI reconstruction involves some interesting approaches which the TSI reconstructions used for the latest IPCC reports didn’t.
e.g., it incorporates several other solar proxies as well as sunspot numbers and it doesn’t use the PMOD composite used by the IPCC-favoured reconstructions, but uses the ACRIM composite. Therefore, we think it is worth considering as a plausible reconstruction.
In Section 4.5, we compared our new mostly-rural Northern Hemisphere temperature reconstruction to the CMIP5 hindcasts, but the hindcasts didn’t seem to work whether using “natural and/or anthropogenic forcings”:
http://s8.postimg.org/iolfjopsl/Fig_25_comparison_with_CMIP5_hindcasts.jpg
Since the CMIP5 hindcasts didn’t consider the updated Hoyt & Schatten reconstruction, we decided to test this as well in a separate section. We found quite a good match that we felt was worth noting and discussing in more detail. This led to the analysis in Section 5.
But, as I think should be clear from the paper (and the above discussion), although Willie has used this reconstruction in several of his papers in the past, our literature review (and analysis of the data) for this new collaborative paper has made us very wary of all of the current TSI reconstructions…
What I do think it is fair to conclude from our analysis is,
1. If the TSI reconstructions used by the CMIP5 hindcasts are accurate, then the present climate models are unable to adequately explain our new mostly-rural Northern Hemisphere temperature reconstruction – regardless of whether they use “natural” (solar & volcanic) and/or “anthropogenic” (GHG and man-made aerosols) forcings. This suggests that the current climate models are leaving out some key factors which have played a major role in temperature trends since at least 1881. It also suggests that the IPCC’s conclusion that global temperature trends since the 1950s are “extremely likely” to be mostly anthropogenic should be revisited…
2. On the other hand, if the updated Hoyt & Schatten TSI reconstruction is accurate, then this indicates that Northern Hemisphere rural temperature trends since at least 1881 have been primarily driven by changes in solar activity.
3. If neither the Hoyt & Schatten nor the TSI reconstructions used by CMIP5 are accurate, then more research should be prioritised into developing new (more accurate) TSI reconstructions. Until then, the exact role solar variability has played on temperature trends since the 19th century will probably have to remain an open question.

In summary,

If a man will begin with certainties, he shall end in doubts; but if he will be content to begin with doubts he shall end in certainties. – Francis Bacon, The Advancement of Learning (1605)

Reviewing the extensive literature on this topic, it is striking how certain many researchers seem to feel that their TSI reconstruction and/or analysis is the best. We are of the opinion that much of this “certainty” is currently unjustified.

Pamela Gray
Reply to  Ronan Connolly
October 11, 2015 8:46 pm

But wasn’t the reconstruction a correction to previous weighting differences that have now been reconciled so that data is consistently numbered? In addition, other currently measured cyclic parameters consistently follow SSN and these parameters can be reconstructed from ice core proxies, which then allows a reasonable person to surmise solar activity during the Maunder Minimum. That this was done openly and with a rather large team of scientists, seems to me to be calling out to all scientists to re-address their own work in light of this correction. I would conclude that those who do NOT do this are unjustified. That would mean your work as well.

Reply to  Ronan Connolly
October 11, 2015 9:45 pm

If the TSI reconstruction supports somebody’s particular pet theory, the reconstruction is clearly correct…
Here are some comments by Ken Schatten from our first SSN workshop:
http://www.leif.org/research/SSN/Schatten.pdf
We do have a good proxy record of F10.7 back to the 18th century:
http://www.leif.org/research/Reconstruction-of-Solar-EUV-Flux-1740-2015.pdf
This record does not support the H&S TSI reconstruction.
The way out of this problem might be that H&S TSI is correct, and ALL other proxies are hopelessly wrong.

Reply to  lsvalgaard
October 15, 2015 2:29 am

Leif,
Thanks for these links!
Those comments by Ken Schatten on the various sunspot number datasets are good to know. My own views on the three different SSN datasets are pretty similar (although I realise his comments referred to a preliminary version of the Clette et al., 2014 dataset), but it is interesting to see his suggestion that the group numbers (G) may be capturing a slightly different aspect of solar variability than the sunspot numbers (S):

…Nevertheless, we do NOT KNOW, how different aspects of solar dynamo (e.g. B, Bphi, etc.) are affected by relative proportion of S and G!
4) Since our knowledge of the solar dynamo is limited, we cannot necessarily assume that G is a better indicator of solar magnetic flux, than by using BOTH S and G!

This is in keeping with our discussion of the various solar proxies in Section 2.2, where we discuss the possibility that each of the proxies may be capturing different aspects of solar variability. It also seems to agree with your observation in your second link that the diurnal variation of the geomagnetic field seems to be more closely related to G than S.
I think Ken Schatten’s closing line in that presentation echoes a main theme of our literature review:

The question remains, what is the best way to use ALL the information available about the distribution of sunspot groups and the number of sunspots on the whole disk, or sunspot areas, etc?

By the way, that second link is an interesting paper and an intriguing solar proxy. I see you’ve uploaded it to arXiv in June 2015. Are you also planning on submitting it for peer review? It seems like a nice link between several of the solar proxies, and I’d say it would be of great interest to the solar community.

Reply to  Ronan Connolly
October 15, 2015 6:53 am

By the way, that second link is an interesting paper and an intriguing solar proxy. I see you’ve uploaded it to arXiv in June 2015. Are you also planning on submitting it for peer review?
Has been submitted. Is under review.

Reply to  lsvalgaard
October 15, 2015 9:25 am

lsvalgaard October 15, 2015 at 6:53 am

Has been submitted. Is under review.

Great to hear! Hope the review process goes well!

richard verney
Reply to  Ronan Connolly
October 12, 2015 1:48 am

We know that climate models are poor on a regional basis, but do the climate models do each hemisphere separately, or only the globe as a whole?
The southern hemisphere may well act as a bit of a damper (due to the larger oceanic content), and may be more influenced by oceanic cycles particularly that of the Pacific.
If some climate models do the Northern Hemisphere separately, a comparison with the output of those models would be useful.
Your comparison seems to suggest that the models cannot, in particular, explain the 1920s-to-1940s warming, nor the 1945-to-1965 cooling. Until there is a satisfactory explanation of these events, the climate models should be viewed with extreme caution, as should the assertion that late 20th century warming is not due to natural variation.

Reply to  richard verney
October 15, 2015 2:29 am

Richard,
This is an important point, and it is something we are looking to investigate in follow-on research. There are three separate, but complementary, approaches we have been considering:
1.) All of the CMIP5 output data can be downloaded from the CMIP5 website once you have created an account: http://cmip-pcmdi.llnl.gov/cmip5/data_getting_started.html
Most of this data is in NetCDF format (.nc), so you would probably need to be (or to become!) familiar with handling data in this format. But, in principle, with a bit of work, it should be fairly straightforward to extract the hindcast trends for not just Northern Hemisphere land, but also each of the four regions we analysed in our paper (a rough sketch of such an extraction, in Python, is given after this list). This would allow a more direct comparison of the CMIP5 hindcasts with our new mostly-rural hemispheric (and regional) temperature reconstructions.
However, one of our main conclusions in this paper is that the CMIP5 hindcasts only considered a small selection of the available datasets – even though the “detection and attribution” studies which were based on these hindcasts were the primary justification for the widely-cited IPCC claim that global temperature trends since the 1950s are mostly anthropogenic in origin. So, perhaps a more productive approach would be to carry out fresh hindcasts in a more systematic and comprehensive manner.
2.) Another approach would be for one (or more) of the CMIP5 groups to carry out new “detection and attribution” studies, taking into account the discussions in our paper. We have been contacting some of the CMIP5 groups to see if this is something they might be interested in.
3.) Some of the CMIP5 groups have provided publicly downloadable versions of their climate models which can be run at home on a Linux box (for instance).
e.g., NASA GISS: http://www.giss.nasa.gov/tools/modelE/
e.g., NCAR’s “Community Earth System Model” (CESM): https://www2.cesm.ucar.edu/models/current
Obviously you couldn’t run the models at as high a resolution as the groups can on their in-house dedicated hi-spec servers. But if you have (or can get) a cheap Linux box (or several) that you don’t mind setting aside to run simulations for a few months, and you are less ambitious about how computationally expensive your simulations need to be, you should be able to get results that could be useful. To give an idea of how much computer time is required: the CMIP5 groups typically allowed ~2-3 CPU-weeks for each run.
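
For anyone who wants to try approach 1.), here is a rough sketch in Python (using the numpy and xarray libraries) of the kind of extraction involved. The filename is a hypothetical placeholder following the CMIP5 naming convention, and masking to land-only grid cells would additionally require the model’s land-fraction (“sftlf”) file, which I have omitted to keep the sketch short:

# Sketch: Northern Hemisphere average from a CMIP5 near-surface air
# temperature ("tas") file. The filename is a hypothetical placeholder.
import numpy as np
import xarray as xr

ds = xr.open_dataset("tas_Amon_SomeModel_historical_r1i1p1_185001-200512.nc")
tas = ds["tas"]  # dimensions: (time, lat, lon); units: K

# Keep the Northern Hemisphere only (assumes latitude runs -90 to +90).
nh = tas.sel(lat=slice(0, 90))

# Area-weight by the cosine of latitude, since grid cells shrink poleward.
weights = np.cos(np.deg2rad(nh["lat"]))
nh_mean = nh.weighted(weights).mean(dim=("lat", "lon"))

# Annual means, then anomalies relative to a 1961-1990 baseline.
annual = nh_mean.groupby("time.year").mean("time")
anomaly = annual - annual.sel(year=slice(1961, 1990)).mean()
print(anomaly.to_series().tail())
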
Having said all of that, I should point out that, if you’ve read our blog and/or our “Physics of the Earth’s atmosphere” papers, you will probably realise that Michael and I believe there are serious problems with how the current climate models treat energy transmission throughout the atmosphere. As a result, in our opinion, the current climate models dramatically overestimate the influence that infrared-active gases (i.e., greenhouse gases) have on atmospheric temperature profiles.
So, we don’t think that the current models are very useful in understanding past (and future!) temperature trends. We are of the opinion that focusing more on empirical observations and less on climate model outputs is probably a more effective approach to understanding the climate system.
But, Michael & I are both fans of Konrad Lorenz’s “morning exercise” regime for research scientists ;),

It is a good morning exercise for a research scientist to discard a pet hypothesis every day before breakfast: it keeps him young – (Konrad Lorenz, 1903-1989)

and we are also conscious of John Stuart Mill’s observations in On Liberty,

He who knows only his own side of the case knows little of that. His reasons may be good, and no one may have been able to refute them. But if he is equally unable to refute the reasons on the opposite side, if he does not so much as know what they are, he has no ground for preferring either opinion… Nor is it enough that he should hear the opinions of adversaries from his own teachers, presented as they state them, and accompanied by what they offer as refutations. He must be able to hear them from persons who actually believe them…he must know them in their most plausible and persuasive form. – J. S. Mill, On Liberty.

http://s17.postimg.org/txvs1hvdr/opposing_perspectives.png
So, when we are assessing the arguments of other scientific studies, we first try to do the best we can to leave our own opinions by the door… With this in mind, among the climate science community, the current Global Climate Models (GCMs) are widely believed to be the best tools we have for understanding past and future climate trends. So, while I may disagree with that, I can respect that others think that, and therefore, analysing the GCM results is of relevance…

Reply to  richard verney
October 15, 2015 2:33 am

As a follow-on to my comment to Richard above (currently in moderation due to the multiple links???), some of you may be interested in seeing exactly how the datasets used for “natural and anthropogenic forcings” in the current Global Climate Models (GCMs) translate into their projections and hindcasts for global temperature trends. This is particularly relevant for understanding the significance of the choice of TSI datasets.
Below are the different “forcings” used by NASA GISS for the hindcasts they submitted to the previous IPCC reports, i.e., the 2007 AR4 reports (CMIP3 instead of CMIP5):
http://s24.postimg.org/ix0azkuf9/Rad_F.gif
The TSI dataset they used (“Solar Irradiance”) was Lean, 2000 (which is an earlier version of Wang et al., 2005 and a later version of Lean et al., 1995).
Below is the “Net Forcings” plot:
http://s9.postimg.org/d7xzp1inz/Net_F.gif
And below are the results of their hindcasts (which I have adapted from Figure 6 of their Hansen et al., 2007 paper):
http://s18.postimg.org/442a9ogeh/Hansen2007_Fig6b_adapted.jpg
The hindcast graph only runs up to 2003, but the two “forcings” graphs have been updated to 2012 – downloaded from here: http://data.giss.nasa.gov/modelforce/
But, notice that the shape of the global temperature trends (“surface temperature anomaly”) is very closely related to the shape of the “Net Forcings” plot. In other words, despite the complexity and heavy computational expense of the current models, the “global temperature trends” that they produce are almost directly proportional to the sum of the “external forcings” datasets you plug into them!
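
To see why this behaviour is almost inevitable for models in which internal variability is weak, here is a toy zero-dimensional energy-balance sketch in Python. Every number in it is invented for illustration – it is emphatically not the GISS model:

# Toy model: the simulated "temperature" is essentially a smoothed,
# rescaled copy of whatever net forcing series you feed in.
import numpy as np

lam = 0.8   # climate sensitivity parameter, K per (W/m^2) -- assumed value
C = 8.0     # effective heat capacity, W yr m^-2 K^-1 -- assumed value
years = np.arange(1880, 2013)

# Invented "net forcing": a slow ramp plus episodic volcanic dips that
# decay over a couple of years.
F = 0.02 * (years - 1880.0)
for v_year, depth in [(1883, -3.0), (1902, -1.5), (1963, -1.5), (1991, -2.5)]:
    F = F + depth * np.exp(-np.maximum(years - v_year, 0) / 2.0) * (years >= v_year)

# Integrate C * dT/dt = F - T/lam with forward Euler steps (dt = 1 year).
T = np.zeros_like(F)
for i in range(1, len(years)):
    T[i] = T[i - 1] + (F[i - 1] - T[i - 1] / lam) / C

# The response tracks the forcing very closely:
print("corr(F, T) =", round(float(np.corrcoef(F, T)[0, 1]), 3))
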
This near-proportionality is in keeping with Marcia Wyatt’s recent guest post at Judith Curry’s blog (http://judithcurry.com/2015/10/11/a-perspective-on-uncertainty-and-climate-science/), where she criticised the current models and the IPCC for assuming that global temperature trends are dominated by external forcing and that internal variability is basically negligible.
At any rate, this means that effectively the vast majority of the “natural variability” modelled by the current GCMs consists of (a) their “stratospheric volcanic cooling” dataset and (b) their TSI dataset. For the CMIP5 hindcasts used for the latest IPCC reports, while several different datasets were suggested for volcanic cooling, it was strongly recommended that all of the groups consider only Wang et al., 2005 for TSI.
Since the volcanic forcings lead to “cooling”, and the long-term trends arising from “internal variability” are almost negligible in most GCMs, basically the only mechanism for “natural warming” in the CMIP5 hindcasts was the slight increase in TSI from the late 19th century up to the late-1950s implied by the Wang et al., 2005 TSI reconstruction…

Editor
Reply to  Ronan Connolly
October 12, 2015 5:26 am

Excellent Ronan! +1

Gary Pearse
October 12, 2015 9:03 am

Assuming delta-TSI is the dominant ‘forcing’, it would be helpful to have a function for the increase and decrease in NH albedo from fall through spring due to changes in snow cover, to augment the increase and decrease caused by delta-TSI. Check delta snow cover vs. delta TSI.

October 15, 2015 2:34 am

There seems to be some confusion between the debate over the various Sunspot Number (“SSN”) datasets (i.e., the Zurich dataset vs. Hoyt & Schatten, 1998 vs. Clette et al., 2014) and the related, but distinct, debate over the various Total Solar Irradiance (TSI) reconstructions.
Perhaps this is partly due to the fact that Hoyt & Schatten were actively involved in both debates:
Hoyt & Schatten, 1998 describes an alternative SSN dataset to the conventional Zurich dataset, based on “group numbers”.
Hoyt & Schatten, 1993 describes a TSI reconstruction that is based on multiple solar proxies. It is correct that one of these solar proxies is a smoothed version of the Zurich SSN, but they used several non-SSN proxies. Interestingly, as we note in Section 2.2.1, one of the non-SSN proxies they use actually seems more closely related to a smoothed version of the Clette et al., 2014 SSN than their smoothed Zurich SSN…
But, another factor seems to be that several of the recent TSI reconstructions are very heavily dominated by the sunspot number datasets. Indeed, Leif (“lsvalgaard”) has argued elsewhere that (as an approximation) we can simply assume an exact linear relationship between TSI and SSN (provided we use his Clette et al., 2014 dataset):
lsvalgaard May 8, 2015 at 4:51 pm

Sunspots and F10.7 are both good proxies for the energy output of the sun. I don’t need to prove that, because that is generally accepted.
Equations are easy: Total energy (actually power W/m2) = 1360.5 + 0.083 * Sunspot number
Relative Change in Temperature = one fourth the relative change of Total energy received
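
For concreteness, here is what the second quoted relation (the quarter-power scaling, which follows from temperature varying as the fourth root of the absorbed solar power) implies numerically. The baseline TSI and temperature are round illustrative values, not measurements:

# dT/T = (1/4) * dTSI/TSI, evaluated for an illustrative TSI change.

def delta_t(d_tsi, tsi=1361.0, t_ref=288.0):
    """Temperature change (K) implied by a TSI change of d_tsi (W/m^2)."""
    return t_ref * 0.25 * d_tsi / tsi

# A ~1 W/m^2 solar-cycle swing directly implies only ~0.05 K:
print("dT = %.3f K" % delta_t(1.0))
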

However, as we discuss in detail in Section 2, while the sunspot cycles definitely seem to describe a major aspect of solar variability, and it is tempting to assume a linear relationship between SSN and TSI (it would make things much simpler), there is considerable evidence that the situation may unfortunately be more complex than that.
Certainly there is a lot of literature and several prominent solar researchers (including Leif) who have argued strongly in favour of the assumption that there is a linear relationship between SSN and TSI. However, we also found a lot of literature highlighting potential problems with that assumption. We tried to summarise the literature and data from all perspectives as objectively as we could, in such a way that the reader would be aware of the pros and cons for both arguments.
With that in mind, it may be helpful to summarise the main components of each of the TSI reconstructions in Figure 8 of our paper:
http://s10.postimg.org/6bulj1gc9/Fig_08_double_column_all_8_TSI.jpg
These are some of the most recent TSI reconstructions we were able to find, and they include all four of the TSI reconstructions used for the CMIP5 hindcasts, i.e., the four on the right-hand side. The reason why some of the CMIP5 groups used reconstructions other than Wang et al., 2005 is that Wang et al., 2005 only goes back to 1610 (since it is basically just a slightly modified version of the group sunspot record). Some of the CMIP5 groups were also contributing to the PMIP3 project, which hindcast temperatures over the last millennium. So, some of these groups used one of the other three, longer reconstructions – either instead of, or to extend, the Wang et al., 2005 reconstruction.
For more details on each of the reconstructions, see Section 2.2.4 of the paper (and the references themselves, of course!). But, in summary:

Four of the reconstructions are primarily based on sunspot number records (specifically, Hoyt & Schatten’s group sunspot dataset): Lean et al., 1995; Wang et al., 2005; Krivova et al., 2007; and Vieira et al., 2011.

Two of the reconstructions are derived from cosmogenic isotope records: Steinhilber et al., 2009 and Bard et al., 2000.

The Shapiro et al., 2011 reconstruction used both sunspot numbers and cosmogenic isotopes: sunspot records to describe the “high frequency variability”, and cosmogenic isotopes to describe the “low frequency variability” (a rough sketch of this kind of combination is given below).

The Hoyt & Schatten, 1993 reconstruction used a composite of multiple solar proxies. Some of these were derived from the sunspot records (ironically, this was the only one of the above that used the original Zurich dataset – not surprising, since they hadn’t yet developed their popular 1998 group sunspot dataset at that stage!). However, they also used several other solar proxies. Their rationale was that each of these proxies could be capturing a different aspect of solar variability, and that if we want to estimate the true TSI trends, it is important to consider the many different aspects of solar variability.
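
Incidentally, this kind of “high frequency from sunspots, low frequency from isotopes” splicing can be sketched in a few lines of Python. Both input series below are synthetic placeholders, and the 22-year smoothing window is an assumed choice – not Shapiro et al.’s exact procedure:

# Combine a synthetic low-frequency (isotope-style) series with the
# high-frequency residual of a synthetic sunspot-based series.
import numpy as np

def running_mean(x, window):
    """Centred running mean via convolution. The edges are biased towards
    zero by the implicit zero-padding -- acceptable for a sketch."""
    return np.convolve(x, np.ones(window) / window, mode="same")

years = np.arange(1700, 2015)
rng = np.random.default_rng(0)

# Synthetic sunspot-based TSI: an 11-year cycle plus noise (W/m^2).
ssn_tsi = (1360.5
           + np.abs(np.sin(2 * np.pi * (years - 1700) / 11.0))
           + 0.05 * rng.standard_normal(years.size))
# Synthetic cosmogenic-isotope TSI: a slow multi-century drift (W/m^2).
cosmo_tsi = 1359.8 + 0.002 * (years - 1700)

window = 22  # roughly two solar cycles (assumed)
low = running_mean(cosmo_tsi, window)            # low-frequency component
high = ssn_tsi - running_mean(ssn_tsi, window)   # high-frequency residual
combined = low + high
print(combined[150:155].round(2))
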
As can be seen from Figures 4 and 5 of our paper, while most of the solar proxies suggest fairly similar trends, there are some subtle, but often non-trivial, differences between each of them.
I posted these already in an earlier comment, but here’s Figure 4 from our paper:
http://s16.postimg.org/nmifoqn6d/fig_4_long_solar_proxies.jpg
And here’s Figure 5:
http://s3.postimg.org/t3u5fe2f7/fig_05_short_solar_proxies.jpg
P.S. Some commenters have wondered how the three different sunspot number datasets would influence the TSI trends since the late 19th century in the various TSI reconstructions. It may be helpful to carefully compare panels (a)-(c), and also panel (d), of Figure 4 above for the period 1880-2014 on which our paper focused. Certainly, there are some important differences between the three datasets, and this is particularly relevant for those TSI reconstructions which relied primarily on sunspot numbers. However, in our opinion, these differences matter more when comparing the early and late periods, and are not as major for the 1880-2014 trends of the TSI reconstructions. Others may disagree…