Gavin Schmidt on solar trends and global warming

I really wish Gavin would put as much effort into fixing the oddities in the GISTEMP dataset as he does into writing coffee table books and trying new models to show the sun has little impact.

This paper gets extra points for using the word “robust”.  – Anthony

[Figure 2 from Benestad and Schmidt (2009)]

Solar trends and global warming (PDF here)

R. E. Benestad

Climate Division, Norwegian Meteorological Institute, Oslo, Norway

G. A. Schmidt

NASA Goddard Institute for Space Studies, New York, New York, USA

We use a suite of global climate model simulations for the 20th century to assess the contribution of solar forcing to the past trends in the global mean temperature. In particular, we examine how robust different published methodologies are at detecting and attributing solar-related climate change in the presence of intrinsic climate variability and multiple forcings.

We demonstrate that naive application of linear analytical methods such as regression gives nonrobust results. We also demonstrate that the methodologies used by Scafetta and West (2005, 2006a, 2006b, 2007, 2008) are not robust to these same factors and that their error bars are significantly larger than reported. Our analysis shows that the most likely contribution from solar forcing to global warming is 7 ± 1% for the 20th century and is negligible for warming since 1980.

Received 17 December 2008; accepted 13 May 2009; published 21 July 2009.

Citation: Benestad, R. E., and G. A. Schmidt (2009), Solar trends and global warming, J. Geophys. Res., 114, D14101, doi:10.1029/2008JD011639.

hat tip to Leif Svalgaard

386 Comments
Ron de Haan
July 23, 2009 6:01 am
Leif Svalgaard
July 23, 2009 6:18 am

tallbloke (01:25:47) :
Why would a careful man like Wolf get it right in 1848 [you had ‘1858’?] but get it wrong at other times? It looks like 1875 was correctly counted too, so why would the intervening low cycle be under counted?
Here you can follow the ‘evolution’ of the Wolf Number:
http://www.leif.org/research/Evolution%20of%20the%20Wolf%20Number.png
As you can see, it has a history of constant tinkering and adjustments. The years 1857, 1861, 1874, 1882 refer to the years when Wolf actually published revised versions of his table.

Leif Svalgaard
July 23, 2009 6:49 am

Bill Illis (05:59:24) :
Leif, so generally we should assume the Sun itself is varying by a very small amount over recent time-scales.
Recent meaning 10,000 years, yes.
But the Milankovitch cycles are still in operation resulting in a changing level of the TSI received by the Earth (at least on a seasonal scale).
Yes, and they are probably responsible for the glaciations. But one must not conflate that with variations of solar activity and the Sun itself.

Gail Combs
July 23, 2009 7:01 am

tallbloke:
Sneak preview:
http://s630.photobucket.com/albums/uu21/stroller-2009/?action=view&current=ssnc-pdo-amo.gif
Great graph but what do the red and blue lines represent? HCUT and ??? The graph is more useful if an expanded explanation is included with the graph.
Thanks

Joel Shore
July 23, 2009 7:10 am

jmc says:

According to the Second Law of Thermodynamics, it is only possible to transfer heat from a cold medium to a hot medium by supplying external work to the system. If the cold upper atmosphere could heat the warmer ground without some input of external work, then the Second Law would be violated.

The Second Law says that the NET flow of heat is from the warmer to the colder body and not the other way around. And indeed, if you work it out, there is more heat flowing from the earth to the upper atmosphere than flowing from the upper atmosphere to the earth. The reason that the earth still ends up warmer than it would be if there were no greenhouse gases (and hence the atmosphere were transparent to IR radiation) is that in that case, all of the radiation emitted by the earth would escape into space with none of it coming back. The role of the IR-absorbing atmosphere is to send some of it back but it still sends back less than it receives.
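A minimal single-slab energy-balance sketch in Python (my own toy numbers, not anything from the paper or this thread) makes the bookkeeping concrete: the absorbing layer always sends back less IR than it receives from the surface, so the net flow is upward, yet the surface still equilibrates warmer than it would with no absorbing layer at all.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S_ABS = 240.0     # solar flux absorbed at the surface, W m^-2 (illustrative)

def surface_temp(epsilon):
    """Equilibrium surface temperature when a single atmospheric slab absorbs a
    fraction `epsilon` of surface IR and re-emits it equally up and down."""
    # Surface balance: sigma*Ts^4 = S_ABS + 0.5*epsilon*sigma*Ts^4
    return (S_ABS / (SIGMA * (1.0 - 0.5 * epsilon))) ** 0.25

for eps in (0.0, 0.8, 1.0):
    ts = surface_temp(eps)
    absorbed_by_atm = eps * SIGMA * ts**4    # IR the slab receives from the surface
    returned_to_sfc = 0.5 * absorbed_by_atm  # what it sends back down: always less
    print(f"eps={eps:.1f}  Ts={ts:6.1f} K  up {absorbed_by_atm:6.1f}  back {returned_to_sfc:6.1f} W/m^2")
```

With eps = 0 the surface sits near 255 K; with eps = 1 it warms to about 303 K even though the slab never returns more than half of what it absorbs.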

tallbloke
July 23, 2009 7:13 am

Leif Svalgaard (06:01:14) :
Alec Rawls (02:10:44) :
We all know that surface temperature variations on the time scale of the solar cycle are dominated by ocean oscillations, so the last thing anyone would expect is that the solar cycle would be clear in temps.
This is what I’m saying: the empirical evidence is not there. One can believe in the mechanism and try to justify why the evidence is hidden in the deep ocean and all that may be correct, but that is different from saying that there are clear correlations with temps.

Fair enough, but as I said to Alec, it's not all hidden in the deep ocean; some of it gets lost in the averaging of temps across the cycle, because El Niño is a deeper ocean response to solar min and to the nearer-surface heating which took place six years earlier at solar max.
Now to prove it… You're good at goading people onwards, I'll give you that. 😉

tallbloke
July 23, 2009 7:23 am

Gail Combs (07:01:08) :
tallbloke:
Sneak preview:
http://s630.photobucket.com/albums/uu21/stroller-2009/?action=view&current=ssnc-pdo-amo.gif
Great graph but what do the red and blue lines represent? HCUT and ??? The graph is more useful if an expanded explanation is included with the graph.
Thanks

I thought no-one would ever ask, thank you.
HadCRUT sea surface temp in red. The blue curve is my calculation from sunspot numbers (as a proxy for TSI) of the accumulation of ocean heat content during times of higher solar activity (most of the 20th century) and its loss during solar minimum and times of low solar activity.
The obvious gaps between the red and blue curves are what I'm working on at the moment, and I believe I just found the answer I was looking for. I got a clue the other day when someone posted a graph of the troposphere using the AMSU raw data, which shows 0.4C swings in global air temperature on a monthly basis.
More revealed soon, back to work.
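For readers wondering what kind of calculation tallbloke is describing, a minimal sketch of a cumulative sunspot-number integration follows. The function name, threshold, and scaling are invented for illustration and are not his actual values; the sunspot series here is purely synthetic.

```python
import numpy as np

def ssn_ocean_heat_proxy(ssn_monthly, threshold=40.0, scale=0.01):
    """Toy ocean-heat proxy: accumulate when the sunspot number (a TSI proxy)
    is above `threshold`, lose heat when it falls below."""
    anomaly = np.asarray(ssn_monthly, dtype=float) - threshold
    return scale * np.cumsum(anomaly)

# Purely synthetic sunspot numbers: three nominal 11-year cycles.
months = np.arange(12 * 33)
ssn = 80.0 + 70.0 * np.sin(2 * np.pi * months / 132.0)
proxy = ssn_ocean_heat_proxy(ssn)
print(proxy[:3], proxy[-3:])
```

The real comparison would of course use observed monthly sunspot numbers against a sea surface temperature series such as the one in the linked graph.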

Joel Shore
July 23, 2009 7:29 am

David says:

1) Why is the climate sensitivity figure static? It was determined by Hansen using figures he derived from the last glacial maximum (@0.75°C). What reason is there to think our climate would act the same as the climate then? To visualize, why would it be X=0.75? There could be many different things that affected climate that looks like XY=0.75. If you are solving a million year old problem, and leave out one thing, you have a Y in your equation, and it must be accounted for.

First of all, it is not a million-year-old problem. The last glacial maximum was only 15,000 or so years ago. So, we have very good ice core data giving us the concentrations of greenhouse gases and also good proxies for the temperature in the ice core and in other places (like ocean sediments). We also know the orbital parameters of the earth back then. And, things that change on a geologic timescale, such as the locations of continents and mountain ranges, are essentially the same as they are now.
The estimate of the climate sensitivity is derived by including all of the effects that we know about (difference in orbital parameters, change in albedo due to the presence of ice sheets, change in greenhouse gas concentrations, and change in aerosol loading in the atmosphere). One can never really say for sure in science that one is not leaving something out…but I don’t know of any serious proposals of what the missing forcing could be (and it would have to be large compared to these other forcings in order to significantly alter the conclusion).
All of our understanding of the feedbacks in the climate system suggests that, while climate sensitivity might change somewhat as the climate changes, it will not change sufficiently rapidly that this estimate from the Last Glacial Maximum to now won't be a good approximation of what would happen as you warm from the current climate. The one thing one could in principle argue would change is that there is less ice to melt now than there was then; however, as Jim Hansen has pointed out, this estimate of the climate sensitivity as 0.75 C / (W/m^2) treats the change in albedo due to the ice sheets as a forcing, not a feedback. And so Hansen argues that our present "experiment" (where any such change would be considered a feedback, not a forcing) should actually have a larger climate sensitivity, at least on timescales long enough for the disintegration of land ice on Greenland and West Antarctica to occur. [Hansen calls the 0.75 C / (W/m^2) value the Charney sensitivity and argues that the full sensitivity including the ice sheet feedbacks could be about double this, although some other scientists seem to be skeptical that it will be that much higher.]
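The arithmetic behind the 0.75 C per W/m^2 figure can be sketched in a few lines. The numbers below are approximate, commonly cited LGM values chosen for illustration; they are not taken from this thread or from the Benestad and Schmidt paper.

```python
# Back-of-envelope Last Glacial Maximum sensitivity estimate (illustrative values).
delta_T_lgm    = -5.0   # K, approximate global cooling at the LGM
forcing_ghg    = -2.8   # W/m^2, lower CO2, CH4 and N2O
forcing_albedo = -3.7   # W/m^2, ice sheets, vegetation and dust (treated as forcing)

sensitivity = delta_T_lgm / (forcing_ghg + forcing_albedo)
print(f"implied sensitivity: {sensitivity:.2f} K per W/m^2")

f_2xco2 = 3.7           # W/m^2, forcing for doubled CO2
print(f"equivalent 2xCO2 warming: {sensitivity * f_2xco2:.1f} K")
```

This lands near 0.77 K per W/m^2, or roughly 2.8 K for doubled CO2, which is the ballpark Joel Shore refers to.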

Joel Shore
July 23, 2009 7:33 am

Carl Chapman says:

He’s using “global climate model simulations for the 20th century to assess the contribution of solar forcing”. At least he’s admitting what he’s doing. Does he think that checking the models is how you find out about the real world?

As I noted in my post from 10:06:48 on 22 July, they don’t just use climate model simulations, they also look at the actual data. In particular, using the climate model simulations allows them to test different data analysis methods for extracting the contributions due to each forcing from the temperature record in a case where you know the “right answer” (because you can run the climate model with various forcings turned on or off).
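A toy version of that "known right answer" test: build a synthetic temperature record from prescribed solar and greenhouse contributions plus reddened noise, then see how well a naive regression recovers the prescribed coefficients. Everything here (series, coefficients, noise model) is invented for illustration; it is not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1200                                              # months of synthetic data

# Invented "forcing" series
solar = np.sin(2 * np.pi * np.arange(n) / 132.0)      # ~11-year cycle
ghg   = np.linspace(0.0, 1.0, n) ** 2                 # slow, accelerating rise

true_solar, true_ghg = 0.1, 0.8                       # the known "right answer"
noise = np.convolve(rng.normal(0.0, 0.3, n), np.ones(24) / 24, mode="same")  # reddened noise
temp  = true_solar * solar + true_ghg * ghg + noise

# Naive multiple regression, as one might apply to the observed record
X = np.column_stack([solar, ghg, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
print(f"recovered solar={coef[0]:.2f} (true {true_solar}), ghg={coef[1]:.2f} (true {true_ghg})")
```

Because the inputs are known exactly, any mismatch between the recovered and true coefficients measures the analysis method rather than the climate, which is the point of running such tests inside a model world.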

tallbloke
July 23, 2009 7:38 am

Leif Svalgaard (05:57:07) :
On PMOD dropping off the chart: I have posted on this before, it is due to a calibration error of PMOD [how to compensate for degradation].

It’s the degradation that occurs as soon as the sensors are put into service that I think may hide the missing 3/4 of the solar forcing. My rough estimate is that a 0.3% error in calibration would cover the gap. I won’t argue it with you now, because I don’t know enough, but the oceans were getting an additional 4 W/m^2 during 1993-2003, and the present estimate of TSI only accounts for around 0.45 W/m^2 (PMOD) or 0.9 W/m^2 (ACRIM) of the elevation of solar cycles 22-23 above the level where the oceans start to gain heat.
A good chunk of it may come from Nir Shaviv’s terrestrial amplification of around 8x; he suspects it’s clouds. I’m not sure, but I do know it’s coming from somewhere, and since downwelling longwave doesn’t heat the ocean, it ain’t CO2.

Jim
July 23, 2009 7:46 am

Joel Shore (07:10:01) : “The role of the IR-absorbing atmosphere is to send some of it back but it still sends back less than it receives.”
Then why do the recent satellite global temp data show cooling, or at worst flat temperatures, over the past several years?

timetochooseagain
July 23, 2009 7:48 am

Such a shame how this thread has degenerated into those who want to question trivial points about AGW that are probably correct but not alarming, and the advocates who want to defend those points because it is easy and because they can misdirect by implying that the points they can make point to alarm…

Leif Svalgaard
July 23, 2009 7:52 am

tallbloke (07:38:45) :
It’s the degradation that occurs as soon as the sensors are put into service that I think may hide the missing 3/4 of the solar forcing.
The problem with this is that one tries to compensate for the degradation. There are several ways of doing this. The basic principle is this: have several identical sensors, one that measures all the time, one that measures half of the time, one that measures very rarely [e.g. once a month]. When not measuring, the sensor window is shut and no degradation occurs, thus the sensors have different degradation as a function of how often they measure. This allows the degradation to be plotted as a function of exposure time. Extrapolate the curve to zero exposure time and you have overcome the degradation.
Another way [SORCE] is to compare with a selection of non-varying stars. So degradation is supposedly under control, but it is still a difficult measurement.
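A toy illustration of the extrapolate-to-zero-exposure principle, with invented numbers rather than actual PMOD or ACRIM data: sensors opened for different total times show different apparent signal loss, and fitting that loss against exposure recovers the undegraded reading.

```python
import numpy as np

true_tsi = 1361.0                                # W/m^2, assumed "truth" for this toy example
exposure_days = np.array([1000.0, 500.0, 30.0])  # primary, backup, rarely opened sensor
degradation_rate = 2e-5                          # fractional loss per day of exposure (invented)

readings = true_tsi * np.exp(-degradation_rate * exposure_days)

# Fit log(reading) versus exposure and extrapolate back to zero exposure time.
slope, intercept = np.polyfit(exposure_days, np.log(readings), 1)
print(f"zero-exposure estimate: {np.exp(intercept):.2f} W/m^2 (true value {true_tsi})")
```

In this idealized case the extrapolation is exact; as Leif notes, the real difficulty is that the actual degradation law has to be inferred from the same noisy instruments.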

David
July 23, 2009 7:59 am

What about sea ice during the last glacial maximum? Ocean processes were the same?

Smokey
July 23, 2009 8:00 am

James Hansen picks a climate sensitivity number that is large enough to allow him to argue that climate catastrophe is right around the corner. But Hansen also lies; he encourages lawbreaking, he manipulates the GISS numbers, he constantly changes history, and he takes huge amounts of money from individuals and organizations that are promoting an AGW agenda, despite what the real world shows.
It is wrong for the government to continue employing anyone so blatantly unethical, but there is nothing I can do about that. What I can do is refuse to accept Hansen’s artificially jacked up climate sensitivity number.
For all anyone knows, climate sensitivity could be zero. Prove it isn’t.

Jim
July 23, 2009 8:14 am

Joel Shore (07:33:50) : “As I noted in my post from 10:06:48 on 22 July, they don’t just use climate model simulations, they also look at the actual data.”
Is GISS used for this?

Nogw
July 23, 2009 8:36 am

Where could we get a greenhouse farmer using an open greenhouse?

Tim Clark
July 23, 2009 8:47 am

Smokey (08:00:07) :
For all anyone knows, climate sensitivity could be zero. Prove it isn’t.

Smokey, you’re good at getting papers; do you have this one?
How declining aerosols and rising greenhouse gases forced rapid warming in Europe since the 1980s
Rolf Philipona
Federal Office of Meteorology and Climatology MeteoSwiss, Aerological Station, Payerne, Switzerland
Klaus Behrens
Meteorologisches Observatorium Lindenberg, Deutscher Wetterdienst, Lindenberg, Germany
Christian Ruckstuhl
Institute for Atmospheric and Climate Science, ETH Zurich, Zurich, Switzerland
Mainland Europe’s temperature rise of about 1°C since the 1980s is considerably larger than expected from anthropogenic greenhouse warming. Here we analyse shortwave and longwave surface forcings measured in Switzerland and Northern Germany and relate them to humidity and temperature increases through the radiation and energy budget. Shortwave climate forcing from direct aerosol effects is found to be much larger than indirect aerosol cloud forcing, and the total shortwave forcing, which is related to the observed 60% aerosol decline, is two to three times larger than the longwave forcing from rising anthropogenic greenhouse gases. Almost three quarters of all the shortwave and longwave forcing energy goes into the turbulent fluxes, which increases atmospheric humidity and hence the longwave forcing by water vapour feedback. With anthropogenic aerosols now reaching low and stable values in Europe, solar forcing will subside and future temperature will mainly rise due to anthropogenic greenhouse gas warming.
Received 15 October 2008; accepted 5 December 2008; published 20 January 2009.
Citation: Philipona, R., K. Behrens, and C. Ruckstuhl (2009), How declining aerosols and rising greenhouse gases forced rapid warming in Europe since the 1980s, Geophys. Res. Lett., 36, L02806, doi:10.1029/2008GL036350.

timetochooseagain
July 23, 2009 8:48 am

Smokey (08:00:07) : I’m not one to agree with Hansen’s numbers, but come on, “climate sensitivity could be zero”? That’s REALLY impossible (it would mean that no amount of perturbation of the climate system could ever cause any change at all, not even a small amount). That is plainly falsified by the geological record showing rather large changes. That does NOT mean that those changes correspond to large sensitivities (Shore et al’s argument that it does is BS), but it does rule out zero.

africangenesis
July 23, 2009 8:56 am

Another interesting disclosure from the Benestad and Schmidt paper is that the model climate appears to be in a different mode at CO2 doubling. Note the much lower transient climate sensitivity at the time of CO2 doubling:
“The equilibrium climate sensitivity of GISS ModelE is 2.7C for a doubled atmospheric concentration of CO2, whereas the transient response at the time of CO2 doubling in a 1% increasing CO2 experiment is 1.6C”
I hypothesize that the higher 2.7 C figure is an erroneous artifact due to the positive surface albedo bias found in all the AR4 models by Andreas Roesch. He investigated the well known high latitude problems that the models had matching the observations of the 1990s and also focused on the surface albedo in general. Most of the models’ albedo bias was due to a delayed spring snow melt and large snow cover area, larger snow cover fractions, and poor parameterization of the shadows cast on the snow by darker tree “stems”. The correlated positive surface albedo bias found in all the AR4 models, when globally and annually averaged, translates to between 3 and 4 W/m^2. This is energy present in the actual climate, but not in the models. But the models were advancing their snow melts with increasing warming, just not as fast as was actually happening in the decade of the 90s. So models that are claimed to “match” the 20th century warming will eventually, over the course of their projections and the CO2 doubling sensitivity runs, catch up and add over 3 W/m^2, in addition to whatever increase in forcing there is from their CO2 scenarios. This is energy that should have been in their climates already. If I recall correctly, the model that Gavin Schmidt is using in this paper had a surface albedo bias that was better than the average for all the models, so the effect might not be quite as dramatic. I would not be surprised if it were still large enough to explain the discrepancy between the CO2 doubling sensitivity and the transient sensitivity in the climate at the doubling time.
To give the 3 W/m^2 some perspective, the energy imbalance during the 1998 El Niño year was only about 0.75 W/m^2.
http://www.agu.org/pubs/crossref/2006/2005JD006473.shtml
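Separately from the albedo-bias hypothesis, the gap between the quoted 2.7 C equilibrium and 1.6 C transient figures is roughly what a simple one-box energy balance with ocean heat uptake already produces. The sketch below uses invented parameters (an effective ~300 m ocean mixed layer) chosen only to land in that ballpark; it is not the GISS model.

```python
f_2x = 3.7                     # W/m^2, forcing for doubled CO2
ecs  = 2.7                     # K, assumed equilibrium sensitivity (the quoted ModelE value)
lam  = f_2x / ecs              # W/m^2 per K, net feedback parameter
C    = 1000 * 4186 * 300.0     # J/m^2/K, effective heat capacity (~300 m of ocean, illustrative)

years = 70                     # 1%/yr compounding CO2 doubles in about 70 years
dt = 3.15e7                    # seconds per year
T = 0.0
for yr in range(1, years + 1):
    F = f_2x * yr / years      # roughly linear forcing ramp up to doubling
    T += dt * (F - lam * T) / C

print(f"transient warming at doubling: {T:.1f} K (equilibrium for the same forcing: {ecs} K)")
```

The ocean's thermal inertia alone produces a transient response well below equilibrium, so the two figures are not directly comparable.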

Tim Clark
July 23, 2009 8:56 am

Smokey (08:00:07) :
Or this one.
Atmospheric water vapor and surface humidity strongly influence the radiation budget at the Earth’s surface. Water vapor not only absorbs solar radiation in the atmosphere, but as the most important greenhouse gas it also largely absorbs terrestrial longwave radiation and emits part of it back to the surface. Using surface observations, like longwave downward radiation (LDR), surface specific humidity (q) and GPS derived integrated water vapor (IWV), we investigated the relation between q and IWV and show how water vapor influences LDR. Radiation data from the Alpine Surface Radiation Budget (ASRB) network, surface humidity from MeteoSwiss and GPS IWV from the STARTWAVE database are used in this analysis. Measurements were taken at four different sites in Switzerland at elevations between 388 and 3584 m above sea level and for the period 2001 to 2005. On monthly means the analysis shows a strong linear relation between IWV and q for all-sky as well as for cloud-free situations. The slope of the IWV-q linear regression line decreases with increasing altitude of the station. This is explained by the faster decrease of IWV than of q with height. Both q and IWV are strongly related with LDR measured at the Earth’s surface. LDR can be parameterized with a power function, depending only on humidity. The estimation of LDR with IWV has an uncertainty of less than 5% on monthly means. At lower altitudes with higher humidity, the sensitivity of LDR to changes in q and IWV is smaller because of saturation of longwave absorption in the atmospheric window.
Received 28 July 2006; accepted 27 September 2006; published 2 February 2007.
Citation: Ruckstuhl, C., R. Philipona, J. Morland, and A. Ohmura (2007), Observed relationship between surface specific humidity, integrated water vapor, and longwave downward radiation at different altitudes, J. Geophys. Res., 112, D03302, doi:10.1029/2006JD007850.

africangenesis
July 23, 2009 9:35 am

timetochooseagain,
The climate sensitivity couldn’t be zero, but there is enough uncertainty that the net feedback could be close to zero or even negative, even though all the models have the feedback as positive. The unknowns in just the tropical cloud cover dwarf the phenomenon of interest. There are probably tens of W/m^2 of error, when the energy imbalance thought responsible for the warming is less than 1 W/m^2. The correlated model surface albedo bias I discussed above is already on the order of 4 times the energy imbalance. Unfortunately correlated error defeats part of the purpose of combining model ensembles, the hoped-for cancellation of random errors. All of the AR4 models are correlated in having the net tropical cloud feedback as positive. We don’t have the kind of observations that can resolve the issue, although recently published work (by Christy et al.?) suggests that the net feedback might actually be negative.

Gerald Machnee
July 23, 2009 9:47 am

RE: Joel Shore (07:33:50) :
**As I noted in my post from 10:06:48 on 22 July, they don’t just use climate model simulations, they also look at the actual data. In particular, using the climate model simulations allows them to test different data analysis methods for extracting the contributions due to each forcing from the temperature record in a case where you know the “right answer” (because you can run the climate model with various forcings turned on or off).**
That is one of the problems – the “right answer” may not be correct. As in, they “know” that CO2, along with “positive” forcing, is responsible for most of the heat gain.

CodeTech
July 23, 2009 10:12 am

Hmmm – 321 comments… this is why I like WUWT.
Thing is, the planetary atmosphere has many ways to achieve thermal equilibrium. We’ve observed both heating and cooling mechanisms, and even non-science oriented people comprehend the concept of oscillation. Unless someone plants a sun behind us, we will always have a reliable heat source on the day side and a reliable sink on the night side. Overall, what leaves is what arrives. This planet has an overall temperature that is dependent on orbiting at 1 AU around this specific star. It doesn’t matter WHICH mechanism is in play; there will only ever be one temperature range.
It will always amuse me that some people so fervently believe that one small factor can possibly bump the entire system out of equilibrium. Whatever did this poor planet do in years past when REAL problems hit it, like giant projectiles, huge volcanoes, different atmospheric makeup, etc.?
Heck, I’m sitting in an air conditioned office that is alternately just slightly cooler than I’m comfortable with and slightly warmer. Every time it’s slightly away from where I want it to be, I don’t delude myself into thinking I need to run over to the thermostat and mess around with it, and I’ve never once felt the need to chart the temperature trend using straight lines that continue indefinitely either upward or downward, and I’m definitely not going to kill my personal finances by calling in an HVAC company to replace the A/C unit. Although, if I really stretch it, I could probably begin charting temperatures in here, altering the past record to show a malfunctioning HVAC system, and convince the building owner that he needs to replace it…
Forgive my musings here, but exactly what has to be wrong with your logic to believe that a trace gas has any hope at all of causing “runaway” anything, other than runaway: spending, speculation, name-calling, argument, strife, economic suicide, etc.?

Ron de Haan
July 23, 2009 11:01 am

CodeTech (10:12:17) :
“It will always amuse me that some people so fervently believe that one small factor can possibly bump the entire system out of equilibrium”.
CodeTech,
The problem is ignorance and plain stupidity hijacked by politics.
I think it’s hardly amusing; it’s embarrassing.