The Transient Climate Response (TCR) revisited from Observations (once more)

Guest essay By Frank Bosse

In a recent blog post at Dr. Judith Curry’s website, Nicholas Lewis analyzes the climate sensitivity from observations and concludes a TCR of about 1.33 K, which is very stable across different periods (see Table 1 of the linked post).

Here I want to use a slightly different method and another temperature record, the Cowtan/Way (C/W) series (http://www-users.york.ac.uk/~kdc3/papers/coverage2013/series.html). In the discussion of the post at Judith Curry’s website there were big question marks over whether the TCR result would also stand if one uses the “infilled” data, mostly covering the polar regions of the earth. Therefore I’ll use this record to show the difference in the output relative to HadCRUT4 (which N. Lewis used in his calculations).

I investigate the span 1940…2015. This includes the latest increase of the Global Mean Surface Temperature (GMST), see Fig. 1, and avoids the periods of temperature data with great uncertainty in the early years of the observations.


Fig. 1: The GMST anomalies (GMSTA) following the C/W record, 1940…2015.

The forcing data I take from the IPCC AR5 appendix (Tab. AII 1.2, https://www.ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_AnnexII_FINAL.pdf). This record ends in 2011. For the time span 2012…2015 I calculated the CO2 forcing from the observed concentration data, and I extrapolated the other forcings so that the sum of all forcings increases by 0.25 W/m² between 2011 and 2015.
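The CO2 part of that extension can be reproduced with the simplified expression RF = 5.35·ln(C/C0) of Myhre et al. (1998), which also underlies the AR5 numbers (5.35·ln 2 ≈ 3.71 W/m²). A minimal sketch, using approximate global-mean concentrations as placeholder inputs rather than the exact values used for the post:

```python
import numpy as np

C0 = 278.0  # pre-industrial CO2 concentration, ppm

# Approximate global annual-mean CO2 (ppm); placeholders, not the
# author's exact input data.
co2_ppm = {2011: 390.5, 2012: 392.5, 2013: 395.2, 2014: 397.1, 2015: 399.4}

# Simplified CO2 forcing relative to pre-industrial (Myhre et al. 1998)
rf_co2 = {yr: 5.35 * np.log(c / C0) for yr, c in co2_ppm.items()}

for yr in sorted(rf_co2):
    print(f"{yr}: {rf_co2[yr]:.2f} W/m^2")  # ~1.82 W/m^2 in 2011
```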

I excluded the volcanic forcing entirely because the few events during 1940…2015 all occurred before 1991 and would introduce a bias.

One of the biggest pitfalls in the forcing data is the magnitude of the aerosol forcing. In a recent paper (http://journals.ametsoc.org/doi/pdf/10.1175/JCLI-D-14-00656.1) Björn Stevens gave some thought to how much it needs to be reduced. N. Lewis showed (https://climateaudit.org/2015/03/19/the-implications-for-climate-sensitivity-of-bjorn-stevens-new-aerosol-forcing-paper) that the downscaling implicit in the Stevens paper is about 50% (see the appendix of the Climate Audit post). This makes some difference in the total forcing:


Fig. 2: The total forcing (except volcanic) with the aerosol forcing as calculated in AR5 (black) and reduced by 50% (magenta), as implicitly suggested by Stevens (2015).

To avoid a “single study syndrome” I’ll calculate the TCR for both cases: with and without the aerosol-forcing reduction.

To estimate the TCR I calculate a linear least-squares regression of the forcing in every year from 1940 to 2015 against the observed temperatures (annual means):


Fig. 3: Regression of Forcing vs. temperature anomalies. The forcings account for 78% of the variance of the GMSTA.

The slope of the trend line in Fig. 3 is the observed GMST change per 1 W/m² change in forcing.

It’s 0.37 K/(W/m²) for the unchanged aerosol forcing, as shown in Fig. 3; with the reduction included, the slope is 0.32 K/(W/m²) (not shown).
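For readers who want to redo the fit: the slope and the explained variance come from an ordinary least-squares regression. A minimal sketch, assuming the annual forcing and C/W anomaly series have been prepared as plain arrays (the file names are placeholders):

```python
import numpy as np
from scipy.stats import linregress

# Placeholder inputs: annual total forcing 1940-2015 (W/m^2, volcanic
# excluded) and the matching C/W GMST anomalies (K), one value per year.
forcing = np.loadtxt("forcing_1940_2015.txt")
gmsta = np.loadtxt("cw_gmsta_1940_2015.txt")

fit = linregress(forcing, gmsta)
print(f"slope = {fit.slope:.2f} K/(W/m^2)")  # post: 0.37 (full aerosol forcing)
print(f"R^2   = {fit.rvalue ** 2:.2f}")      # post: 0.78
```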

Let’s take a look at the “rest” that is not explained by the (volcano-excluded) forcings: the residuals between the linear regression fit and the observed GMSTA over time:


Fig. 4: The residuals between forcings and observations, with a 15-year Loess smoothing.
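The residual and low-pass steps can be sketched as follows, reusing fit, forcing, gmsta and numpy from the regression sketch above; the post does not give the exact Loess settings, so the window fraction here is an assumption:

```python
from statsmodels.nonparametric.smoothers_lowess import lowess

years = np.arange(1940, 2016)
residuals = gmsta - (fit.slope * forcing + fit.intercept)

# A ~15-year window over the 76-year series corresponds to frac ~ 15/76.
smooth = lowess(residuals, years, frac=15 / len(years), return_sorted=False)
```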

Fig. 4 shows the natural variability: the ENSO events of 1998/2000 and the volcanic eruptions, such as in 1992/1993. There is also a low-frequency pattern, as the low-pass in Fig. 4 shows. I want to compare it with the AMO pattern as described in a modern record suggested by van Oldenborgh et al. (2009): http://dspace.library.uu.nl/bitstream/handle/1874/43930/os-5-293-2009.pdf?sequence=2


Fig. 5: The AMO index; note the rapid shift in the 90s.

The AMO seems to be a part of the internal variability, as its pattern is well replicated in Fig. 4. The amplitude of the impact of this index on the GMST is about 0.2 K, and a shift from negative to positive also occurred during the years between 1976 and 2005, the span over which we know that many models were “tuned” (see Mauritsen et al. 2012, http://onlinelibrary.wiley.com/doi/10.1029/2012MS000154/full).

The models don’t replicate the AMO; they don’t “know” of it.

Finally, let’s have a look at the TCR values. It’s well known that a doubling of the GHG-concentrations will lead to a forcing of 3.71W/m². For the trend slopes discussed above, this gives (a short calculation follows the list):

1. TCR 1940…2015 for full aerosol forcing: 1.39 K/(2*CO2)

2. TCR 1940…2015 for reduced aerosol forcing: 1.19 K/(2*CO2)

3. TCR 1976…2005 (model “tuning” span): 2.3 K/(2*CO2)
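The conversion is just slope × forcing per doubling. A one-line check (the quoted 1.39 K implies an unrounded slope slightly above 0.37, and the 0.62 slope for the tuning span is back-computed here from the quoted 2.3 K):

```python
F_2X = 3.71  # W/m^2 per doubling of CO2

for label, slope in [("full aerosol forcing", 0.37),
                     ("reduced aerosol forcing", 0.32),
                     ("1976-2005 'tuning' span", 0.62)]:
    print(f"TCR, {label}: {slope * F_2X:.2f} K")  # ~1.37, 1.19, 2.30 K
```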

Conclusions:

The results of N. Lewis for the TCR (1.33 K with full aerosol forcing and 1.22 K with reduced aerosol forcing) are confirmed, with a deviation of only about 4%, for another temperature record.

The residuals show a clear AMO-like pattern which is an essential part of the internal variability.

If one ignores this pattern, one gets unrealistically high TCR values, greater than 1.8.

Many GCMs do so; see Forster et al. (2013): http://onlinelibrary.wiley.com/doi/10.1002/jgrd.50174/full

Comments
Alex
May 11, 2016 1:12 am

‘Forcing’. What an amusing term. I have only come across it in the pseudo science called climate science

Reply to  Alex
May 11, 2016 1:32 am

It is a programmer’s delight, allowing, for example, the kitchen sink to be included in GCMs provided one can be barefaced enough to come up with said sink’s effect on the model’s TOA ‘radiation budget’. That ‘effect’ is then added, and turned into a ’cause’ under which the model’s subsequent gyrations are to be observed, pampered, tampered, edited, and screened until something deemed sensible comes out for the benefit of the grant-giving general public. It looks a bit rubbish as a business model, but my goodness it has been extremely successful for decades.

Reply to  Alex
May 11, 2016 7:35 am

YES !
What has bothered me since I got diverted into this stagnant nonscience
http://cosy.com/Science/AGWppt_UtterStagnationShavivGraph.jpg
is the near total lack of even curiosity about the solid fundamental physical quantitative analytical foundation one expects in any other field of applied physics . It’s as if the subject were as physically unknowable as which field of hybrid maize would grow best which was the sort of problem which led R A Fisher to figure out so many of the statistical procedures which are taken as “explanations” in this field .
It’s not . It is an issue of physics . And very few appear , literally , to know how to calculate the temperature of a billiard ball under a sunlamp . Yet they propose to explain the 4th decimal place variations in our estimated mean temperature over our radiantly heated sphere without that understanding . That’s not explanation ; that’s curve fitting .
I saw it before in grad school in visual psychophysics . The subfield I was in happily churned their grants experimenting whether there were “Fourier detectors” in the visual system without really learning the underlying math which showed that no external experiment could resolve the neural basis beyond similarity classes . So when I made the error of sitting in on too many math classes and understanding that ( and learning APL so I could show it was true of classical experimental data ) my path to a PhD was over .
In this field , basic calculations show that the temperature gradients between tops of atmospheres and bottoms cannot be explained ( particularly in the extreme case of Venus ) by spectral phenomena . Yet the paradigm persists with the questions is the “forcing” high or low or whatever with no fundamental quantitative definition of “forcing” in terms of classical physical heat transfer equations .
The only effect green house gases can have on our mean temperature is their effect on our spectrum as seen from the Sun . And by Beer’s law , that is de minimis .

Reply to  Alex
May 11, 2016 10:54 am

It’s not that hard to understand.
In its idealized state the earth system is in “equilibrium”: energy out and energy in balance.
Perturbations that drive a system out of balance are called forcings. You apply a force
and the system goes out of balance.
It’s too funny. The first time I worked in radar cross sections I was really upset because people used
strange terminology like dbsm. How could the size of something be expressed in decibels?
If your goal is misunderstanding the science it doesn’t take much effort… as you proved

george e. smith
Reply to  Steven Mosher
May 11, 2016 12:00 pm

Well Earth is never in thermal equilibrium. You can tell that just from the fact that at almost any location on the planet, much of the time, the temperature starts to increase, around sunrise, and keeps getting hotter, until late in the afternoon, when it starts cooling down.
Meanwhile the other side of the planet in darkness, keeps on radiating, and getting colder and colder, until at last sunrise occurs.
And in between all kinds of chaotic things occur, that are often entirely unpredictable.
G

Pop Piasa
Reply to  Steven Mosher
May 11, 2016 1:58 pm

Thanks G, you’re grounded in reality. Wish I’d said that.

Reply to  Steven Mosher
May 11, 2016 2:16 pm

Could you express that as an equation I can implement?
I still find your definition fuzzy.

Alex
Reply to  Steven Mosher
May 11, 2016 11:50 pm

Mosh
You couldn’t possibly be saying that the use of a specific term like ‘dbsm’ could be connected to the use of a vague term called ‘forcing’? Could you? I suggest pulling up your slip because your bias is showing.
‘Forcing ‘ is not a word used in any field of science (AFAIK).
Forcing is a word used by activists because of its obvious connotations.
‘If your goal is misunderstanding the science it doesn’t take much effort… as you proved’
I think your use of ‘science’, in this case, is a misnomer. More like shamanism.
I quite admire the technology of NASA and the manufacturers of sophisticated sensors. I just have a problem with the many activist scientists (term used loosely) who pervert the data to their own ends and use inflammatory terms like ‘forcing’.
I guess you have to eat and provide for your personal needs. Principle doesn’t come into it.

whiten
Reply to  Steven Mosher
May 12, 2016 2:38 pm

Steven Mosher
May 11, 2016 at 10:54 am
It’s not that hard to understand.
In its idealized state the earth system is in “equilibrium”: energy out and energy in balance.
………….
If your goal is misunderstanding the science it doesn’t take much effort… as you proved.
———————–
Mosher, is your goal to misinterpret and confuse the science about climate and the atmosphere?
The atmosphere is a part of the earth system, isn’t it?
TCR stands for the atmospheric, or climate, response, not the earth system’s.
As far as I know there is no similar term covering the earth system’s response to RF.
For lack of a better word, the idealized state of the earth system being in “equilibrium” could be considered as always having held, while at the same time the atmosphere could be considered as almost never being in “equilibrium”.
Looking for a fluctuation, or a period of anomaly, in the earth system’s energy balance is like trying to find a needle in a haystack; for the atmosphere, the needle in the haystack is instead the periods when it actually is in energy balance, when the ToA energy imbalance is neither positive nor negative, the idealized state of “equilibrium”.
cheers

davideisenstadt
Reply to  Steven Mosher
May 13, 2016 4:48 am

Alex:
“forcing” is a term encountered in 1st-year physics, when harmonic resonance is discussed.
Really, I dislike Mosh’s snark, equivocation and pretentiousness, as well as his employer’s appropriation of the name “Berkeley”, but in this case you’re simply incorrect.

Seth
Reply to  Alex
May 12, 2016 12:45 am

Alex wrote: “‘Forcing’. What an amusing term.”
It’s a universal concept in modelling a system that has inputs.
https://en.wikipedia.org/wiki/Vibration#Vibration_testing
Alex wrote: “I have only come across it in the pseudo science called climate science.”
While it’s perfectly plausible that you haven’t come across the concept, I think you’re mistaken in your characterization of climate science as a pseudo science.
In fact there are a lot of peer-reviewed papers published by very prestigious scholarly publishing groups on the subject.
NPG has a whole journal dedicated to it.
http://www.nature.com/nclimate/index.html
You’ll notice there’s no Nature – Ghosthunting nor Nature – Alternative Medicine journal.

Alex
Reply to  Seth
May 12, 2016 2:16 am

A direct physical action like vibration imposed in some mechanical way is not what I was talking about. I was referring to spurious correlations made to fit an agenda, and the language used to reach that aim.
Peer review in some fields is found wanting. “Prestigious scholarly publishing groups” – the ones who put human knowledge behind paywalls? I don’t have much respect for them. I go to Sci-Hub for these papers; they are free there.
You referred me to a website and claimed it wasn’t connected to Nature. It was.

Reply to  Alex
May 12, 2016 8:57 am

So how much have you read in differential equations? Do you prefer “perturbations”, “transients”, “drivers”, “inputs”, “exogenous variables” (more common in autoregressive time series analysis than in diffeqn)? “Forcing” is well-defined and understandable. The CO2 theory of climate change has problems, but use of the word “forcing” is not one of them.

Alex
Reply to  matthewrmarler
May 13, 2016 1:47 am

I do prefer the earlier terms because they are neutral. We are in the climate wars and I will give no quarter. Perhaps I am being picky. You clearly have no issue with the term. I have accepted it without really thinking about it as well. Now I think otherwise. It’s a pity that the twaddell is used in SG readings. I would love for it to be used somehow with climate science.

Eliza
May 11, 2016 1:24 am

How about waking up and assuming zero effect?

Reply to  Eliza
May 11, 2016 4:28 am

I don’t think that here is the right place to discuss the basic physics of radiative heating.

Pop Piasa
Reply to  frankclimate
May 11, 2016 2:00 pm

Well, some do need to learn.

Stephen Richards
May 11, 2016 1:43 am

“avoids the periods of temperature data with great uncertainty in the early years of the observations”
I think that the inverse might be more accurate. The adjustments made by National weather services make the current data unusable.

May 11, 2016 1:46 am

“The forcing data I take from the IPCC AR5 appendix (Tab. AII 1.2)”
The solar radiative forcing used by IPCC is very likely not correct. Modern reconstructions of TSI [based on the recent revision of the group sunspot number] show quite a different picture:
http://www.leif.org/research/Solar-Radiative-Forcing-AR5-vs-TSI.png
The main difference is the rise of the minimum-values 1900-1950, that most likely didn’t really happen.
I don’t know what difference this makes to your analysis, but it is always good to avoid input data that are defective.

Reply to  lsvalgaard
May 11, 2016 1:54 am

Revised measurement data used in new reconstructions are defective data too.
Unless you can go back and recount the sunspots.

Reply to  Mark
May 11, 2016 3:13 am

No need to, as the sunspots have already been counted.

Reply to  Mark
May 11, 2016 3:37 am

No revised number is DATA. Any revised number is an estimate. Referring to these numbers as DATA bestows undeserved credibility, playing into the hands of the manipulators.

Reply to  firetoice2014
May 11, 2016 3:41 am

This is not correct. If two observers differ in their count by a factor of two, e.g. by using different sized telescopes, multiplying the weaker one by two, makes it comparable to the stronger one. Both are DATA. The revision simply puts both on the same scale.

Reply to  lsvalgaard
May 12, 2016 9:05 am

It is with some trepidation that I take issue with lsvalgaard.
If two observers using the instruments you describe make the observations you describe under the conditions you describe, we have two pieces of data: the count of objects observed by the observer with the smaller telescope; and, the count of objects observed by the observer with the larger telescope. However, the comprehensiveness of each of those counts is limited by the instruments; that is, the larger telescope allows counting of objects not visible through the smaller telescope. You can assume, logically, that had the observations been made under identical conditions by identical telescopes, the counts would have been identical. However, that logical assumption is not another data point.
While in your example, the telescope which is twice as large permits observation of twice as many objects, that is no guarantee that an even larger telescope might/would permit observation of an even larger number of objects at the same time under the same conditions. Also, there is no guarantee that the number of objects observable by your larger telescope would always be twice the number observable by your smaller telescope, since that would be a function of the size distribution of the objects observable with the largest telescope. Otherwise, there would be no need for progressively larger telescopes, since the counts taken through smaller telescopes could simply be doubled. I doubt you would be comfortable stating that a further doubling of telescope size would result in counting twice again as many observable objects.

Reply to  firetoice2014
May 13, 2016 11:30 am

This ‘problem’ has been with us for ~150 years. The solution is simple and effective: we select a certain [rather small] telescope as the ‘standard’ telescope and a certain observer using that standard telescope as the ‘standard’ observer. Then there is no ‘logic’ issue, just a question of comparing with the standard [eventually through an intermediate observer: A with B, B with C gives you A with C].

MarkW
Reply to  Mark
May 11, 2016 8:04 am

If we take the data that we assume to be wrong, and adjust it to better match the data we assume to be right, we are assuming that we have increased the overall accuracy.

Reply to  MarkW
May 11, 2016 8:06 am

Remove the word ‘assume’ and you’ll be correct.

MarkW
Reply to  Mark
May 11, 2016 9:50 am

It must be nice to be so confident in yourself that even your assumptions are facts.

Flyover Bob
Reply to  Mark
May 11, 2016 10:06 am

lsvalgaard,
Multiplying the weaker data by whatever factor matches the stronger instrument (2, 4, or whatever) would be fine if the same event was being observed at the same time. My understanding is these were two separate events. Multiplying the earlier observation by the later factor produces an assertion, not data.

Reply to  Flyover Bob
May 11, 2016 10:49 am

The factor is determined by comparing observations of the same spots at the same time.

Reply to  Mark
May 12, 2016 8:11 am

Firetoice2014 said: “No revised number is DATA”. Yes. Yes it is. So is the number of angels that can dance on the head of a pin. Just because a number is meaningless doesn’t mean it isn’t data.

You also said “Any revised number is an estimate.” So what. Nearly all measured data are estimates, based on other data that are also usually estimated. Your comment indicates that you either grew up in the digital age and hence haven’t seen something like an old-style triple-beam balance, or you never had to measure things precisely. If a balance has a line for every gram, a chemical technician will estimate mass to the nearest 10th of a gram. Even a digital scale where that is not possible is estimating. There are numerous uncertainties involved. That is why proper data analysis includes estimates of error.

Even some things that are counted, such as sunspots, are estimates. There are criteria, such as how big a spot is before it is counted. We can “see” much better now than we could 100 years ago because we have better optics, so we can “see” spots on the sun that were not visible 100 years ago. To correlate today’s count with that of 100 years ago, we need the criteria. If you revise the basis for what that minimum size is, then the number of spots on the sun that get counted as sunspots could change. And if there are a number of spots close to the minimum size, the number can also change depending on what (and who) is measuring the size.

If someone reviews the data and it indicates that the original estimate is incorrect, one revises the data. The revision is still data, still an estimate and still useful. Happens all the time in science. Dr. Svalgaard may need to correct my assertions about sunspot counts.

Reply to  Mark
May 12, 2016 9:02 am

Mark W: If we take the data that we assume to be wrong, and adjust it to better match the data we assume to be right, we are assuming that we have increased the overall accuracy.
If you were to discover part way along that some of your temperatures were in Celsius and the others in Fahrenheit, you would make some adjustment or another. The adjusted data would still be data. It is the same with Leif Svalgaard’s adjustment for telescope power. When you have a good case that something must be adjusted to fit something else, not to do so would be incompetence or malfeasance.

Dermot O'Logical
Reply to  lsvalgaard
May 11, 2016 2:15 am

The solar radiative forcing used by IPCC is very likely not correct.
That’s really interesting. A Mark 1 Eyeball inspection of the chart suggests that until about 1900, the IPCC reconstruction understated the incoming energy budget by about 0.1 W/m2, so though Earth was cooler, it was getting more energy than previously thought.
That would imply that the more recent warming is less a result of the “Modern Maximum” in solar activity, and more… something else.
It’s (not?) the Sun?

RACookPE1978
Editor
Reply to  Dermot O'Logical
May 11, 2016 3:13 am

lsvalgaard.

The solar radiative forcing used by IPCC is very likely not correct.

Well, then let us re-phrase the question.
http://spot.colorado.edu/~koppg/TSI/TSI.jpg
“Total Solar Irradiance (TSI)
34 years – Instrument offsets are unresolved calibration differences, much of which are due to internal instrument scatter (see Kopp & Lean 2011).”
Granted the apparent differences in TSI over time, decreasing from 1372 watts/m^2 in the mid-1980’s down to today’s accepted 1362 watts/m^2 at TOA.
But.
Hansen’s (and all of his cohorts’) hysteria was fueled by THEIR calculations in the mid to late 80’s, when they ran their super-computer programmed GCM models using 1372, 1370, 1367 watts/m^2, right? They had to: that was the accepted TSI radiation coming in.
But, if the actual TSI was NEVER 1372 watts/m^2, then NONE of their programs could have been right.
NONE of the CAGW community’s conclusions could have been right then, nor now, UNTIL the GCM models are re-run with the TSI set to your new standard value of 1362 watts/m^2. And I understand that 1362 was not determined, revised, released or published until late 2010.
So, no statement nor conclusion about future global warming made before 2010 – unless it is based on calculations using 1362 watts/m^2 at TOA – is correct. Or, any such fear of +3 watts/m^2 “forcing” from CO2 must be corrected with a drop in TSI corresponding to -10 to -7 watts/m^2.

Reply to  lsvalgaard
May 11, 2016 3:18 am

Oh right. So I obviously misinterpreted “revised” in terms of how the data was revised. Fair cop, lsvalgaard.

AndyG55
Reply to  lsvalgaard
May 11, 2016 4:25 am

Pity you told us all you were going to “adjust” this data before you found a reason to do so.
Are you related to Tom Wiggly?

Reply to  lsvalgaard
May 11, 2016 5:39 am

Thanks Leif for standing by. I think you are right: the very tiny changes (in relation to the total forcings) after 1940 should not have any impact on the conclusions. Anyway: where can I get the numerical data for the corrected solar forcing?

Reply to  frankclimate
May 11, 2016 6:12 am

The IPCC value for solar radiative forcing is 0.137 W/m2 for 1 W/m2 change in TSI. The theoretical value would be 0.7/4 = 0.175 W/m2.

Reply to  frankclimate
May 11, 2016 9:26 am

Thanks Leif for the data… as assumed: no impact on the TCR estimate. The post-2011 data are very valuable!

Pop Piasa
Reply to  frankclimate
May 11, 2016 2:14 pm

My gut says that there is a climatic connection to the heliosphere, but the sunspot number is only correlative and not an absolute indicator of heliospheric conditions.

Reply to  Pop Piasa
May 11, 2016 4:56 pm

The heliosphere is controlled by the sun’s magnetic field, which does follow the sunspot number.

Andrew_FL
Reply to  lsvalgaard
May 11, 2016 6:32 am

“I don’t know what difference this makes to your analysis”
The solar forcing basically amounts to a rounding error, so, basically, none.

Reply to  Andrew_FL
May 11, 2016 7:49 am

So, you don’t subscribe to “it’s the Sun, stupid!” This is good.

Andrew_FL
Reply to  lsvalgaard
May 11, 2016 10:57 am

I’m simply stating a fact. The changes the IPCC assumed in solar irradiance are negligible; changing them to your even… negligiblier estimates doesn’t make a difference worth caring about.

MarkW
Reply to  Andrew_FL
May 11, 2016 8:05 am

Only if you assume that direct TSI is the only way the sun influences the earth’s climate.

Reply to  MarkW
May 11, 2016 8:07 am

All the other solar indicators vary like TSI, so as TSI goes, so go the rest…

Andrew_FL
Reply to  MarkW
May 11, 2016 10:58 am

This is what the analysis assumes. So changing the input to a slightly smaller value than the already negligible value assumed, isn’t going to change the answer the analysis gives. This was his question. That’s the answer to the question.

MarkW
Reply to  Andrew_FL
May 11, 2016 9:51 am

Thank you for making my case for me.
What you can’t assume is that all the other drivers are as minor as TSI.

Reply to  MarkW
May 11, 2016 10:47 am

Don’t need to assume anything. Since all the other influences vary as TSI does, if TSI does not show any effect, the others don’t either.
You cannot assume that the other variables have any effect. Especially since there is no evidence for any.

Andrew_FL
Reply to  MarkW
May 11, 2016 11:00 am

I don’t. The analysis does. The question was how would the analysis change if you changed the input slightly, not what would happen if you assumed something completely different.

Reply to  Andrew_FL
May 11, 2016 10:57 am

“Only if you assume that direct TSI is the only way the sun influences the earth’s climate.”
Yes, don’t forget the “unicorn” force of the sun…

MarkW
Reply to  Andrew_FL
May 11, 2016 11:52 am

Steven, even by your own pathetic standards, that was weak.
It is nice when the defenders of CAGW go out of their way to proclaim to the world, how ignorant they are of basic science.

Andrew_FL
Reply to  MarkW
May 11, 2016 12:13 pm

You guys think maybe you could have this argument without replying to my comment, so I don’t keep thinking I have something I need to respond to?

Alex
Reply to  Andrew_FL
May 12, 2016 12:04 am

Andrew_FL
You are merely a pawn in this game. Don’t presume you are a player. Your comment is merely an excuse for bickering.

Andrew_FL
Reply to  Alex
May 12, 2016 8:11 am

Ha ha ha.

May 11, 2016 1:48 am

Two records now show the sensitivity to be low. And the reason that the models are tuned high has been identified.
This is reaching the point where the Precautionary Principle no longer needs to be applied.
Bit of a blow for climate profiteers.

Pop Piasa
Reply to  MCourtney
May 11, 2016 2:29 pm

Well put, sir. I join you in your call to common sense.

May 11, 2016 1:49 am

There is no certainty in this work; first of all, the time series from 1940 is bogus. That data has been altered beyond any recognition decades after the measurements were taken; I just don’t accept this. Each adjustment required strong empirical support, of which there was none, as there could not be, not with measurement data.
So I will never accept any work based on bogus statistics.
Certainties are only so within the confines of the model. The certainties have no relation to the real physical world.
They lowered the low end to 1.5 C because that is well within the range of natural variability, which was another trick to keep the models relevant.
The new-style model spread, instead of spaghetti strings, also shows this, as only two models track temperature, and the fudging, tweaking and so on.
This process is bogus from the beginning: too many assumptions and bogus certainty levels.

billw1984
Reply to  Mark
May 11, 2016 5:32 am

Now that the estimates of TCR and ECS are coming down, the IPCC (and the Paris talks and enviro-loons) wants to change the level of dangerous climate change to 1.5 C. This is since 1850 or 1880, which was a cool period, and we have already had about half of this 1.5 C. Mankind has really suffered as the temperature has increased this 0.7 to 0.8 degrees. In fact, several world wars and many genocides were caused by this temperature rise since 1850. And the population of humans on the planet is much lower and in poorer health than in 1880. So, I really see their point. These are some sharp people. I have an infinite level of respect for intellects of this caliber.

When you realize that the temperatures in the earliest parts of the record have been adjusted downward (possibly correctly), IIRC about 0.4 degrees, then about half of the 0.8 rise since 1880 is possibly from adjustments. The declaration that 2.0 C might be dangerous (aside from being pulled out of someone’s nether-regions) was made with the older, warmer 1880 temperatures before they were adjusted. So the actual danger point, if it were really possible to say anything about this at all (given the process of evolution by which every creature on earth can adapt to changes like these), was for a real temperature 2.0 C above the “assumed real temp.” in the 1880s before the adjustment. Lowering the assumed temp. does not change the danger point. The “danger” point is a real temperature, not a delta T due to changing a number on paper. If we were to change the past temperatures again (lower), or raise what we think the current temperature is by 2.0 C in the data table (but not in the real world), that would not make us all get heat stroke. This is another thing that some don’t seem to understand.

Reply to  Mark
May 11, 2016 11:02 am

TCR should actually be calculated with a longer time series, as Nic Lewis has done.
Basically you take two periods:
one period around 1880–1900,
the second period the last 15 years.
From these two you get a DELTA temperature.
The bigger the delta, the higher the sensitivity.
If you want to use RAW data for 1880–1900 and raw data from 2000–2015, your answer will be a higher sensitivity.
Own goal.

MarkW
Reply to  Steven Mosher
May 11, 2016 11:53 am

Interesting how the CAGW’ers actually want to include data from a time when CO2 wasn’t increasing but temperature was, to prove that CO2 causes temperature to rise.
Speaking of an own goal.

Reply to  Mark
May 12, 2016 9:16 am

Mark: Each adjustment required strong empirical support, of which there was none as their could not be, not with measurement data.
All of the adjustments have been motivated by changes in the thermometers (relocation, change, aging, time of day of recording), correlations among the thermometer series, and discernible changes attributable to changes in nearby land use.
Some people may be overconfident of the results, but none of the work is “bogus”.

Tony
May 11, 2016 2:06 am

A triumph of computation over common sense. I suppose CO2 will be responsible for the next ice age or do we just modify the forcing?

Alex
Reply to  Tony
May 11, 2016 2:11 am

May the forcing be with you

Steve Fraser
Reply to  Alex
May 11, 2016 9:08 am

That Is _so_ last week…

Alex
Reply to  Alex
May 12, 2016 12:08 am

My apologies. I might have been busy with real things last week

Steve O
May 11, 2016 3:12 am

“Forcing”, whether of people, science, or thought (in the 21st century) is from the Government in power. Political Power is using the forcing of jail or fines (or just a lawsuit) on citizens to make them conform to the Fiat Policies of the moment spoken into “law” by the powers that be.

charles nelson
May 11, 2016 3:27 am

I see you got rid of the 1940s blip!

seaice1
May 11, 2016 3:38 am

It would be very interesting to see a sensitivity analysis with different start and end dates. If the result remains constant it would give extra confidence in the result. Shorter periods would presumably lead to more scatter, but could reveal any systematic errors if, for example, starting later gave a consistently different result from ending sooner.
We must also note that the TCR is not the same as the ECS.

Reply to  seaice1
May 11, 2016 4:44 am

In a recent survey (http://www.bitsofscience.org/real-global-temperature-trend-climate-sensitivity-leading-climate-experts-7106/) 13 experts (such as M. Mann, G. Schmidt, S. Rahmstorf, G. Hegerl, P. Forster) were asked for their estimates of the Equilibrium Climate Sensitivity (ECS). The result: the most likely value is around 3, meaning that after reaching an equilibrium also in the oceans, after a few centuries, the GMST will show an increase of 3 K after a doubling of CO2. This is a theoretical value; much more interesting for the development up to 2100 is the TCR, which was discussed here. One of the experts asked is Piers Forster, who is also a co-author of Millar et al. (2015) (https://www2.physics.ox.ac.uk/sites/default/files/2012-12-14/millaretal2015_untypeset_pdf_56359.pdf). In this paper the relation TCR/ECS is investigated.
http://up.picr.de/25508224ck.png
The most likely TCR/ECS fields from models and observations (Fig. 4d of Millar et al. (2015)).
The “best” estimate of the mentioned survey, ECS = 3, is surely deduced from the model projections. The observed TCR of 1.19…1.39 indicates a value of only about 2 for ECS.

seaice1
Reply to  frankclimate
May 11, 2016 7:35 am

Thank you for the elucidation of the difference between TCR and ECS.

MarkW
Reply to  seaice1
May 11, 2016 6:52 am

Since there was very little increase in CO2 prior to 1940, why would extending the study to include earlier data mean anything?

seaice1
Reply to  MarkW
May 11, 2016 7:31 am

MarkW. The data was restricted to post-1940 to avoid unreliable data. If the data were reliable then extending it backward would surely be a good idea. However, I took the reason at face value and suggested different, not necessarily longer, periods. If you take 1941, then 1942, etc., do you get a different result? If you end in 2014, 2013, 2012, etc., do you get the same result? I would expect the results to scatter more as the sample period shortened, but would not expect any systematic changes.

MarkW
Reply to  MarkW
May 11, 2016 8:06 am

I love the way trolls actually pretend that they are responding to your point.
The point is to calculate the transient response to CO2 increases. As such, including data that covers a time when CO2 wasn’t increasing would be nonsensical, unless your point is not to calculate the actual transient response, but rather find a way to make the data fit your preconceived theory.

Reply to  MarkW
May 11, 2016 9:02 am

MarkW: The increase of GHG is represented in the forcings. The scatter diagram was NOT CO2 versus temperatures (which would indeed introduce a bias) but forcings versus temperatures. The temperatures in the early part of the time span were lower due to the lower forcing, which is well captured in the scatter diagram. If one uses too short a time span, then the natural (not forced) variability would have too much influence.

Reply to  MarkW
May 12, 2016 9:11 am

Mark W: I love the way trolls actually pretend that they are responding to your point.
Quote a particular claim by a particular person. Please.

Reply to  seaice1
May 11, 2016 11:04 am

“It would be very interesting to see a sensitivity analysis with different start and end dates. If the result remains constant it would give extra confidence in the result. Shorter periods would presumably lead to more scatter, but could reveal any systematic errors if, for example, starting later gave a consistently different result from ending sooner.”
Nic did this.
Here is what you want to aim at.
1. A start period with little volcanic activity
2. An end period with little volcanic activity.
3. The start and end periods should be in the same phase of a natural variation.
4. The longest period you can find.

Duster
Reply to  Steven Mosher
May 11, 2016 9:47 pm

That would depend upon a reliable count of erupting volcanoes. Since that has never been a reality until the launch of earth observing satellites, the “period” is likely to be both very short and unrepresentative of reality over geological spans.

May 11, 2016 3:52 am

As with all these TCR/ECS studies, this one starts out with the assumption that all observed warming over a certain period of time is caused by some hypothetical (and entirely mathematically derived) rise in atmospheric “radiative forcing” on the surface. A fully circular argument, if ever there was one. Pseudoscience, not science …
https://okulaer.wordpress.com/2016/01/10/the-climate-sensitivity-folly/

seaice1
Reply to  Kristian
May 11, 2016 4:41 am

Kristian, to what would you attribute the warming? Warming requires energy and it must come from somewhere. “Nature” is not an answer.

FTOP_T
Reply to  seaice1
May 11, 2016 4:54 am

It is bright during the day. Look up.
All kidding aside. There is one energy source for all temperature change and one element that has the physical properties to absorb AND RETAIN that energy.
The sun heats the ocean
The ocean absorbs and retains heat based on cloud cover, incident angle, wind speed, etc.
CO2 cannot impact ocean temp due to its inability to penetrate at depth and the massive order of magnitude difference in heat capacity
Man can’t affect ocean temperature
Ocean cycles cause irregular releases of this energy
We observe this (see El Niño) but somehow don’t accept it
All the rest is just hand waving

MarkW
Reply to  seaice1
May 11, 2016 6:54 am

What caused the warming since the end of the little ice age?
What caused the warming of the Medieval, Roman and Minoan warm periods?
What caused the warming of the Holocene Optimum?
Since you have ruled out “natural” as the cause, please define a mechanism by which man caused them.

Reply to  seaice1
May 11, 2016 8:29 am

““Nature” is not an answer.”
Why not? “Nature” was the answer throughout the previous 4.5 billion years …
To be more specific: sun/clouds + oceans/atm. circulation. And there you have it 🙂

Vlad the Impaler
Reply to  seaice1
May 11, 2016 10:06 am

To expand on Kristian’s comment, why isn’t “Nature” an adequate answer? Prior to the Industrial Revolution (which began the process of releasing gigatonnes of ‘carbon’ (sic) into the atmosphere), did global climate NEVER change? Was it constant for billions of years, always the same temperature, then suddenly, about two centuries ago, it started changing? And changing uni-directionally? “Nature” never changed the climate, to any degree (no pun intended) whatsoever?
You make the assertion, the burden is on you to prove that “Nature” has never changed the climate at all.
And, yes, that is essentially what you are claiming. Some of us think that “Nature” has an overwhelming hand in the current ‘climate change’, and man’s influence, even IF it can be measured, is trivial. The very “Nature” you dismiss so cavalierly, has changed climate drastically in the past. It is now established within the geological community that the Pleistocene climate changes (yes, they pre-date human influence) were of the order of four-to-five Celsius degrees, within a time span of decades [three or four decades, mind you], and some researches think that time scale is excessive. They argue that the changes took place even faster.
Yet here you are, apoplectic over a ‘change’ in ‘average global temperature’ of about one Celsius degree in a century, give-or-take. Really? Please get over yourself. “Nature” does things man could never hope to do.
“Warming requires energy, and it must come from somewhere.” Very true. It probably comes from a coupled, non-linear, dynamic system, doing what it has been doing for better than 4 Ga: storing, releasing, moving, and changing energy patterns, responding to perturbations like meteor impacts, volcanoes, and what-not. Situation: normal. Forecast: continual change, both directions (in relation to temperature). Dominant “greenhouse gas” (sic): water/water vapor. Equilibrium: not possible (it’s called “weather” and “climate”, and they’re always CHANGING!)
Vlad

Pop Piasa
Reply to  seaice1
May 11, 2016 3:11 pm

Seaice1, you need to consider how much TSI makes it to the surface globally due to constantly changing cloud conditions throughout each day and how much outgoing radiation is modulated by the same variations in cloud conditions through each night.
If you want to identify the random factor of climate, it is clouds. “Nature” is quite capable of providing the variations in watts per square meter delivered to the earth’s surface to accomplish warming or cooling over decadal periods or longer.
You seem macro-focused on the latest developments in climate and troubled about when the present upward temperature trends will end. May I say; be careful what you wish for- as the present climate epoch has been one of the most beneficial in the (incredibly short) history of the human race on this planet.

Reply to  Kristian
May 11, 2016 11:12 am

“As with all these TCR/ECS studies, this one starts out with the assumption that all observed warming over a certain period of time is caused by some hypothetical (and entirely mathematically derived) rise in atmospheric “radiative forcing” on the surface.”
Err, no, that is not how you do the math.
Understand what TCR is. It is the transient climate response to ALL FORCING.
Read Nic Lewis. You look at the change in ALL of the following:
A) GHG forcing (methane, CO2, etc.)
B) solar forcing
C) aerosol forcing
D) land use forcing
Simple: you start by computing delta temperature / delta forcing.
Delta T and Delta F are calculated based on a reference period in the past.
So for example Delta T might be 1 C and Delta F might be 1 watt. So the system sensitivity
is 1 C per watt. Increase the sun 1 watt, the temp goes up 1 C. Increase the land use forcing 1 watt,
the temp goes up 1 C.
So you have to sum all the forcings. But they all have uncertainties. THESE UNCERTAINTIES,
the uncertainties in Delta F, are the key.
THESE uncertainties drive the wide envelopes in TCR and ECS.
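A minimal numerical sketch of the energy-budget calculation described above; every number here is an illustrative assumption (not a value from Lewis’s work), and the Delta F uncertainty is propagated with a simple Monte Carlo draw:

```python
import numpy as np

rng = np.random.default_rng(0)

F_2X = 3.71      # W/m^2 per doubling of CO2
delta_T = 0.75   # K: temperature change between the two periods (illustrative)
delta_F = 2.0    # W/m^2: summed forcing change (illustrative)
sigma_F = 0.5    # W/m^2: 1-sigma uncertainty on delta_F (illustrative)

# TCR = F_2x * dT / dF; sampling dF shows how its uncertainty
# widens the TCR envelope.
dF = rng.normal(delta_F, sigma_F, 100_000)
dF = dF[dF > 0]  # discard unphysical draws
tcr = F_2X * delta_T / dF

print(f"median TCR {np.median(tcr):.2f} K, "
      f"5-95% range {np.percentile(tcr, 5):.2f}-{np.percentile(tcr, 95):.2f} K")
```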

Reply to  Steven Mosher
May 11, 2016 12:53 pm

*Yawn*

Pop Piasa
Reply to  Steven Mosher
May 11, 2016 3:20 pm

OK I’ll bite. Would the uncertainties of near-surface Delta F be possibly attributable to the variations in cloud nucleation?

Reply to  Kristian
May 12, 2016 9:26 am

Kristian: this one starts out with the assumption that all observed warming over a certain period of time is caused by some hypothetical (and entirely mathematically derived) rise in atmospheric “radiative forcing” on the surface.
It is hypothetico-deductive reasoning: if the major cause of the temp increase is the increase in forcing, then this is the best estimate of the transient sensitivity. It only becomes circular if no one tests the assumptions against other data, and if no one tests all the consequences of the computed results. It is not evidence, by itself, that the forcing change was the only cause of the temperature change.
There is another way to put it: granting for the sake of argument that the IPCC-endorsed mechanism is correct, and that the data are reliable (two assumptions that lots of people accept as at least reasonable), then this is the best estimate possible at this time, and the IPCC has exaggerated the warnings, as have other people. This does not mean that you or I must accept those assumptions, but this is an important calculation for communicating with the people who do.

May 11, 2016 4:00 am

So, Frank? Why are you trying to get us to believe you have accurate measures of temperature with a precision of 0.1 degrees from 1940? Think we all just fell off the cabbage truck yesterday?

ECB
Reply to  Bartleby
May 11, 2016 5:29 am

“temperature record, the Cowtan/Way (C/W)”
Call me puzzled that temperature data sets which include average daily temperatures are used.
It seems to me that the most certain data are the daily max and min temperatures, where such are measured, as they do not require a TOD adjustment. Could the pre-1940 data then be used?
Then do separate model runs with the high/low anomalies and compare.
Would that not make this modelling exercise more instructive on the TCR range?

JasG
May 11, 2016 4:45 am

I think the point was to use their own fake numbers and prove they are still high with climate sensitivity even assuming the invalid upper bound case that all warming is manmade.

May 11, 2016 4:58 am

On climate sensitivity to increasing atmospheric CO2:
While fossil fuel combustion and atmospheric CO2 both increased strongly since about the 1940’s, global temperatures decreased from ~1940 to ~1975, increased to ~2000 and have been flat since – so there is a negative correlation of temperature with CO2, a positive one, and a zero one.
The evidence suggests that a near-zero ECS is the correct answer – CO2 is NOT a significant driver of global temperatures, and the alleged global warming crisis does not exist.
Furthermore, please note that atmospheric dCO2/dt varies closely (correlates) and ~contemporaneously with global temperature, and its integral atmospheric CO2 lags temperature by about nine months in the modern data record. CO2 also lags temperature by ~800 years in the ice core record, on a longer time scale. Therefore, CO2 lags temperature at all measured time scales.
Consider the implications of this evidence:
CO2 lags temperature at all measured time scales, so the global warming (CAGW) hypothesis suggests that the future is causing the past [good luck with that]. 🙂
The evidence strongly suggests that temperature, among other factors, drives atmospheric CO2 much more than CO2 drives temperature. This does NOT suggest that human factors such as fossil fuel combustion, deforestation, etc. do not increase atmospheric CO2 – but the increase in CO2 is not harmful and is beneficial.
_______________
On climate model hindcasting and fabricated aerosol data:
The climate models cited by the IPCC typically use values of climate sensitivity to atmospheric CO2 (ECS) that are significantly greater than 1 C, which must assume strong positive feedbacks for which there is NO evidence. If anything, feedbacks are negative and ECS is less than 1 C. This is one key reason why the climate models cited by the IPCC greatly over-predict global warming.
I reject as false the climate modelers’ claims that manmade aerosols caused the global cooling that occurred from ~1940 to ~1975. This aerosol data was apparently fabricated to force the climate models to hindcast the global cooling that occurred from ~1940 to ~1975, and is used to allow a greatly inflated model input value for ECS.
Some history on this fabricated aerosol data follows:
http://wattsupwiththat.com/2009/06/27/new-paper-global-dimming-and-brightening-a-review/#comment-151040
More from Douglas Hoyt in 2006:
http://wattsupwiththat.com/2009/03/02/cooler-heads-at-noaa-coming-around-to-natural-variability/#comments
Regards, Allan

MarkW
Reply to  Allan MacRae
May 11, 2016 6:56 am

hear, hear

Pop Piasa
Reply to  Allan MacRae
May 11, 2016 4:48 pm

Well stated, Allan. It should be obvious to a middle-schooler that warming climates produce O2 consumers faster than O2 producers. The result is a (temporary) surplus of CO2, which stimulates the growth of transpiratory life until sinks exceed sources or temperature falls below the limit of organic activity.

Reply to  Pop Piasa
May 11, 2016 7:43 pm

Interesting comment Pop.
Jan Veizer said something similar to me, circa 2008 or earlier, and Jan is very capable.
See Veizer (2005) and the classic Veizer and Shaviv (2003).
Veizer (2005) at
http://www.gac.ca/publications/geoscience/TOC/GACgcV32No1Web.pdf
Veizer and Shaviv (2003) at
http://cfa.atmos.washington.edu/2003Q4/211/articles_optional/CelestialDriver.pdf

Leonard Weinstein
May 11, 2016 5:34 am

Your assumption is that the corrected variation is due to human forcing. There is zero evidence that it is not part of natural variation. This is especially shown for the last 15 or so years, where the actual variation, with the ENSO effect removed, is flat to dropping temperature. In other words, you have cherry-picked a start and stop period and come to a meaningless conclusion.

May 11, 2016 5:38 am

… This includes the latest increase of the Global Mean Surface Temperature (GMST) , see Fig. 1, and avoids the periods of temperature data with great uncertainty in the early years of the observations. ~ from post
The early years of the observations were a period when there was no data tampering due to the modern idea that data should be changed to match the preconceived conclusions (uber confirmation bias). I would trust data from the 30s over the current data any day.
Besides that; just because one can graph a little data on a nice plot does not show us the cause. See the work of statistician W.M. Briggs if you fail to understand this part. http://wmbriggs.com/
On top of the above quibbles, I would point out that there is no proof that CO2 is a “forcing” at all. We only have a “consensus” that it is. And, historically, scientific “consensus” nearly always turns out to be wrong.
~ Mark

ConcernedCitizen
May 11, 2016 5:38 am

Isn’t this assuming CO2 is responsible for all the warming?
Anyway, the issue is ECS and “ocean heat uptake” of CO2-derived energy.
Since 15-micron EM only penetrates 0.0005 cm, the ocean can’t absorb any significant amount of energy from CO2; thus TCR = ECS.

Reply to  ConcernedCitizen
May 11, 2016 11:13 am

No.

MarkW
Reply to  Steven Mosher
May 11, 2016 11:54 am

Yes

Reply to  Steven Mosher
May 11, 2016 11:55 am

I admire you for the patience you show…

Dr. Strangelove
May 11, 2016 5:46 am

A TCR estimate from satellite data is more reliable because there are no issues with the urban heat island effect, weather-station siting, or sea-temperature measurement methodology. From Spencer’s paper (2008):
“The slopes of the striations seen in the right panels of Fig. 4 (relative to atmospheric temperature) correspond to strongly negative feedback: around 6 Watts per square meter per degree K of temperature change (6 W m-2 K-1). In fact, even though we expect feedbacks diagnosed from the data to be biased toward zero, here the lines fitted to all the data have slopes actually approaching that value: 6 W m-2 K-1. Translated into a global warming estimate, a feedback of 6 W m-2 K-1 would correspond to a rather trivial 0.6 deg. C of warming in response to a doubling of atmospheric CO2.”
http://www.drroyspencer.com/research-articles/satellite-and-climate-model-evidence/
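For reference, the arithmetic behind Spencer’s quoted 0.6 deg. C: with a feedback parameter of about 6 W m-2 K-1 and a CO2-doubling forcing of 3.71 W/m², the implied warming is 3.71/6 ≈ 0.6 K.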

May 11, 2016 6:24 am

ToA solar input is 340 W/m^2. While this incoming radiation is fairly stable the rest of the balance is not. Albedo kicks back about 30% or 100 W/m^2. Even a minor fluctuation in albedo cancels out the minor power fluxes discussed in this thread. Figure 10 of Trenberth’s paper that I have cited elsewhere shows that CO2’s RF influence over the climate’s heating/cooling is trivial, a third or fourth decimal point lost in the natural magnitudes, fluctuations, and uncertainties.

MarkW
May 11, 2016 6:41 am

“and avoids the periods of temperature data with great uncertainty in the early years of the observations.”
that should probably be “greatest uncertainty”, because even today there is still “great uncertainty” when it comes to land based temperatures.

Peta in Cumbria
May 11, 2016 7:08 am

“Radiative Forcing”
Isn’t the world so full of clever people that they all know what that is? We know our future is secure. And the children’s. And the grandchildren’s.
All I want to know is: what is the temperature of the ‘object’ that is doing this radiating? And I don’t want it to oodles of decimal places; is it warmer or cooler than the object being forced?
Well, is it?
Maybe while we’re on, can we have a plot of temperature versus world population, or food production vs temperature, or nitrogen fertiliser consumption vs temperature or vs CO2 concentration?
What about albedo changes from autumn-sown crops vs spring-sown crops, or temperature vs the number of mature trees on the planet (a large tree using 100 gallons of water per day has a cooling effect of 100 W/m² just from evaporating that water), or albedo changes from the expanding size of cities?
Why does the daily cycle of CO2 concentration in my garden (plus/minus 60 ppm per day) look like a magnified version of the Mauna Loa data (plus/minus 3 ppm per year)?
Maybe the CO2 is not coming from chimneys and exhausts and maybe temperature has nothing to do with its concentration in the atmosphere.
Maybe the CO2 is a symptom of something else…

Pop Piasa
Reply to  Peta in Cumbria
May 11, 2016 6:49 pm

CO2 is a symptom of an expansion of the portion of Earth that is above the threshold of aerobic life.

Pop Piasa
Reply to  Pop Piasa
May 11, 2016 6:51 pm

Sorry, the temperature threshold of aerobic life.

May 11, 2016 7:45 am

Nic Lewis stated, “The thus estimated annual mean planetary heat uptake rate over 1995-2015 is 0.63 Wm−2, rather higher than the 0.51 Wm−2 over 1995-2011 used in LC14.”
So I suppose your method would give a similar figure.
I note that James Hansen estimated net radiative imbalance at TOA of 0.58 Wm-2. Stephens et al rounded this to 0.6 Wm-2 in 2012 and in the same year Loeb et al adjusted the figure to 0.5 Wm-2.
So it seems there is additional empirical support from NASA. I wonder why James Hansen does not accept the implications of his own paper.
Hansen, James, et al. “Earth’s energy imbalance and implications.” Atmospheric Chemistry and Physics 11.24 (2011): 13421-13449.
http://content.csbs.utah.edu/~mli/Graduate%20Placement/20110826_EnergyImbalancePaper.pdf
Loeb, Norman G., et al. “Observed changes in top-of-the-atmosphere radiation and upper-ocean heating consistent within uncertainty.” Nature Geoscience 5.2 (2012): 110-113.
http://xa.yimg.com/kq/groups/18383638/336597800/name/ngeo1375.pdf
Stephens, Graeme L., et al. “An update on Earth’s energy balance in light of the latest global observations.” Nature Geoscience 5.10 (2012): 691-696.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.397.3342&rep=rep1&type=pdf

Reply to  Frederick Colbourne
May 12, 2016 9:32 am

Frederick Colbourne: I wonder why James Hansen does not accept the implications of his own paper.
I think that is an illuminating comment. I find in my readings that a number of the people warning of catastrophic CO2-induced warming ignore plenty of evidence, including what they have themselves developed. That is one of the reasons that I think calculations like Nic Lewis’ and those in today’s essay are important: Even if we accept the evidence and theory cited by the IPCC and others, it is clear when using them consistently that the threats of CO2-induced warming have been exaggerated.

May 11, 2016 8:02 am

I added Solar forcing to my data processing (which took about 3 weeks to run, ugh).
But I calculate a relative solar forcing, which you'd multiply by whatever the solar forcing of the day is (in W/m²) to get the actual W/m² of forcing.
I have been calculating the slope of the temperature change as the length of day changes, and therefore the energy put into the system on a daily basis, station by station. Then for the area getting reported on, I average these values. The data is the data, I don't make up data for stations that do not exist. The larger the area, the more the weather averages out.
So, I have a derivative of station temp for the warming and cooling periods, based not on absolute temp but on the daily change in temp. So the cooling slope runs from March to October (the daily rate of change peaks in March for the Northern Hemisphere, and its maximum cooling rate is in October). Conversely, the warming derivative starts in October and runs to March.
Plotted out, this derivative looks like this.
I think this is either Northern Hemisphere or Global stations. [image]
If you take the slope of each year and plot that out, it looks like this.
US SW Desert area (min, avg, max temp slope, both warming and cooling) [image]
This is sensitivity: I've taken the slopes as defined above and divided them by the relative solar forcing based on altitude and latitude for each day of the year. [image]
The trend line is a 2nd-degree polynomial.
But this doesn't include any fluctuation from changes to the Solar Constant: not only does my data not go back far enough, it seems as if it's being adjusted. I do have data from someplace, but I'm not sure if it's the new or the old, so it is not used for this graph.

Reply to  micro6500
May 11, 2016 8:05 am

Oh, forgot the most important part: the Sensitivity data is based on the stations in the latitude band of N30 to N40 from around the world, since they will all have a similar daily solar input.

Reply to  micro6500
May 11, 2016 8:12 am

Sorry, I used the wrong graph for Sensitivity; that was the slope for the specific station in N30-N40.
Here's the Sensitivity graph. [image]
Sorry for the confusion.

May 11, 2016 8:14 am

Frank Bosse writes that ”It’s well known that a doubling of the GHG-concentrations will lead to a forcing of 3.71W/m².” Yes, it is well known, but it is not correct. According to my published paper it is only 2.16 W/m2, calculated from the formula RF = 3.12 * ln(C/280). Another big error in the IPCC's calculations is that the climate sensitivity parameter (CSP), with a value of 0.5 K/(W/m2), includes the positive water feedback, which is not correct. The CSP without any water feedback is 0.27 K/(W/m2). The correct TCR = 0.27*2.16 = 0.6 K. You can calculate this value with pen and paper.
A short analysis of the IPCC's model (= RF calculation): the error between the IPCC model (0.5*2.34 = 1.17 K) and the observed temperature in 2011 was 1.17 °C – 0.85 °C = 0.32 °C, a substantial error of 38 %. What is this error at the end of 2015? NOAA is a very IPCC-minded organization and they publish the annual RF values of GH gases. The increase from 2011 to the end of 2015 (2015 value estimated by the author) is 0.16 W/m2. It means that the estimated RF value of 2015 would be 2.34+0.16 = 2.50 W/m2, corresponding to a global temperature increase of 1.25 °C. This value is 47 % higher than the observed temperature of 0.85 °C, which has stayed about constant since 2000. This is illustrated in the figure below. [image]
In AR5 the IPCC does not show the dT value corresponding to the RF value of 2.34 W/m2. There are 1552 pages in the Physical Science Basis of AR5, but this information cannot be found. I have challenged many people to find it, but no results so far. The IPCC staff wrote 1552 pages and could not show the temperature corresponding to the RF value of 2.34 W/m2. But at least 99 % of all readers of AR5 seem to be very happy that there is an RF value of 2.34 W/m2. The science is settled.
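For anyone who wants to check this arithmetic with more than pen and paper, a minimal Python sketch of the two competing calculations; the formulas and CSP values are exactly the ones quoted in the comment above, nothing else is assumed.

    # Sketch: the two back-of-envelope TCR calculations in this comment.
    import math

    def rf_ipcc(c, c0=280.0):      # Myhre-style expression used by the IPCC
        return 5.35 * math.log(c / c0)

    def rf_ollila(c, c0=280.0):    # formula claimed in the comment above
        return 3.12 * math.log(c / c0)

    print(rf_ipcc(560.0))          # ~3.71 W/m2 for a doubling
    print(rf_ollila(560.0))        # ~2.16 W/m2 for a doubling

    # TCR = CSP * RF(2xCO2), with the two CSP values quoted above:
    print(0.5 * rf_ipcc(560.0))    # ~1.85 K
    print(0.27 * rf_ollila(560.0)) # ~0.58 K, the comment's 0.6 K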

Reply to  aveollila
May 11, 2016 11:16 am

“Frank Bosse writes that ”It’s well known that a doubling of the GHG-concentrations will lead to a forcing of 3.71W/m².” Yes, it is well known, but it is not correct.”
too funny… that physics is used day in and day out to build devices that work.
Also, the doubling of CO2 (not GHGs) will lead to an additional 3.71 W/m².

Marcus
Reply to  Steven Mosher
May 11, 2016 12:21 pm

What does the “physics used day in and day out to build devices that work” have to do with “GHG-concentrations will lead to a forcing of 3.71W/m²”?

Reply to  Steven Mosher
May 11, 2016 12:53 pm

Steven,
If you think that the IPCC's RF value of 3.71 W/m2 is correct, as well as the CSP of 0.5 K/(W/m2), then you have just one anomaly to explain: why the temperature does not obey these two simple formulas, if the science is correct.

Reply to  Steven Mosher
May 11, 2016 1:00 pm

Steven,
I think that you have no idea how the formula RF = 5.35 * ln(C/280) was calculated. Can you give just one example of equipment utilizing this formula?

blueice2hotsea
Reply to  Steven Mosher
May 12, 2016 8:13 am

“too funny”
I thought it had been known for decades that the 3.71 erroneously includes longwave forcing from contrails. Anyway, Pekka (RIP) agreed with my amateur SWAG of 3.6 W/m^2.
If the following is correct, then 3.58 is better:
http://adsabs.harvard.edu/abs/2015EGUGA..17.3523S

blueice2hotsea
Reply to  Steven Mosher
May 12, 2016 8:20 am

weird url. try this.
http://adsabs.harvard.edu/abs/2015EGUGA..17.3523S

whiten
Reply to  Steven Mosher
May 12, 2016 10:27 am

Steven Mosher
May 11, 2016 at 11:16 am
Also, the doubling of CO2 (not GHGs) will lead to an additional 3.71 W/m².
————————
But that does not necessarily mean that the added watts will, by default, increase the energy budget of the Earth or the atmosphere; at a given point, such a watt increase could very well mean the opposite.
Earth is not a simple body. It is a proper working system with its own ability and capability of self-regulation, especially when it comes to its energy budget, with very high accuracy and precision.
And if I am not wrong, a watt is not actually energy; it is power. Even your electricity bills are for energy, not power: regardless of how much power you are supplied with, or subjected to as per your consumption needs, you will still pay the bill per the energy consumed, even in the case of a power increase.
cheers

May 11, 2016 8:20 am

We need to quit being fuzzy about what we really mean by the “response” in TCR. Here it seems to be surface temperatures, measured in louvered boxes at about two meters altitude on land and by distressingly variable methods at sea.
Well and good, but when we get to wondering about “response” variability, might it be well to consider more than this 2 m microlayer? Maybe the rest of the ~12 km troposphere? [image]
We have no way to be sure we are even seeing a “response” to CO2, as opposed to, say, dark energy flux. Whatever we are seeing a response to, the response is clearly zero in the upper troposphere, and it declines progressively above the surface.
What makes this strange is that all that warm surface air should be rising and mixing in.

Peter Sable
May 11, 2016 9:36 am

There is also a low-frequency pattern as the low-pass in fig.4 shows. I want to compare it with the AMO-pattern as it’s described in a modern record suggested by v. Oldenborgh et al. (2009)

I’m glad you see those low frequency patterns. The problem is, there are low frequency patterns you DO NOT see, because you’ve cut off your temperature at 1940. You’ve basically removed from your analysis all frequencies with periods longer than 75 years.
The Null Hypothesis says there are likely frequency components with energy at periods longer than 75 years. You've basically rolled all those low-frequency components into your trend line.
Your trend line is statistically meaningless. Nearly all trend lines on complex evolving systems with energy components outside the analysis window are statistically meaningless. I wish I would stop seeing them.
Peter
PS: I realize because there’s limited temperature history that you can’t actually do the analysis. That’s actually the correct outcome: “Not enough data”.
PPS: Proxy temperature histories give an inaccurate but nevertheless significant indication that there are frequency components of the temperature at periods of hundreds to thousands of years. We don't know the magnitude (because it's a proxy and not comparable to thermometers), but you can see a glimmer of the frequency components.
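Peter Sable's windowing point is easy to demonstrate numerically: fit a least-squares trend to a 75-year slice of a pure oscillation whose period is longer than the window, and the unresolved low-frequency component shows up as a “trend”. A minimal sketch follows; the 200-year period and 0.3 K amplitude are arbitrary illustration values.

    # Sketch: a 75-year window of a slow oscillation yields a spurious trend.
    import numpy as np

    years = np.arange(1940, 2016)                # the 75-year analysis window
    slow = 0.3 * np.sin(2 * np.pi * (years - 1900) / 200.0)  # 200-yr cycle

    slope = np.polyfit(years, slow, 1)[0]        # least-squares linear trend
    print(f"apparent trend: {slope * 100:+.2f} K/century, with zero forcing")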

Reply to  Peter Sable
May 11, 2016 10:12 am

Peter,
“The problem is, there are low frequency patterns you DO NOT see, because you’ve cut off your temperature at 1940. You’ve basically removed from your analysis all frequencies with periods longer than 75 years.”
That’s why the wording was “pattern” not “frequecies” or “oscillations”.

MarkW
Reply to  frankclimate
May 11, 2016 2:00 pm

Are frequencies and oscillations not patterns?
Your desperation is showing again.

Peter Sable
Reply to  frankclimate
May 11, 2016 6:01 pm

Perhaps you should read the quote, it says “Low Frequency Pattern”.
Also, what MarkW said.

Reply to  frankclimate
May 11, 2016 11:24 pm

Frequencies and oscillations are patterns but not all patterns are frequencies and oscillations.
Your desperation is showing again.

MarkW
Reply to  frankclimate
May 12, 2016 7:20 am

So you admit that you were wrong, but you accuse me of desperation.
Fascinating.

May 11, 2016 9:36 am

Nice post. I especially like the AMO tuning period in the residuals analysis. It hits directly on the anthropogenic attribution problem in a clear, simple-to-comprehend way. Bookmarked for future use.

Reply to  ristvan
May 12, 2016 9:34 am

I second your comment.

May 11, 2016 9:50 am

Thanks ristvan. It’s the method: “KISS” and sometimes it gives a clear picture 🙂

whiten
May 11, 2016 10:43 am

All the numbers for TCR in the above post, according to the definition of 2*CO2 (2xCO2), are basically wrong and in error by a factor of 2, regardless of whether the main calculation or the method is valid or correct enough.
All the resulting numbers for the TCR must be multiplied by 2, since the warming you get on the way up toward a ppm increase is the same as on the way down from an already increased ppm point back to the starting point of your calculation, as per the orthodoxy of climatology, where ppm increments mean only warming.
But anyway, any attempt to calculate TCR or ECR, for whatever purpose or motive, will always be wrong with respect to what it should mean in reality, simply because the basic drive and meaning is solely oriented toward serving ACC-AGW assessment scenarios, where an increment of ppm always and only means warming. So the numbers have a huge chance of being wrong; but even if by chance a number were right, the scenario assessed would still be an ACC-AGW scenario, not a natural one. So no TCR or ECR value will really help assess a past, present or future that is not anthropogenic; it is no good at all for an assessment of nature as per nature, without the so much claimed anthropogenic effect.
cheers
cheers

May 11, 2016 11:50 am

Do you find the claims of global warming potential (GWP) for various GHGs suspicious? Have you been puzzled that CO2, which can absorb terrestrial radiation only at 15 microns (pressure broadening etc. spreads this to about 14-16 microns, with the peak at 15), is considered to have greater GWP than water vapor, which can absorb radiation at hundreds of different wavelengths?
The EPA erroneously asserts GWP is a measure of “effects on the Earth’s warming”, with “Two key ways in which these [ghg] gases differ from each other are their ability to absorb energy (their “radiative efficiency”), and how long they stay in the atmosphere (also known as their “lifetime”).” https://www3.epa.gov/climatechange/ghgemissions/gwps.html
This calculation overlooks the fact that any effect the GHG might have on temperature is also integrated over the “lifetime” of the gas in the atmosphere, so the duration in the atmosphere cancels out. Therefore GWP might not mean what you think. It is not a measure of the relative influence of GHGs on average global temperature on a per-molecule or per-weight basis.
The influence of a GHG molecule on average global temperature depends on how many different wavelengths of EMR the molecule can absorb. Water vapor molecules can absorb hundreds in the wavelength range of terrestrial radiation, compared to only one for CO2.
A consequence of this is CO2 has no significant effect on climate as demonstrated at http://globalclimatedrivers.blogspot.com

May 11, 2016 12:56 pm

The post finds a TCR of 1.22 using a reduced aerosol factor.
Another 10 to 15 years of flat or declining global average surface temps and/or satellite-sourced atmospheric temps will yield an even lower estimated TCR, assuming CO2 emissions continue on their current trends. Will the estimated TCR then be significantly below 1.0? Seems so.
John

Reply to  John Whitman
May 11, 2016 1:25 pm

It won’t be because the value of 1.3…1.4 for non reduced aerosol forcing stands for different time spams as it’s shown in Nic Lewis post(cited in the beginning of the actual post). So the thesis is: also in 2030 ( 15 years from now) there is a TCR of about 1.35. You can recalculate for your self what this means for the guesses in 2100. Or too much provided??

May 11, 2016 1:09 pm

The TCR to natural CO2 changes can be seen in the geologic record prior to the ~1850 start of the industrial revolution. N'est-ce pas?
John

Bruce of Newcastle
May 11, 2016 1:26 pm

The handling of the ~60-year cycle is correct, since the analysis goes peak to peak, i.e. one full cycle. By contrast, the IPCC start theirs in 1906, at the bottom of the cycle, and finish in 2005 at the top of the following cycle, which adds an artefact worth about 0.3 °C.
Unfortunately, the second omitted-variable-bias problem is the indirect solar forcing. The Sun hit a grand maximum around 2005 as well. Without this second artefact also being removed, this calculation of TCR is likewise too high. Given that the magnitude of indirect solar warming last century is similar to the magnitude of the ~60-year cycle, that suggests the real TCR is closer to the CERES- and ERBE-derived value of about 0.7 °C per doubling.
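Bruce's start/end-phase point can be checked the same way as the windowing example above: trend a multidecadal cycle trough-to-peak (1906-2005) versus roughly peak-to-peak (1940-2015). The ~66-year period, the phase and the 0.15 K amplitude are illustration values chosen only to match the dates he names.

    # Sketch: trough-to-peak sampling of a multidecadal cycle inflates the
    # trend; sampling ~one full cycle largely cancels it.
    import numpy as np

    def cycle(years, period=66.0, amp=0.15, peak_year=2005.0):
        return amp * np.cos(2 * np.pi * (years - peak_year) / period)

    for window in (np.arange(1906, 2006), np.arange(1940, 2016)):
        slope = np.polyfit(window, cycle(window), 1)[0]
        print(f"{window[0]}-{window[-1]}: {slope * 100:+.3f} K/century")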

May 11, 2016 1:53 pm

Point the First: The Earth's carbon balance involves a lot more than atmospheric CO2. Carbon is found in the carbohydrates and sugars in terrestrial vegetation, seaweed and algae, in the calcium carbonates in shellfish, coral and limestone, in CO2 dissolved in the ocean, in permafrost, in fossil fuels buried in the ground, etc. Carbon in these various forms is stored in pools/reservoirs and flows back and forth between these reservoirs, absorbed and released, as fluxes at rates of hundreds of Gt/y. (The anthropogenic CO2 net rate is 2.0, not 2,000, not 200, not 20: 2.0!!!)
Per IPCC AR5 Figure 6.1 there are 46,713 Gt (Pg) of carbon in the global system. The uncertainty is about +/- 850 Gt, a total uncertainty range of 1,700 Gt, or 3.6% of the total.
Before 1750 there was 589 Gt of atmospheric CO2: 589/46,713 = 1.3%. This atmospheric pool is 34.6% of the total uncertainty. In 2011, after 261 years of anthropogenic CO2 production, there was 829 Gt of atmospheric CO2: 829/46,713 = 1.8%. This larger pool is 48.8% of the total uncertainty. This 240 Gt is not an increase in the total balance amount, but simply a 0.5% rearrangement of the existing pools and fluxes.
IMHO, with an uncertainty range of 1,700 Gt, nobody can say with any certainty whether that minuscule 0.5% rearrangement/change was due to natural variations, ocean outgassing, land use changes, sea floor volcanic activity (nobody knows what's happening on the ocean floor) or anthropogenic sources.
Point the Second: IPCC AR5 table SPM 5 shows the following W/m^2 RF due to increased GHGs between 1750 and 2011:
CO2 – Min/Ave/Max – 1.33/1.68/2.03
CH4 – Min/Ave/Max – 0.74/0.97/1.20
GHGs – Min/Ave/Max – 1.13/2.29/3.33
Figure 10 in Trenberth et al. 2011 (Atmospheric Moisture Transports from Ocean to Land and Global Energy Flows in Reanalyses) shows the power flux values for eight models/studies/analyses that were the subject of the paper. (The watt is a power unit, not an energy unit: 3.41 Btu/Wh or 3.6 kJ/Wh; English units with Btu, metric/SI with kJ.)
What happens inside the system stays in the system. All that matters is the net flow at ToA. If fewer W/m^2 leave than enter, the temperature will increase. So the net effect of GHGs should be reducing the W/m^2 leaving ToA by 2.29 W/m^2.
Seven of the eight analyses modeled net cooling, ranging from -31 W/m^2 to -1.1 W/m^2. Compare that to the 2.29 W/m^2 from GHGs. The average of all eight was still -3.4 W/m^2 cooling.
1) Anthropogenic CO2 is trivial, lost in the magnitudes, fluxes and uncertainties of natural variations.
2) The additional atmospheric CO2’s RF is trivial, lost in the magnitudes, fluxes and uncertainties of natural variations.
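The pool shares in Point the First reduce to a few lines; all figures below are the AR5 Figure 6.1 numbers quoted above, nothing else is assumed.

    # Sketch: the pool-share percentages quoted in Point the First.
    total_c = 46713.0     # Gt carbon in the global system (AR5 Fig. 6.1)
    c_1750  = 589.0       # Gt atmospheric carbon before 1750
    c_2011  = 829.0       # Gt atmospheric carbon in 2011
    uncert  = 1700.0      # total uncertainty range, Gt

    print(f"{c_1750 / total_c:.1%}")             # ~1.3% of the total in 1750
    print(f"{c_2011 / total_c:.1%}")             # ~1.8% of the total in 2011
    print(f"{(c_2011 - c_1750) / total_c:.1%}")  # ~0.5% rearrangement
    print(f"{uncert / total_c:.1%}")             # ~3.6% uncertainty range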

Reply to  Nicholas Schroeder
May 12, 2016 2:31 am

Interesting comments Nicholas.
And see my above post at
https://wattsupwiththat.com/2016/05/11/the-transient-climate-response-tcr-revisited-from-observations-once-more/comment-page-1/#comment-2212064
[Excerpt]
Furthermore, please note that atmospheric dCO2/dt varies closely (correlates) and ~contemporaneously with global temperature, and its integral atmospheric CO2 lags temperature by about nine months in the modern data record. CO2 also lags temperature by ~800 years in the ice core record, on a longer time scale. Therefore, CO2 lags temperature at all measured time scales.
Consider the implications of this evidence:
CO2 lags temperature at all measured time scales, so the global warming (CAGW) hypothesis suggests that the future is causing the past. 🙂
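The nine-month-lag claim is mechanically testable with a cross-correlation scan. A minimal sketch on toy data follows; real monthly CO2 and temperature series would be substituted for the synthetic ones, which are built here with a built-in nine-month lag so the method has something to find.

    # Sketch: find the lag (months) maximizing the temperature/CO2 correlation.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 600                                      # 50 years of monthly data
    base = np.cumsum(rng.normal(0.0, 0.05, n + 9))
    temp = base[9:]                              # toy temperature series
    co2 = base[:-9] + rng.normal(0.0, 0.02, n)   # toy CO2, lagging temp by 9 mo

    def best_lag(x, y, max_lag=24):
        lags = list(range(-max_lag, max_lag + 1))
        corrs = [np.corrcoef(x[max_lag + k : n - max_lag + k],
                             y[max_lag : n - max_lag])[0, 1] for k in lags]
        return lags[int(np.argmax(corrs))]

    # Negative result: shifting temperature back ~9 months matches CO2 best,
    # i.e. temperature leads CO2 by about nine months in the toy data.
    print(best_lag(temp, co2))                   # -9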

Reply to  Nicholas Schroeder
May 12, 2016 10:30 am

Nic – Yup! Good work.
And corroborated by the simple 'top down' analysis, run on a desktop computer, which calculates average global temperatures that are a 97% match to measured values since before 1900 even when the effect of CO2 is ignored. Accounting for CO2 improves the match by only 0.1%. http://globalclimatedrivers.blogspot.com

bw
Reply to  Nicholas Schroeder
May 12, 2016 11:34 am

The IPCC seems to use “carbon” when it means CO2. The 829 gigatonnes refers to “carbon”.
The amount of CO2 in the atmosphere is about 3000 gigatonnes. Simple calculation.
CO2 exchange between the surface and atmosphere is about 600 gigatonnes per year. The amount of anthropogenic CO2 is 30 gigatonnes per year.
CO2 never accumulates in the atmosphere, any more than water accumulates in a river.

Reply to  bw
May 12, 2016 3:08 pm

No, the IPCC means carbon. It's the only way to treat the pools and fluxes evenly. Per the Figure 6.1 footnotes the conversion is GtC / 2.12 = ppmv, i.e. 2.12 GtC per ppmv: basically correcting CO2 volume to CO2 mass by 44/28.97 and CO2 to carbon by 44/12, or 3.67.
1750: 589 GtC / 2.12 = 278 ppmv (proxies)
add 240: 240 / 2.12 = 113 ppmv (WAG estimates; dry-labbed 43% of total anthro to make the numbers work)
2011: 829 / 2.12 = 391 ppmv (MLO)

bw
Reply to  bw
May 12, 2016 4:53 pm

The amount of carbon dioxide in the Earth's atmosphere is 3120 gigatonnes.
This link has the simple conversion using IPCC ppm numbers.
https://www.skepticalscience.com/print.php?r=45
Several references show the mass of Earth's atmosphere is 5,200,000 gigatonnes. 400 ppm by volume times 44/29 is 607 ppm by mass, so the mass proportion of carbon dioxide is 0.000607.
5,200,000 times 0.000607 is 3156 gigatonnes of carbon dioxide.

Reply to  bw
May 12, 2016 6:25 pm

383 ppmv CO2 * 44.00 (CO2) / 28.97 (air) = 582 ppmm CO2
582 ppmm CO2 * 5.148E6 Pg atmosphere = 2.995E3 Pg CO2 (your value, or close enough)
2.995E3 Pg CO2 / 3.67 = 816 Pg C
816 Pg C / 383 ppmv CO2 = 2.13 Pg C per ppmv CO2 (≈ the 2.12 factor)
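The whole conversion chain in this sub-thread fits in a few lines. The constants are the ones used above (molar masses 44.00 and 28.97 g/mol, atmosphere mass 5.148E6 Pg, C-to-CO2 mass ratio 3.67).

    # Sketch of the ppmv -> Pg conversions worked through above.
    M_CO2, M_AIR = 44.00, 28.97    # g/mol
    ATMOS_PG = 5.148e6             # mass of the atmosphere, Pg (5.148e18 kg)
    CO2_PER_C = 44.0 / 12.0        # ~3.67 Pg CO2 per Pg C

    def ppmv_to_pg_co2(ppmv):
        ppmm = ppmv * M_CO2 / M_AIR        # volume fraction -> mass fraction
        return ppmm * 1e-6 * ATMOS_PG      # mass fraction -> Pg CO2

    def ppmv_to_pg_c(ppmv):
        return ppmv_to_pg_co2(ppmv) / CO2_PER_C

    print(ppmv_to_pg_co2(383.0))           # ~2995 Pg CO2
    print(ppmv_to_pg_c(383.0))             # ~816 Pg C
    print(ppmv_to_pg_c(383.0) / 383.0)     # ~2.13 Pg C per ppmv (the "2.12")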

May 12, 2016 7:26 am

Have been posting this and similar comments to various articles on Facebook, MSN news, etc. Using Facebook since HuffPo kicked me off their site. Facebook just suspended my account. I had to prove I wasn't a spambot and agree to play nicey-nicey. Guess I'm afflicting the comfortable.

Reply to  Nicholas Schroeder
May 12, 2016 3:10 pm

Follow up
FB “So you have been blocked.”
“If you’re temporarily blocked from sending messages, it may be because you sent a lot of messages recently or your messages have been marked as unwelcome.
This block is temporary (thru June 11), and you can still use other Facebook features to connect with your Facebook friends while you’re in this block. Once your block is over, please only send messages to people you know personally. Make sure to use your authentic name and picture to help the people you’re messaging recognize you.
To learn more about our policies, please review the Facebook Community Standards.
Note: If you’re blocked from sending messages, you may also be temporarily blocked from sending friend requests.”
Don’t publish articles and ask for comment. Don’t make your E-mail or affiliations public.
If you don’t want to play the game, stay off the court.
“If the freedom of speech is taken away then dumb and silent we may be led, like sheep to the slaughter.” George Washington

May 12, 2016 9:33 am

Frank Bosse, thank you for a good, and well-focused, essay.

Reply to  matthewrmarler
May 12, 2016 11:12 am

It was my pleasure!