Briffa embraces tree ring uncertainty in new study, gives 'message to the paleoclimate community'

From the UNIVERSITY OF OTAGO and the “one tree doesn’t work anymore” department:

Scots Pine tree rings

Uncertainties in tree-ring-based climate reconstructions probed

Current approaches to reconstructing past climate from tree-ring data need to be improved so that they better account for uncertainty, new research led out of New Zealand’s University of Otago suggests.

Tree growth rings are commonly used as climate proxies because they can be well-dated and the width of each ring is influenced by the climatic conditions of the year it grew in.

In a paper appearing in the Journal of the American Statistical Association, statistics and tree ring researchers from Otago, the US and UK examined the statistical methods and procedures commonly used to reconstruct historic climate variables from tree-ring data.

The research was led by Dr Matthew Schofield of Otago’s Department of Mathematics and Statistics. His co-authors on the paper are departmental colleague Professor Richard Barker, Professor Andrew Gelman of Columbia University, Director of the Tree Ring Lab at Columbia Professor Ed Cook, and Emeritus Professor Keith Briffa of the University of East Anglia, UK.

Dr Schofield says that their approach was to explore two areas where currently used approaches may not adequately account for these uncertainties. The first involves the pre-processing of tree-ring data to remove factors believed to be largely unrelated to climate effects on tree growth. Such factors include tree age, as the older a tree gets the narrower its rings tend to grow.

“This is convenient to do and the resulting tree-ring ‘chronologies’ are treated as relating to only the climate variables of interest. However, it assumes perfect removal of the non-climatic effects from the tree-ring data and ignores any uncertainty in removing this information,” Dr Schofield says.
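To make the standardization point concrete, here is a minimal hypothetical sketch (synthetic data and made-up parameters, not the authors’ code or data): an age-related growth curve is fitted and divided out to form a “chronology”, and the uncertainty in that fitted curve is simply discarded.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic example: ring width = age-related trend * climate effect * noise
years = np.arange(1800, 2001)
age = years - years[0] + 1
age_trend = 1.5 * np.exp(-age / 80.0) + 0.5            # biological growth decline
climate = 1.0 + 0.1 * rng.standard_normal(len(years))  # the signal of interest
rings = age_trend * climate * rng.lognormal(0.0, 0.05, len(years))

# The usual two-step recipe: fit a negative-exponential growth curve and
# divide it out; the ratio is treated as the climate-only "chronology".
def negexp(t, a, b, c):
    return a * np.exp(-t / b) + c

params, cov = curve_fit(negexp, age, rings, p0=(1.0, 80.0, 0.5))
chronology = rings / negexp(age, *params)

# The fitted curve is itself uncertain, but that uncertainty (held in `cov`)
# never propagates into the downstream temperature reconstruction.
print("fitted growth-curve parameters:", np.round(params, 3))
print("parameter std errors (dropped downstream):", np.round(np.sqrt(np.diag(cov)), 3))
```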

Scots Pines (Pinus sylvestris)

The second area of uncertainty the researchers studied involves the particular modelling assumptions used in order to reconstruct climate from tree rings. Many of the assumptions are default choices, often chosen for convenience or manageability.

“This has made it difficult to evaluate how sensitive reconstructions are to alternate modelling assumptions,” he says.

To test this sensitivity, the researchers developed a unified statistical modelling approach using Bayesian inference that simultaneously accounts for non-climatic and climatic variability.

The team reconstructed summer temperature in Northern Sweden between 1496 and 1912 from ring measurements of 121 Scots Pine trees.

They found that competing models fit the Scots Pine data equally well but still led to substantially different predictions of historical temperature due to the differing assumptions underlying each model.

While the periods of relatively warmer and cooler temperatures were robust between models, the magnitude of the resulting temperatures was highly dependent on the model being used.

This suggests that there is less certainty than implied by a reconstruction developed using any one set of assumptions.
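To see how equally good calibration fits can still diverge before the instrumental period, here is a small hypothetical sketch (entirely synthetic data; this is the generic two-step recipe with two different detrending choices, not the authors’ Bayesian model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 400 years of "true" temperature, an instrumental record
# covering only the last 100 years, and one proxy series driven by both
# temperature and a slow non-climatic growth trend.
n = 400
true_temp = 10 + 0.8 * np.sin(np.arange(n) / 30.0) + 0.4 * rng.standard_normal(n)
growth_trend = np.linspace(1.6, 0.9, n)                 # slow biological decline
proxy = growth_trend * (0.5 + 0.05 * true_temp) + 0.03 * rng.standard_normal(n)
cal = slice(300, 400)                                   # calibration window

def reconstruct(chronology):
    """Regress temperature on the chronology over the calibration window,
    then apply that fit to the full record."""
    slope, intercept = np.polyfit(chronology[cal], true_temp[cal], 1)
    fitted = slope * chronology[cal] + intercept
    rmse = np.sqrt(np.mean((fitted - true_temp[cal]) ** 2))
    return slope * chronology + intercept, rmse

years = np.arange(n)

# Detrending assumption A: divide out a straight-line growth curve.
trend_a = np.polyval(np.polyfit(years, proxy, 1), years)
rec_a, rmse_a = reconstruct(proxy / trend_a)

# Detrending assumption B: divide out a more flexible (quartic) growth curve.
trend_b = np.polyval(np.polyfit(years, proxy, 4), years)
rec_b, rmse_b = reconstruct(proxy / trend_b)

print(f"calibration RMSE:  A = {rmse_a:.3f}   B = {rmse_b:.3f}")
print(f"mean reconstructed temperature, years 0-299:  "
      f"A = {rec_a[:300].mean():.2f}   B = {rec_b[:300].mean():.2f}")
# The two calibration fits are typically comparable, while the pre-instrumental
# reconstructions can differ, because the flexible curve absorbs part of the
# slow climate variation before calibration ever sees it.
```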

###

Since the press release didn’t link to the paper or give the title, I’ve located it and reproduced it below.

A Model-Based Approach to Climate Reconstruction Using Tree-Ring Data

http://www.tandfonline.com/doi/abs/10.1080/01621459.2015.1110524

Abstract

Quantifying long-term historical climate is fundamental to understanding recent climate change. Most instrumentally recorded climate data are only available for the past 200 years, so proxy observations from natural archives are often considered. We describe a model-based approach to reconstructing climate defined in terms of raw tree-ring measurement data that simultaneously accounts for non-climatic and climatic variability. In this approach we specify a joint model for the tree-ring data and climate variable that we fit using Bayesian inference. We consider a range of prior densities and compare the modeling approach to current methodology using an example case of Scots pine from Torneträsk, Sweden to reconstruct growing season temperature. We describe how current approaches translate into particular model assumptions. We explore how changes to various components in the model-based approach affect the resulting reconstruction. We show that minor changes in model specification can have little effect on model fit but lead to large changes in the predictions. In particular, the periods of relatively warmer and cooler temperatures are robust between models, but the magnitude of the resulting temperatures are highly model dependent. Such sensitivity may not be apparent with traditional approaches because the underlying statistical model is often hidden or poorly described.

The full paper is here (open access; my local backup: Schofield-cook-briffa-modeling-treering-uncertainty (PDF)) and the supplementary information is here: uasa_a_1110524_sm7673 (PDF)

At the end of the paper they say this, which is worth reading:

Message for the paleoclimate community

We have demonstrated model-based approaches for tree-ring based reconstructions that are able to incorporate the assumptions of traditional approaches as special cases. The modeling framework allows us to relax assumptions long used out of necessity, giving flexibility to our model choices. Using the Scots pine data from Torneträsk we show how modeling choices matter. Alternative models fitting the data equally well can lead to substantially different predictions. These results do not necessarily mean that existing reconstructions are incorrect. If the assumptions underlying the reconstruction is a close approximation of reality, the resulting prediction and associated uncertainty will likely be appropriate (up to the problems associated with the two-step procedures used). However, if we are unsure whether the assumptions are correct and there are other assumptions equally plausible a-priori, we will have unrecognized uncertainty in the predictions. We believe that such uncertainty should be acknowledged when using standardized data and default models. As an example consider the predictions from model mb ts con for Abisko, Sweden. If we believe the assumptions underlying model mb ts con then there is a 95% probability that summer mean temperature in 1599 was between 8.1 °C and 12.0 °C as suggested by the central credible interval (Figure 4(a)). However, if we adopt the assumptions underlying model mb ts spl pl we would believe that the summer mean temperature in 1599 may have been much colder than 8.1 °C with a 95% credible interval between 4.1 °C and 7.8 °C. In practice, unless the data are able to discriminate between these assumptions (which they were not able to do here as shown in Section 6), there is more uncertainty about the summer mean temperature in 1599 than that found in any one model considered. We believe that such model uncertainty needs to be recognized by the community as an important source of uncertainty associated with predictions of historical climate. The use of default methods makes evaluation of such uncertainty difficult.

Figure 4: Predictions when making different assumptions in the model-based approaches. In the diagonal plots, the black lines are the median of the posterior distribution for x^mis_t and the gray areas are 95% credible intervals. The off-diagonal plots give the difference between the predictions in the corresponding diagonal plots. In all plots the horizontal dashed line is at 4 °C and the horizontal dotted line is at 0 °C.
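The 1599 example quoted above can be turned into a rough back-of-the-envelope combination. Treating each model’s posterior as approximately normal and weighting the two models equally (purely an illustration; the paper does not combine its models this way), the overall uncertainty is wider than either single-model interval:

```python
import numpy as np

# 95% credible intervals for summer mean temperature in 1599 under two models
# (numbers taken from the passage quoted above).
intervals = {"mb ts con": (8.1, 12.0), "mb ts spl pl": (4.1, 7.8)}

rng = np.random.default_rng(2)
draws = []
for lo, hi in intervals.values():
    mu = 0.5 * (lo + hi)
    sigma = (hi - lo) / (2 * 1.96)      # half-width of a 95% normal interval
    draws.append(rng.normal(mu, sigma, 100_000))
mixture = np.concatenate(draws)         # equal-weight mixture of the two posteriors

print("combined 95% interval:", np.round(np.percentile(mixture, [2.5, 97.5]), 1))
# Roughly 4.4 to 11.7 °C -- wider than either model's own interval, which is
# the "unrecognized uncertainty" the authors are warning about.
```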

 

 

186 Comments
Ian Magness
January 27, 2016 8:04 am

This is all just nonsense – there are far too many unquantifiable variables (eg water supply, disease, invertebrate attack etc etc) affecting tree growth other than temperature.
Worse still, the people submitting these papers must know that – so how can they still publish what is, to all intents and purposes, analysis of no value whatsoever? The “models” are no more or less than a farce.

CaligulaJones
Reply to  Ian Magness
January 27, 2016 9:24 am

Well, they use many, many, MANY trees from all over to smooth out the variables….
Stop laughing, it could happen…

Mike
Reply to  Ian Magness
January 27, 2016 9:29 am

I agree. It seems that the exercise here is to inch towards admitting tree rings are not reliable temperature proxies, but with lots of handwaving and bluster about “Bayesian” BS about NON-climate variables, rather than addressing the key issue that other CLIMATE VARIABLES affect tree-ring widths and that most previous studies were hopelessly flawed.
It should be remembered that it was not Briffa that truncated the ‘inconvenient’ truth from his proxy, he published what the data showed. It was Mann and Jones that were playing “hide the decline”.

Jeff Alberts
Reply to  Mike
January 27, 2016 10:24 pm

Ed Cook, one of the authors of this paper, already admitted as much in the CRU Emails. He said it pretty emphatically, in fact, using his golf words. He basically said we know nothing from tree rings about climate variability greater than 100 years ago.

Jeff Alberts
Reply to  Mike
January 27, 2016 10:26 pm

It should be remembered that it was not Briffa that truncated the ‘inconvenient’ truth from his proxy, he published what the data showed. It was Mann and Jones that were playing “hide the decline”.

Briffa did nothing to stop them. He also presented a completely dishonest temperature series using the Yamal cores where only ONE tree out of 12 showed any kind of hockey stick. His end result essentially tossed the other 11 and emphasized the one. If he publicly admits his dishonesty, perhaps I’ll have a little respect for him.

Reply to  Mike
January 28, 2016 7:05 am

Jeff A,
Correctomundo, compadre:

ferd berple
Reply to  Ian Magness
January 27, 2016 9:49 am

Preprocessing the tree rings (calibration) removes information from the “Confusion Matrix” required to evaluate the underlying ability of tree rings to serve as a proxy for temperature.
http://i.stack.imgur.com/ysM0Z.png
Accuracy (ACC) = (Σ true positives + Σ true negatives) / (Σ total population)
True calibration, as performed for true thermometers, requires that ALL non-temperature variables be controlled. Paleoclimatologists cannot control rainfall, soil conditions, sunlight, etc.
As a result they cannot separate false positives and negatives from true positives and negatives, and thus cannot calculate the accuracy of tree rings as a proxy for temperature.

ferd berple
Reply to  ferd berple
January 27, 2016 10:05 am

Case in point, the infamous hockey stick. This was built by removing all tree ring samples that did not correlate with modern thermometers during a specific period. This removed true negatives and false negatives from the confusion matrix, leaving zeros in those columns. The remaining population, which was composed of true positives and false positives, was then mistakenly believed to be all true positives.
As a result, this made the accuracy falsely appear to be 100%:
hockey stick (incorrect) accuracy = (true positives + false positives) / (true positives + false positives)
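To make the arithmetic explicit, here is a tiny hypothetical illustration (made-up counts, not figures from any actual study) of how screening out the negatives makes the apparent accuracy collapse to 100% while the real fraction of temperature-sensitive series can be much lower:

```python
# Hypothetical counts for 100 candidate tree-ring series (illustration only):
# tp = temperature-driven series that also correlate in the calibration window
# fp = series that correlate in the calibration window by chance
# tn = non-temperature series correctly screened out
# fn = temperature-driven series that happen to fail the screen
tp, fp, tn, fn = 20, 15, 50, 15

accuracy_full = (tp + tn) / (tp + fp + tn + fn)   # 0.70 over the full population

# Screening keeps only the series that pass (tp and fp); tn and fn vanish
# from the bookkeeping. If the survivors are then assumed to all be true
# positives, the apparent accuracy is 1.0 by construction:
apparent_accuracy = (tp + fp) / (tp + fp)         # always 1.0
true_share = tp / (tp + fp)                       # ~0.57 here: what is real

print(accuracy_full, apparent_accuracy, round(true_share, 2))
```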

AZ1971
Reply to  ferd berple
January 27, 2016 10:05 am

And this is precisely why dendrochronology is pseudoscience. How and why legitimate scientists ever accepted Mann’s paper as “fact” is beyond me — and should be for anyone with basic understanding and critical thinking skills.

Duster
Reply to  ferd berple
January 27, 2016 11:03 am

Dendrochronology is not a “pseudo science,” Mann and Jones’ misapplication of a useful scientific tool – more than one actually, since they also misuse statistics – is the pseudoscience. It would help if folks would actually learn just what the word refers to. All dendrochronology can do is tell you when something happened, not why. That is the error and strawman that much of the “hockeystick” argument is based on. Mann et al. assumed they knew “why” tree growth varies, then proceeded from there. Then they discovered that the MWP was visible in tree rings and that led to Mann’s famous statement regarding getting rid of the MWP. Now at least Briffa is admitting that the factors that influence tree growth are more complex than simply temperature.

richardscourtney
Reply to  ferd berple
January 27, 2016 11:03 am

AZ1971:
Dendrochronology is a scientific practice that works and is a very accurate method for dating samples. The above essay does not mention it.
Dendrothermology is a pseudoscientific practice that does not work but has been adopted by ‘climate science’ as being an indicator of past temperatures. The above essay pertains to it.
It is important to not confuse the two very different practices.
Richard

richardscourtney
Reply to  ferd berple
January 27, 2016 11:31 am

ferd berple:
Actually, it is worse than you say.
Trees were selected because their recent ring thicknesses correlated with temperatures in a recent time period called the ‘calibration range’. It was assumed that the ring thickness of the selected trees are indicators of past temperatures. Therefore, ring thicknesses of those trees were used as indicators of temperatures prior to the calibration range.
However, there is no reason to accept the assumption. If the variations in ring thicknesses are random then some trees would provide ring thicknesses that correlate to temperature in a ‘calibration period’ so would be selected as temperature indicators. But different trees would be selected if a different ‘calibration period’ were used. In this case of random variations to ring thickness, the indications of past temperatures would be wrong but how wrong would be a random result.
And the real situation is worse than the described random case because it is not reasonable to assume that temperature has so large an effect that it overwhelms all the other variables which affect growth of tree rings. Several variables (e.g. territory marking by animals, growth or death of nearby light-obscuring plants, etc.) may provide temporary alterations to growth of rings which overwhelm the effect of temperature. In this – probably the real – case, the indications of past temperatures would be wrong and how they are wrong would depend on how local circumstances changed.
The fact that dendrochronology is demonstrated to work as a local dating method is clear evidence that local variables do provide temporary alterations to growth of tree rings which overwhelm the effect of temperature.

Simply, the fact that dendrochronology works is clear evidence that dendrothermology does not work.

Richard
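richardscourtney’s random-selection argument can be checked numerically. Below is a hypothetical simulation (random walks standing in for trees, with no temperature signal in them at all): screening on calibration-window correlation still retains plenty of series, but the survivors show no skill in an earlier verification window.

```python
import numpy as np

rng = np.random.default_rng(3)

n_trees, n_years = 1000, 200
temp = rng.standard_normal(n_years).cumsum()     # a red-noise "temperature" series
trees = rng.standard_normal((n_trees, n_years)).cumsum(axis=1)  # random walks with
                                                 # no temperature signal whatsoever

cal = slice(150, 200)                            # calibration window
ver = slice(100, 150)                            # earlier verification window

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

cal_r = np.array([corr(t[cal], temp[cal]) for t in trees])
selected = trees[cal_r > 0.5]                    # series that "pass calibration"

ver_r = np.array([corr(t[ver], temp[ver]) for t in selected])
print(f"{len(selected)} of {n_trees} random walks pass the screen")
print(f"mean verification-window correlation with temperature: {ver_r.mean():.2f}")
# Typically a sizeable number pass, yet their verification-window correlations
# scatter around zero: passing a calibration screen does not, by itself,
# demonstrate any ability to record temperature.
```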

Ben of Houston
Reply to  ferd berple
January 27, 2016 12:01 pm

I’ll agree with Duster. Using tree rings as a measurement of growing seasons is wonderful. The issue comes when you misapply the data to say that it’s only concerning temperature. Mann’s habit of throwing out those that don’t calibrate during the calibration period is simply compounding these problems with the sharpshooter’s fallacy.
It’s the same thing as genetics. Yes, they can say that you are related to someone else, but it doesn’t show your relationship outside of extremely narrow circumstances. Take the Jefferson case. They proved that the descendants of Sally Hemings were Jeffersons through the Y chromosome, but there’s no way to prove with genetics which Jefferson out of the many males of the family was the father (well, given the other info, it’s unlikely, but no analogy’s perfect).

rogerknights
Reply to  ferd berple
January 27, 2016 2:17 pm

“… Mann’s famous statement regarding getting rid of the MWP”
It wasn’t Mann’s. It’s suspected to be Overpeck’s.

expat
Reply to  ferd berple
January 27, 2016 6:19 pm

For most plants temperature is not nearly as important as sunlight and snowfall/rainfall. Past a certain point increased temps either have no impact or can even slow growth.

simple-touriste
Reply to  ferd berple
January 30, 2016 4:23 pm

“How and why legitimate scientists ever accepted Mann’s paper as “fact” is beyond me”
Science nationalism?
You got to love “science” (= academia), or stop using technology!

Li D
Reply to  Ian Magness
January 28, 2016 5:07 am

Ian Magness.
Before ya go shooting off at the mouth, and alluding to some conspiratorial crap, why don’t ya think about other possibilities. Maybe like people who do science do actually know the ins and outs of their niche, have no agenda other than truth, and do the best they can to high standards set by others.
Don’t stub ya toe on any isotope ratios in your haste to indulge in conspiracies eh.

MarkW
Reply to  Li D
January 28, 2016 6:36 am

Li D.
Before ya go shooting off at the mouth,
perhaps you should consider the possibility that those of us here have been examining the so called data for years and have reached the conclusion that there is no science behind it.

timg56
Reply to  Ian Magness
January 28, 2016 1:19 pm

I would not go so far as to say it is a farce. I believe the concept is viable enough to justify research in the field. And this represents how science should happen. One thinks studying tree rings might act as a temperature proxy, if they can screen out the other (primary) factors contributing to tree growth. Results show some correlation, but the data set is not robust and other issues arise. It is not the fault of dendro researchers if people like Mann misuse their research. As part of the ongoing research, they look at the uncertainty issue and report that while tree rings still show some correlation, what can be derived from the results varies considerably with the underlying assumptions used to tease out the information.
Unless I misunderstand what this paper is reporting, isn’t that the way it is supposed to work?
Or said another way, quit with knee jerk responses.

Robert
Reply to  Ian Magness
January 29, 2016 5:28 pm

Yes. Based on observation and my kid’s 8th-grade science project in dendrochronology, water (including soil moisture from snow melt) regulates growth in temperate climates, not temperature.

bh2
Reply to  Ian Magness
January 30, 2016 9:06 pm

Because it pays well. And what’s to stop them?

marlene
January 27, 2016 8:07 am

LOL. Must have been all those pre-historic coal mines and tractor trailers….

January 27, 2016 8:10 am

How often does it have to be pointed out to these clowns that tree rings depend more on other factors, like rainfall, than on temperature.

Reply to  jbenton2013
January 27, 2016 8:16 am

and beyond summer temps and rainfall: seasonal cloud cover, late spring freezes, early fall frosts, winter snowpack cover retention into the spring affecting soil moisture….

CaligulaJones
Reply to  Joel O’Bryan
January 27, 2016 9:27 am

And they aren’t very good at measuring anything where they don’t grow: oceans, arctic, mountains, deserts…and they only measure when they grow (i.e., spring and summer for most of them). During the day.
So, for a proxy that measures how some plants grow sometimes in a (relatively) small part of the planet, they are PERFECT.

MarkW
Reply to  Joel O’Bryan
January 27, 2016 9:34 am

They also are no good at measuring anything when the trees themselves aren’t growing. IE, when they are dormant.

urederra
Reply to  jbenton2013
January 27, 2016 10:44 am

And now that there is more CO2 in the atmosphere, they grow thicker because of that.

January 27, 2016 8:12 am

Any scientist who claims that tree rings can make reliable paleo-thermometers is simply lying. Junk science in action, but we knew that from the “hide the decline” Nature trick. But with so many climate researchers’ reputations at stake and so much grant money to be gained from politically convenient findings, the lying will continue.

CaligulaJones
Reply to  Joel O’Bryan
January 27, 2016 9:29 am

I really wish I could find the article in a major history magazine where a warmunist professor admits that they are, at best, accurate to within 2 degrees F. He states that he didn’t want to even admit that for fear that this, um, FACT, would be used for nefarious purposes by those he calls “deniers”.
As Cheryl Wheeler says, “don’t deal me aces if you don’t want me to play them”…

January 27, 2016 8:16 am

If you don’t like the data, just do what the early warmists did with the other set of Russian tree ring data that didn’t support the HIGHpothesis… throw it out.

emsnews
January 27, 2016 8:27 am

Tree rings are all about mainly RAIN amounts, not heat. If it is warm and wet, the trees grow. If it is HOT AND DRY the rings are small.
Using rings to tell temperature is crazy. Droughts, by the way, can easily come with cold. Very cold winters, for example, have less snow than warm winters and the trees in northern climes grow less in those years if the summer is dry, too. If the summer is wet, the rings are big.
And I do know a lot about this, I own and harvest my own forest and spend a lot of time examining trees that we fell.

Reply to  emsnews
January 27, 2016 10:22 am

And if it is HOT AND WET the rings are?

ferd berple
Reply to  garyh845
January 27, 2016 2:16 pm

If it is HOT AND WET
===============
excited

Reply to  garyh845
January 27, 2016 3:12 pm

If it’s HOT AND WET, you get wood.
Sorry, couldn’t resist.

MRW
Reply to  garyh845
January 28, 2016 3:45 pm

Brian Epps, I’m stealing that one, and I’m not crediting you. ROTFLMAO.

Mark from the Midwest
January 27, 2016 8:28 am

Notice that most of the people who are debating the nuance of tree-ring data are statisticians or in the various environmental and climatic studies (not sciences). You don’t see well-regarded botanists involved in any of this nonsense.

emsnews
Reply to  Mark from the Midwest
January 27, 2016 8:32 am

Even more infuriating! If one wishes to know about drought cycles, tree rings are fantastic. The earliest tree ring studies that I am aware of are from Arizona, when botanists there noticed there was a tremendously long drought that led to the downfall of the farming natives in Arizona and New Mexico leaving abandoned ruins. Comparing tree rings from the dwellings with tree rings from trees that continued growing after the cliff dwellings were abandoned, it showed a very long drought.
This all was years and years and years ago, nearly 100, when science was eager to be realistic.

jorgekafkazar
Reply to  Mark from the Midwest
January 27, 2016 1:52 pm

“Any science with an adjective in front of it isn’t Science.” –attributed to Feynman
I guess that’s even more true of “studies.” I recall reading the curriculum for one college’s early “Environmental Studies” program. There were no extradepartmental science courses at all in the curriculum, though electives in actual science were an option.
After all these years, why is there still no “Studies Studies” program?

MarkW
Reply to  jorgekafkazar
January 27, 2016 2:31 pm

exciting science
boring science

Keith Willshaw
Reply to  Mark from the Midwest
January 28, 2016 12:21 am

Actually they do, in fact the whole science of dendrochronology is based on work done by botanists who first noticed the difference in tree rings from year to year and set out to understand why this was so. Julius Ratzeburg was the well-known German botanist who studied patterns of growth and was able to measure forest health through preceding centuries. Modern botanists still use these techniques to better understand the factors that produce healthy tree growth and patterns of disease.
That said, what these professionals understand is that tree ring patterns give a general indication of which were good and bad years; figuring out WHY this was so is a different matter. As this paper points out rather obliquely, the default assumption by most climate scientists is that this is purely a function of temperature, whereas other factors such as drought and insect infestations can be more important.

timg56
Reply to  Mark from the Midwest
January 28, 2016 1:34 pm

Mark,
I’ve been involved in science education for >20 years. Mostly aquatic and terrestrial ecology outdoors in the Pacific NW. Yes, it is true that if you ask any forester or tree biologist what the primary factors affecting tree growth are, they will tell you that water, soil nutrients and sunshine all have far more impact than temperature. However that does not exclude the possibility of trying to find examples where temperature could be a significant factor.
Personally, while I don’t think it is rubbish, I highly doubt tree rings can provide any information accurate enough to proxy thermometers. They might work for trending – possibly identifying warming and cooling, but not what they have been used for. This latest paper appears to confirm that.

Steve (Paris)
January 27, 2016 8:36 am

But the science is settled.

Hugs
Reply to  Steve (Paris)
January 27, 2016 9:05 am

Sure. Image: Pinus sylvestris L.
Anthony or Mods may fix.

Russell
January 27, 2016 8:36 am

If you don’t like the hypothesis, throw it out. Go to the cartoon; it starts at the 26-minute mark: https://www.youtube.com/watch?v=vRe9z32NZHY

mebbe
January 27, 2016 8:36 am

Getting real picky, here, but the image is actually one of a slash-sawn pine plank, rather than a transect of the stem revealing the concentric growth rings. The tree is Pinus sylvestris.

Christopher Paino
January 27, 2016 8:41 am

In the music world, analog recordings pretty much have a continuous stream of data, with no missing bits. One continuous stream. When analog music is transferred, or recorded directly in the digital realm, the stream of data is no longer continuous, but has tiny pieces of information that are missing because the digital stream is not, and can never be, continuous like an analog stream. The higher resolution you go when digitally recording, the smaller the missing bits are, but there will always be missing bits.
I make this unscientific, completely lay analogy because I feel that in an analog world, digital modeling, no matter how high the resolution, there will always be missing bits in the model that can, and will eventually (given enough time, anything that can happen, will), have an effect that can’t be accounted for in the digital models.
I probably explained my thoughts terribly, but that’s how I roll! I just have this nagging thing in the back of my mind that there is something that digital modeling can never ever capture, and it makes putting trust in models very dangerous and not very smart in my book.

paqyfelyc
Reply to  Christopher Paino
January 27, 2016 10:14 am

bad analogy.
we have a mathematical theorem stating that there is NO loss of information if the sampling rate is at least double the highest frequency present in the analog recording.
The trouble with models is much more in what they ADD than in what they fail to account for.
Pretty much like a loudspeaker can add its own noise atop the sound it is supposed to transmit.
And when you think nobody heard that sound, nobody can compare the loudspeaker sound to the real one …
i wonder if we could use tree rings to hear the sound of the forest they grew in … Just joking
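For reference, the theorem being invoked is the Nyquist–Shannon sampling theorem. A small numerical sketch (a hypothetical two-tone signal, nothing to do with tree rings; it assumes SciPy is available) shows that sampling above twice the highest frequency loses essentially nothing:

```python
import numpy as np
from scipy.signal import resample

# Band-limited test signal: components at 1 kHz and 3 kHz over a 10 ms window
# (both complete whole cycles in the window, so FFT-based resampling is clean).
duration = 0.01

def signal(t):
    return np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)

fs = 8000                                  # sample rate, comfortably above 2 x 3 kHz
n = int(fs * duration)                     # 80 samples
samples = signal(np.arange(n) / fs)

# Reconstruct the waveform on a 10x finer grid using only those 80 samples.
n_fine = 800
upsampled = resample(samples, n_fine)
truth = signal(np.arange(n_fine) * duration / n_fine)

print(f"max reconstruction error: {np.max(np.abs(upsampled - truth)):.1e}")
# Down at floating-point round-off: once the sampling rate exceeds twice the
# highest frequency present, the samples determine the band-limited waveform;
# real converters add their own noise, but that is hardware, not sampling.
```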

Christopher Paino
Reply to  paqyfelyc
January 27, 2016 10:45 am

I’m not sure what that mathematical theorem refers to, but there is always loss of information when going from analog to digital. It is intrinsic to the system. Analog sound is continuous. Digital sound, even though it might sound like it and have a high bitrate, can never be continuous. No matter how small the gap between the numbers, there is always a gap. Not so in the analog world. Maybe I’m just not understanding what you mean.

Christopher Paino
Reply to  paqyfelyc
January 27, 2016 10:58 am

I went lookin’ and think you’re talking about the Nyquist–Shannon sampling theorem, and I get that. While that probably allows for “perfect” fidelity, it still does not mean that there isn’t missing information. A digital signal just can’t capture all the information because, well, it’s digital and there is always a gap between digits no matter how small they are.
When comparing high definition digital audio to vinyl, I don’t necessarily hear the difference, but feel it. It’s like the difference between a Stradivarius and not-a-Stradivarius. The devil is in the details. The nuances. Teeny-tiny but very important, and in digital recording, they are lost. How sensitive an individual is to them, is another story.

Raven
Reply to  paqyfelyc
January 27, 2016 10:43 pm

The difficulty here is comparing the S/N ratio of music to that of tree rings. Music, whether digital or analogue will sound “pretty much” the same in this context because it’s orders of magnitude higher.
If we extracted the tree ring signal from the noise, would we tap our feet to it?
More to the point, would the tree ring signal with ‘temp. noise’ extracted sound the same as the tree ring signal with ‘rainfall noise’ extracted?
Or perhaps, if we laid down a percussion track behind just the noise value, would it also sound like music. 😉

D. J. Hawkins
Reply to  paqyfelyc
January 28, 2016 9:56 am

@Christopher
Don’t forget, there’s analog, and then there’s analog. If I hook up my Dad’s ancient tube amplifier to the turntable, do I get “true” sound out of the speakers? Suppose I have cheap speakers, and they can’t follow the highs and lows in the source material? This is a form of signal loss, not unlike what may happen when an audio source is digitized. This is why digital recordings for your listening pleasure happen at 44K per channel; it’s about twice the highest frequency the human ear can resolve. Assuming perfect reproduction from medium to speaker, you could not tell the difference between a 44K sampling rate and an 88K sampling rate, the intricacies of harmonics being ignored for the moment.

Walt D.
January 27, 2016 8:43 am

The problem with this type of proxy is that it is local, yet it is used as a proxy for global temperatures even though there are no samples in the oceans or even the Amazon rain forest.
Proxies also seem to be cherry picked. The growing of Spanish grape vines in Britain in Roman and Medieval times is a proxy that shows that the climate was warm enough for these grapes to grow. However, people come up with other proxies that show that the Medieval Warm Period did not happen.
Either way, there is always a big problem when you create a time series using data of different accuracy.

Duncan
Reply to  Walt D.
January 27, 2016 9:33 am

Agreed. In my experience with tree rings, when the tree is young the rings are large (fast growth), but as the tree approaches maturity the rings become tighter. Alternatively, if the tree is growing in the shadows (forest) of other larger trees, early growth will be slow until the tree is large enough to break through the canopy, then growth speeds up with more sunlight. This among many other variables. Ultimately proxies become fashionable only when the outcome fits the narrative. If it does not fit the narrative, obviously it cannot be a good proxy. It also helps to use proxies that can have an infinite number of corrections done to them based on the author’s ‘scientific opinion’. Hence we get tree rings.

MarkW
Reply to  Duncan
January 27, 2016 9:39 am

Alternatively, if a nearby tree falls, then the increased sunlight will cause an increase in growth.
Without knowing the history of a particular tree, all reconstructions are little more than guess work.

PiperPaul
Reply to  Duncan
January 27, 2016 11:23 am

all reconstructions are little more than guess work
But good enough for some climate “scientists”, evidently.

timg56
Reply to  Duncan
January 28, 2016 1:39 pm

“all reconstructions are little more than guess work.”
Isn’t most science little more than guess work? Followed by tedious testing of your guesses? As I noted above, this paper looks like it fits into the testing part of the process.

Tom Halla
January 27, 2016 9:07 am

It does seem rather deceptive to graph a mean rather than a band, i.e. a single line rather than a band showing the uncertainty interval.

January 27, 2016 9:08 am

Isn’t the point of the article that there is so much uncertainty within and between sites (and data sets) that any definitive temperature measurement makes little sense. As I read it, they have just destroyed the hockey stick or at the least dramatically increased the uncertainty range of any particular proxy.
All the other issues people have mentioned are still true, but even in a best case scenario that these other factors are minimal the measurement uncertainties are huge and may not be uniform through time.

Reply to  bernie1815
January 27, 2016 10:10 am

Yes.

AZ1971
Reply to  bernie1815
January 27, 2016 10:17 am

Of course they destroyed Mann’s hockey stick, if for no other reason than they used ten times the data of blue oaks from the Yamal Peninsula that Mann did. More data = more accuracy.
That the range “in 1599 was between 8.1°C and 12.0°C as suggested by the central credible interval” is more than four times the “measured” global temperature increase since 1850 of 0.85°C. Wow indeed. And here we are, about to be bludgeoned over the head by NOAA telling us just how hot 2015 was, and why we need to destroy capitalism, impose draconian carbon taxes, and ensure all of humanity lives like medieval paupers to save the planet.

Bill Partin
Reply to  AZ1971
January 28, 2016 2:07 am

….”ensure all of humanity lives like medieval paupers to save the planet.”
Except the Elites, of course.

Ellen
January 27, 2016 9:14 am

I hear that the higher CO2 concentration these days is goosing tree growth. This is a factor, if neglected in tree ring measurements, that can lead to a hockey stick …

RACookPE1978
Editor
Reply to  Ellen
January 27, 2016 10:14 am

Ellen

I hear that the higher CO2 concentration these days is goosing tree growth. This is a factor, if neglected in tree ring measurements, that can lead to a hockey stick …

Yes, your observation is correct.
NO ONE in the academic-CAGW tree-thermometer industry (they manufacture white academic papers from green government research grants) has ever published their CO2 correction factors for tree growth since the 1940-1975 cooling period (when CO2 rose measurably but global average temperatures declined) and the 1976-1998 warming period (when global average temperatures rose fractionally while CO2 levels rose significantly).
Unlike all other external environmental effects that affect tree thermometer results (changes in nearby sunshine from falling or growing trees, soil chemistry and soil moisture, fires, insects, drought, snow or wind, etc.), CO2 at least is a common growth spurt across all areas of the world, and has changed significantly across the instrument record. (Regardless of WHY CO2 has risen across the temperature record, you cannot claim it has not risen.)
Further, CO2 IS a specific and common influence that does not vary from season to season like weather or rainfall, nor from tree to tree in a single area, but varies in each forest area only across long decades.

indefatigablefrog
January 27, 2016 9:24 am

Claim – “The examination of tree rings from former time allows us to quantify temperature variation in former times.”
Test 1 – “Does the same methodology when applied to trees in the modern era give a good correspondence to the known temperatures at that site, measured via modern instrumentation”.
Result from test 1 – NEGATIVE – no, it mysteriously stopped working in the modern era.
Test 2 – “Do the results from tree based reconstructions correspond to patterns of climate history documented widely in written historical records”.
Result from test 2 – NEGATIVE – no they seem to tell us that there was no significant climate variation during periods when historical records of that time appear to indicate quite notable variation.
Houston, we have a problem…Does anybody have a means of testing this methodology which would not lead a reasonable person to be skeptical of whether it has a significant advantage over the skilled interpretation of tarot cards?
And as for Briffa he has been discussing the “limitations” and need for improvement for 20 years.
As seen in this 1998 paper Which concludes:
“While this clearly represents a problem in interpretation it also provides lucrative opportunities for disentangling different tea-leaf signals”
(Apologies, “lucrative” should read “challenging” and I seem to have accidentally typed “tea leaf” instead of “tree growth”. My bad.)
Well, as we could have all said back in 1998 – let the disentangling commence…
http://eas8001.eas.gatech.edu/papers/Briffa_et_al_PTRS_98.pdf

paqyfelyc
Reply to  indefatigablefrog
January 27, 2016 10:14 am

+1

January 27, 2016 9:27 am

As soon as I read about their modeling, the study became a septic tank.

KRM
Reply to  kokoda
January 27, 2016 10:48 am

Well you should have kept on reading. This is a statistical critique of previous interpretations of tree rings and it found results varied depending on the assumptions and the methods used. More of this sort of investigation should be welcomed.

Brandon Gates
Reply to  KRM
January 27, 2016 12:11 pm

+1

ferd berple
Reply to  KRM
January 27, 2016 2:42 pm

results varied depending on the assumptions and the methods used.
====================
thus one can vary the assumptions and methods until the desired results are obtained.
in other words, you can “prove” anything with tree rings. up is down, black is white, hot makes cold, cold makes hot.
which is precisely what makes dendrothermology so popular in climate science.

timg56
Reply to  KRM
January 28, 2016 1:46 pm

ferd,
Still, shouldn’t the primary takeaway from this paper be that they are finally doing it right? As soon as they stated that results showed considerable variability depending on underlying assumptions, they basically confirmed your point.
I honestly don’t think dendro researchers need to jump up and shout, “OMG, we were retarded to think we could get thermometer quality data from tree rings.” We all can think that, but just acknowledging the considerable issues associated with the field is pretty good.

Brian Jones
January 27, 2016 9:30 am

I would like someone/anyone to actually take a tree from an area that we have good historical temperature data for and do some analysis to see if any of this makes sense for, say, 200 years. If and only if it does should we try to extrapolate over centuries.

MarkW
January 27, 2016 9:33 am

“They found that competing models fit the Scots Pine data equally well but still led to substantially different predictions of historical temperature due to the differing assumptions underlying each model.”
That alone should have been enough to invalidate the use of tree rings for climate reconstruction.
If multiple “assumptions” manage to fit historical data equally well, then there is no reason to select one set of assumptions over another.
Ergo, until you can reach the point where only one set of assumptions fits your data, you are just hand waving.

Russell
January 27, 2016 9:42 am

This is what the Climate Scientist Mann et al are doing: https://www.youtube.com/watch?v=1c8XLJ9MEhk

January 27, 2016 9:44 am

Harvard’s President Lowell supposedly said in 1909 that statistics “… Like veal pies, are good if you know the person that made them, and are sure of the ingredients.” True here. We know the persons are cooking away the MWP and LIA. And are sure treemometer ingredients are off. This new paper is the equivalent of statistical food poisoning.
Or, as Twain said in his Chapters from my Autobiography (1906), “there are three kinds of lies: lies, damned lies, and statistics.” He could not have imagined the fourth we now have, Mannian statistics.

HAS
Reply to  ristvan
January 27, 2016 12:17 pm

ristvan, this is a lot better contribution to the literature than many I’ve seen. It basically starts to apply standard statistical analysis to the problem at hand. If you read it closely you will find it is not only saying the techniques used historically are wrong (and deals with a number of the issues raised recently at BH when discussing Wilson et al.), it also says that even this more general approach to modelling temp from tree rings fails standard statistical tests using out of sample predictions (the ones that count). They don’t do the analysis, but eyeballing Table 5 it looks as though the models have any real ability to discriminate temps, apart from estimating the broad average across the instrumental period.

HAS
Reply to  HAS
January 27, 2016 12:18 pm

sorry “don’t have any real ability …”

Reply to  HAS
January 27, 2016 2:30 pm

I am typing challenged also. To your comment. 1. As you note, they did not do the out of sample analysis. They could have. 2. They said their paper does not mean previous studies were wrong. But if multiple stat models fit equally well, and give different answers, at least they should have said ‘are not validated by our analysis’. Strong form of uncertainty, not weak form. Veal pies and all that.

HAS
Reply to  HAS
January 27, 2016 3:02 pm

ristvan, they do do an out of sample analysis and that shows basically that the 95% confidence limits for any predicted temps covers the full range of actual temps over the instrumental period. Therefore the models are of doubtful value. When I said “they don’t do the analysis” I meant they present the results of the out of sample test, but don’t draw the conclusion.
On the current models they are simply being careful and saying that they can’t reject the current approach using the broader approach. This however begs the question of how other investigators selection of techniques would fare.

Manfred
January 27, 2016 9:52 am

Another superb model to exploit for research funding ad nauseam.

Steve
January 27, 2016 10:14 am

Funny how all this works out….If my memory serves me correctly (which it usually doesn’t), both Briffa and Cook had expressed “dismay” as to Mann’s shenanigans in some of the climate-gate e-mails. Took awhile, but it appears they are distancing themselves from the splice and dice Mannian methods. Of course, following academic protocol, they can’t confront this directly by using Manns work as an example of where things can and did go wrong, but I clearly get the inference. While this paper seems somewhat courageous considering the point they are making, there is of course the cowardly obligatory escape clause..
“These results do not necessarily mean that existing reconstructions are incorrect.”
If their paper doesn’t impact, or give any insight into previous dendro work, why even write a paper on the subject. On the other hand, if this study gives us some unique insight into possible increases in the uncertainty of some important, heavily cited and relied upon dendro work well, just come out with it guys and tell us when and where this happened. Then we can understand the impact of what you are talking about. I’ve seen this movie before, and this is obviously at the point before they went to see the Wizard. Only after they see the “MANN behind the curtain” does the lion get his courage back.
Why do only the crappy scientists have a pair of balls ?

MarkW
Reply to  Steve
January 27, 2016 2:37 pm

They are correct that this doesn’t prove that Mann’s paper is wrong. However it does move Mann’s paper back into the unproven category.
There are multiple “models” that fit the data. Mann chose the one that pleased him the most. It may have been the right choice, it may have not been. Without more data any choice is no better than random guessing. So if there are 3 models that fit, then each model has a 1/3rd chance of being right. Of course we also don’t really know how many models would fit the data since there are an infinite number of possible models and only so much time to test each one.

ferd berple
Reply to  Steve
January 27, 2016 2:46 pm

“These results do not necessarily mean that existing reconstructions are incorrect.”
==============
they could be correct by accident.

Robert B
Reply to  ferd berple
January 29, 2016 12:12 am

“That is not only not right, it is not even wrong.” Attributed to Wolfgang Pauli

Chic Bowdrie
January 27, 2016 10:44 am

“Alternative models fitting the data equally well can lead to substantially different predictions.”
I think this is the most significant sentence in the whole article. Phenomenological/statistical models only correlate and do not indicate causation.
OTOH, the article’s emphasis on uncertainties is a positive step forward for climate science.

Brandon Gates
Reply to  Chic Bowdrie
January 27, 2016 12:25 pm

Chic Bowdrie,

OTOH, the article’s emphasis on uncertainties is a positive step forward for climate science.

I’d say it’s certainly an important step for the field of dendroclimatology, and allow that it might even be overdue. I don’t agree that climatology has a general problem with over-certainty.

MarkW
Reply to  Brandon Gates
January 27, 2016 2:38 pm

” I don’t agree that climatology has a general problem with over-certainty.”
You’re the only one.

Chic Bowdrie
Reply to  Brandon Gates
January 27, 2016 3:33 pm

Hi, Brandon.
At the outset, I’m going to admit no expertise in uncertainty measurements. But I’m happy to learn. How certain are you that GISS and Hadcrut accurately measure an average global temperatures? How certain are you that satellites don’t more accurately reflect global temperature based on troposphere measurements?
Example #1 of climate science over-certainty comes from an 18 Jan 2016 paper by Gleckler et al. in Nature Climate Change. The paper and Seth Borenstein’s review of it were discussed by David Middleton here at WUWT: http://wattsupwiththat.com/2016/01/19/seth-borenstein-man-made-heat-put-in-oceans-has-doubled-since-1997/ . Basically they claim 150 zettajoules of energy absorbed by the oceans prior to this century and another 150 zettajoules of “man-made heat has been buried in the oceans in the past 150 years,” according to Borenstein. Interestingly this exact bit of hyperbole was predicted by Bob Tisdale back in 2010 (see the reference on Figure 6 in the Middleton post). How can sparse measurements of the vast ocean accurately predict a warming of 0.2K and how is 90% of it claimed to be man-made with a straight face?
Example #2 of climate science over-certainty comes from an older paper by GL Stephens et al. 2013 where they estimate the energy imbalance as 0.6 +/- 0.4 W/m2. How do they get that accuracy and/or precision based on numbers for outgoing radiation reported as 100.0 +/- 2 for SW and 239.7 +/- 3.3 for LW? This is to balance the incoming radiation of 340.2 +/- 0.1 W/m2. I call that being over certain.
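For what it is worth, a back-of-the-envelope check of example #2 (this is only naive propagation, not how Stephens et al. actually derived their figure): if the three flux uncertainties were independent and combined in quadrature, the imbalance uncertainty would be

$$\sigma_{\text{imbalance}} \approx \sqrt{0.1^2 + 2^2 + 3.3^2}\ \mathrm{W\,m^{-2}} \approx 3.9\ \mathrm{W\,m^{-2}},$$

roughly ten times the quoted ±0.4 W/m²; the tighter figure presumably reflects an independent constraint (such as ocean heat uptake) rather than a direct difference of the satellite fluxes.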
The main reason I think climatology over does certainty comes from IPCC language like 95% certain, etc. and estimates of climate sensitivity which are all over the place vis-a-vis all the claims humans are primarily responsible for global warming without definitive evidence to back them up.

Brandon Gates
Reply to  Brandon Gates
January 27, 2016 8:02 pm

Chic Bowdrie,

At the outset, I’m going to admit no expertise in uncertainty measurements. But I’m happy to learn.

This is why I like talking to you. I’m not expert either, hence I doubt I have much to teach you. I think I’m much better at pointing out flaws in arguments, and unfortunately that’s what I mostly do in this post.

How certain are you that GISS and Hadcrut accurately measure an average global temperatures? How certain are you that satellites don’t more accurately reflect global temperature based on troposphere measurements?

The short answer is that I have more confidence in the surface record than the satellites. Long answer is long, but contains why I think so.
I have no other way of quantifying this than by appealing to what the data providers themselves have to say about their own products. This is a little difficult because both RSS and UAH quantify uncertainty in terms of the linear trend over their entire recordset, whereas Hadley CRU and GISS only give it in terms of mean monthly or annual uncertainty. Hadley provides an error estimate for every month and year, whereas AFAIK GISS does not — they give us a range from past to present, with the older data being less uncertain than recent data.
Both Hadley and RSS do something nobody else does; they generate multiple realizations of their temperature estimates and then average them to produce an ensemble mean estimate as a final product. Such an ensemble method provides ways of estimating uncertainty and comparing them to each other in an apples to apples way that can’t be done with data from any other provider. Kevin Cowtan took the ensembles from RSS TLT and HADCRUT4 and did such an analysis. He estimated that the RSS TLT trend uncertainty between 1979-2012 was +/-0.015 K/decade, whereas for HADCRUT4 it’s +/-0.003 K/decade at the 95% confidence level, or about 5 times less uncertain.
I wouldn’t say that I’m 5 times more confident in HADCRUT4 than RSS TLT. I can say that, since the central estimate for HADCRUT4 lies within the 95% CI of RSS TLT and HADCRUT4 shows the higher trend, the RSS TLT central trend estimate is likely lower than it should be, with HADCRUT4 only possibly being higher than it should be … instead of the other way ’round.
References for Dr. Cowtan’s analysis begin with my comments on this recent thread here at WUWT: http://wattsupwiththat.com/2016/01/18/monday-mirth-old-reliable/#comment-2122673
My confidence is very low that any atmospheric temperature record tells us much about how much absorbed solar energy is being retained/lost by the system as the oceans are by far the largest heat sink in system. If I had to pick one single indicator already available to us, it would be the upper two kilometers of the oceans:
http://climexp.knmi.nl/data/itemp2000_global.png
Question then becomes how uncertain are those estimates?
Something to keep in mind about me here, I don’t constrain myself to using the single “best” metric from the single “best” data source. No non-trivial scientific finding I’m aware of (think the Standard Model of physics, theory of plate tectonics, etc.) has ever relied on a single line of evidence to establish its claims. Climate science is no different, and relies on consilience of evidence to make stronger conclusions than could be made by appealing to only the “best” one.

How can sparse measurements of the vast ocean accurately predict a warming of 0.2K and how is 90% of it claimed to be man-made with a straight face?

Sparse measurements have been the bane of human knowledge since we developed the ability to actually take them. Obviously such measurements cannot be taken with zero uncertainty, so we developed statistical methods for estimating it. The simplest rule anyone who’s taken basic stats knows is that the standard error of the mean varies as the ratio of the sample standard deviation over the square root of the total number of observations. It gets more complex when we’re making observations across both time and two- or three-dimensional space, but again these are not problems unique to climate. The same difficulties apply to mobile satellites, radiosondes and ocean buoys used for 3D estimates over time as they do to surface-based thermometers which mostly sit in the same spot for years on end and are used for 2D estimates over time.
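Writing out the rule just cited (a standard result, not specific to any climate dataset):

$$\mathrm{SEM} = \frac{s}{\sqrt{n}}$$

so, for example, 1,000 readings with a sample standard deviation of 2 °C constrain the mean to about ±0.06 °C (one standard error), even though each individual reading is far noisier.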
You’re right to be skeptical, but to be credibly skeptical you need to be consistent in your skepticism. I frankly don’t think you are, because basically you’ve just called into question ANY scientific conclusion based on sparse estimates and/or wonky instrumentation — which all field instruments are to some degree.

Example #2 of climate science over-certainty comes from an older paper by GL Stephens et al. 2013 where they estimate energy imbalance as 0.6 +/- 0.4 W/m2. How do they get that accuracy and/or precision based on numbers for outgoing radiation reported as 100.0 +/- 2 for SW and 239.7 +/- 3.3 for LW? This is to balance the incoming 340.2 +/- 0.1 W/m2 radiation coming in.

What does the paper itself say?

I call that being over certain.

Your opinion is noted. Now tell me why I should hold your opinion in higher esteem than that of domain experts who have actually done the work?
For me the credibility for the research comes from the very fact that they list the identifiable sources of uncertainty, the various assumptions they make to derive their estimates, and allow for possible other sources of uncertainty they haven’t been able to identify. I can scan the list of references and/or read some of them and note that other methods done by other teams arrived at similar conclusions. I can do my own analysis. For instance I get an imbalance of 0.69 W/m2 from 1948-present using the NCEP/NCAR reanalysis data for latent heat, sensible heat, long- and short-wave fluxes at the surface, which is well inside their +/- 0.4 W/m2 uncertainty estimate.
Here you’re also making the implicit argument that 0.4 W/m2 is a teensy number compared to the magnitudes of the fluxes involved, 340.2 +/- 0.1 W/m2 for incoming SW, 100.0 +/- 2 for outgoing SW and 239.7 +/- 3.3 for outgoing LW. I don’t see why that should be such an unreasonable argument on the face of it, why shouldn’t a well-designed modern detector be able to return an accurate reading from 0 to several thousands of W/m2 at a near-constant precision across that entire range? For sake of consistency, are you not also claiming that orbiting microwave sounding units have enough absolute accuracy and precision to derive tropospheric temperature to their stated level of precision? How is it that RSS and UAH uncertainty claims are credible, but Stephens et al. 2013 claims which rely on orbiting instruments are not?
Finally let’s look at something here: 0.6 +/- 0.4 W/m2
Note that the estimated uncertainty is 66% of the central estimate. I don’t know where the point of ridiculously over-certain begins by this somewhat dubious method, but I daresay it’s a smaller number than 2/3rds.

The main reason I think climatology over does certainty comes from IPCC language like 95% certain, etc. and estimates of climate sensitivity which are all over the place vis-a-vis all the claims humans are primarily responsible for global warming without definitive evidence to back them up.

What do the IPCC say they’re 95% certain of?
Isn’t the fact that IPCC-published estimates of climate sensitivity to CO2 range from 1.5-4.5 K/2xCO2 a contra-indicator of over-certainty? Measurement error propagates, right? Models don’t agree with each other. Many climate sensitivity estimates come from paleo evidence with huge error bars, some of which, as this very OP points out, are likely underestimated. Isn’t it the honest thing for the IPCC to do to publish a range large enough to drive a city bus through sideways if that’s what the sum total of the original research indicates as the range of estimates?
Isn’t “definitive” evidence the same as saying evidence with an estimated uncertainty of zero? Isn’t such a small error estimate something you are claiming is ridiculous?

Brandon Gates
Reply to  Brandon Gates
January 27, 2016 8:05 pm

MarkW,

You’re the only one.

And here I was thinking proper “skeptics” don’t do consensus.


MarkW
Reply to  Brandon Gates
January 28, 2016 6:39 am

Brandon, in your case I’m willing to make an exception.

Chic Bowdrie
Reply to  Brandon Gates
January 28, 2016 8:08 am

Brandon,
“Hadley provides an error estimate for every month and year, whereas AFAIK GISS does not — they give us a range from past to present, with the older data being less uncertain than recent data.”
Older data being less uncertain than recent data? That says to me there’s something terribly wrong with GISS data. Are there less measurements today? Is the instrumentation less accurate today?
“Kevin Cowtan … estimated that the RSS TLT trend uncertainty between 1979-2012 was +/-0.015 K/decade, whereas the for HADCRUT4 it’s +/-0.003 K/decade at the 95% confidence level, or about 5 times less uncertain.”
Trend uncertainty? Seems to me trend analysis is a function of all the data points, not the accuracy of the individual points. IOW, if Hadcrut measured less extreme values compared to the RSS values relative to the true values, wouldn’t the trend stats show less error for the Hadcrut trend even though the RSS trend was more accurate? The fact that one trend is within the confidence limits of the other says nothing about the certainty of the individual data points. However, this does illustrate how to overdo certainty statements.
Your justification for the validity of sparse measurements is weak. Not all scientific conclusions are based on sparse estimates and/or wonky instrumentation. You can’t go to the moon using sparse measurements and wonky instrumentation. The certainty of a scientific conclusion falls as the measurements get sparser and the instrumentation gets wonkier.
“What does the [Stephens et al.] paper itself say?”
How did you calculate your 0.69 W/m2 imbalance? Their incoming radiation was very precise: +/- 0.1 W/m2. But their outgoing radiation is the sum of two quantities whose errors add up to more than 10 times the error reported for the total imbalance. How do you get +/- 0.4 from numbers known only to +/- 2 and +/- 3? Based on their numbers, the imbalance could have been -5 W/m2.
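For what it’s worth, a back-of-the-envelope check in Python of the point about the quoted flux errors, assuming the three uncertainties are independent and combine in quadrature:

import math

# Uncertainties quoted upthread for the Stephens et al. fluxes, in W/m2
u_incoming_sw, u_outgoing_sw, u_outgoing_lw = 0.1, 2.0, 3.3

# Imbalance = incoming SW - outgoing SW - outgoing LW, so independent errors add in quadrature
u_difference = math.sqrt(u_incoming_sw**2 + u_outgoing_sw**2 + u_outgoing_lw**2)
print(round(u_difference, 2))   # ~3.86 W/m2, roughly ten times the stated +/- 0.4

So the +/- 0.4 on the imbalance cannot come from differencing those three flux terms; it has to rest on an independent constraint (the paper’s ocean heating rate, as I read it), which is the crux of the disagreement here.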
“How is it that RSS and UAH uncertainty claims are credible, but Stephens et al. 2013 claims which rely on orbiting instruments are not?”
I did not make any evaluation of the RSS/UAH uncertainty claims. Nevertheless, I expect a measurement tied to a specific location and a short time interval to be more precise than data points averaged over wide areas and longer time intervals.
“What do the IPCC say they’re 95% certain of?”
Each IPCC report has increased the level of confidence that humans are responsible for more than half of global warming. Yet there has been no refinement of the estimates of CO2 sensitivity, and climate models overestimate global temperatures. If sensitivity is low, how can humans be blamed for the warming? That’s over-certainty.
“Isn’t “definitive” evidence the same as saying evidence with an estimated uncertainty of zero?”
No. Definitive evidence is data that shows an incremental increase in CO2 has a certain influence on global temperature and that humans have a specific contribution to that CO2 increase.

timg56
Reply to  Brandon Gates
January 28, 2016 1:52 pm

Then you haven’t been paying attention Brandon.
Or you are simply using your blind eye so you don’t have to admit to anything.

Chic Bowdrie
Reply to  Brandon Gates
January 28, 2016 1:53 pm

Brandon,

Both Hadley and RSS do something nobody else does; they generate multiple realizations of their temperature estimates and then average them to produce an ensemble mean estimate as a final product. Such an ensemble method provides ways of estimating uncertainty and comparing them to each other in an apples to apples way that can’t be done with data from any other provider.

I took a look at your SKS link to the post by Kevin Cowtan. He does a good job of explaining the difference between statistical and structural uncertainty. Unfortunately, in his comparison of the RSS and HadCRUT4 ensembles, he doesn’t control for statistical uncertainty. It looks like RSS data has more statistical uncertainty, or in Cowtan’s terms, more wiggles. Do the wiggles reflect less accurate data? Not necessarily.
I would be more impressed with an analysis that not only controlled for the statistical uncertainty, but used ensembles produced using the same paradigm. Otherwise, the analysis is still comparing the proverbial apples and oranges. Obviously it’s problematic when the target measurements are different, but the criteria Mears et al. used to produce ensembles may have exaggerated the degree of wiggling compared to that used by Morice et al.
An explanation of the ensemble process may have tempered your debate with richardscourtney on the Monday mirth WUWT post you referred to earlier. Even Dr. Cowtan pointed out in Figure 3 that the HadCRUT4 ensembles omit some sources of uncertainty. It is a bit disingenuous to leave out that last sentence of the caption on the figures you posted.
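As a schematic of the ensemble step being discussed, here is a Python sketch with entirely made-up member series standing in for the Mears et al. and Morice et al. realizations: fit a trend to each member and quote the spread of those trends as the ensemble, or structural, part of the trend uncertainty.

import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1979, 2013)

# Made-up "observed" annual anomaly record and 100 perturbed realizations of it;
# placeholders only, not real RSS or HadCRUT4 ensemble members.
base = 0.015 * (years - years[0]) + rng.normal(0.0, 0.08, years.size)
members = base + rng.normal(0.0, 0.03, (100, years.size))

trends = np.array([np.polyfit(years, m, 1)[0] for m in members])
print(trends.mean() * 10, trends.std() * 10)   # ensemble-mean trend and its spread, K/decade

Whether the member-generating recipes of the two groups produce comparable spreads is exactly the apples-and-oranges question raised above.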

Brandon Gates
Reply to  Brandon Gates
January 28, 2016 5:26 pm

Chic,

I took a look at your SKS link to the post by Kevin Cowtan. He does a good job of explaining the difference between statistical and structural uncertainty.

Oh good, saves me from trying to explain that, which I was having some difficulty doing.

Unfortunately, in his comparison of the RSS and HadCRUT4 ensembles, he doesn’t control for statistical uncertainty. It looks like RSS data has more statistical uncertainty, or in Cowtan’s terms, more wiggles. Do the wiggles reflect less accurate data? Not necessarily.

Exactly correct according to my understanding. We actually expect the troposphere to vary more than the surface, which we’d reasonably expect to make any tropospheric measurement by any means “look” more uncertain than they actually are. Perhaps one way to control for statistical uncertainty in that case would be to take the ratio of the error estimates against the variability.
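A crude Python sketch of that normalization, using the trend uncertainties quoted upthread; the monthly-variability figures are hypothetical placeholders, since the real detrended standard deviations would have to be computed from each dataset:

# Trend uncertainties quoted earlier in the thread (Cowtan's estimates), K/decade
trend_unc = {"RSS TLT": 0.015, "HadCRUT4": 0.003}

# Hypothetical standard deviations of detrended monthly anomalies, K (placeholders only)
monthly_sd = {"RSS TLT": 0.20, "HadCRUT4": 0.12}

for name in trend_unc:
    print(name, round(trend_unc[name] / monthly_sd[name], 3), "per decade, per K of monthly scatter")

The ratio is only meant to show the idea; whether it is the right normalization is the open question.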

I would be more impressed with an analysis that not only controlled for the statistical uncertainty, but used ensembles produced using the same paradigm. Otherwise, the analysis is still comparing the proverbial apples and oranges.

We’re kind of doing that anyway, which is something that gives me a persistent headache when discussing surface v. satellites. If I were a PhD candidate in stats, I’d put this on the list of thesis ideas. It does irk me that there isn’t a “standard” way to compare the uncertainty of these data products.

Obviously it’s problematic when the target measurements are different, but the criteria Mears et al. used to produce ensembles may have exaggerated the degree of wiggling compared to that used by Morice et al.

That wasn’t clear to me at all, perhaps you can explain further.

An explanation of the ensemble process may have tempered to your debate with richardscourtney on the Monday mirth WUWT post you referred to earlier.

Doubtful … well no, not doubtful, I did try that. I don’t think he argues in good faith, the feeling is obviously mutual, and that was a very typical exchange between us.

Even Dr. Cowtan pointed out in Figure 3 that the HadCRUT4 ensembles omit some sources of uncertainty. It is a bit disingenuous to leave out that last sentence of the caption on the figures you posted.

He made those corrections to the captions well after my first post, not exactly sure when. I did pick up that the MET had informed him of his oversight, and updated the Monday Mirth thread on WUWT with the results of his calculations, which amounted to a 7% increase in uncertainty for HADCRUT4, no change to RSS.
I don’t know how he could have updated the actual plot in Figure 3 since the published Hadley ensemble members don’t include all the uncertainty, which is annoying.
—— break ——

Each IPCC report increased the level of confidence that humans are responsible for more than half of global warming. Yet there has been no refinement on the estimates of CO2 sensitivity and climate models overestimate global temperatures.

Where the 50% attribution line falls on the distribution depends on the shape of the distribution, not just the upper and lower bounds. All that’s required for that 50% line to move is for the bulk of estimates to move away from the tails. I credit the IPCC for not tossing the outliers just so they could tighten up the range.

If sensitivity is low, how can humans be blamed for the warming? That’s over-certainty.

There are two tails on that distribution. Something else to keep in mind; CO2 sensitivity is a function of climate sensitivity to ANY external forcing, natural or not. Error propagates, and any uncertainty in that estimate flows through everything.

Chic Bowdrie
Reply to  Brandon Gates
January 28, 2016 9:11 pm

Brandon,

That wasn’t clear to me at all, perhaps you can explain further [the effect of ensemble production on trend uncertainty].

Mears et al. and Morice et al. produced ensembles representative of a set of data points for a given day of global temperatures. This may or may not be a correct statement as I’m not familiar with this methodology. Presumably they didn’t produce the ensembles specifically for the purpose of comparing surface vs. satellite measurements. Had that been the case, could criteria for producing ensembles have been standardized to prevent the statistical uncertainty from either measurement biasing the trend’s structural uncertainty? Again, I don’t know the methodology so I don’t even know if that makes sense.

Where the 50% attribution line falls on the distribution depends on the shape of the distribution, not just the upper and lower bounds. All that’s required for that 50% line to move is for the bulk of estimates to move away from the tails.

I don’t see what this has to do with the fact that there has been no zeroing in on how much CO2 affects global temperatures or the degree to which humans have to do with it.

Something else to keep in mind; CO2 sensitivity is a function of climate sensitivity to ANY external forcing, natural or not. Error propagates, and any uncertainty in that estimate flows through everything.

I don’t get this either, but it seems to have something to do with uncertainty being a lot more than climate scientists want to admit.

Chic Bowdrie
Reply to  Brandon Gates
January 29, 2016 7:45 am

Brandon,
One more thing:

We actually expect the troposphere to vary more than the surface, which we’d reasonably expect to make any tropospheric measurement by any means “look” more uncertain than they actually are.

There is a problem with that sentence’s syntax. Also, who is we and why do you expect the troposphere to vary more than the surface? Please define what variability you are referring to. For example, I would expect a similar diurnal temperature swing at the surface compared to the lower troposphere and both to have a larger swing compared to the mid tropopause at a specific location.
Another source of variability is from the number of measurements at a specific location. For the average temperature at a specific location on any given day, I would expect the standard deviation of the measurements to decrease with an increasing number of measurements. Do satellites take more measurements than surface thermometers?
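On that last point, the shrinking spread of a daily average with more readings is just the standard error of the mean. A quick Python check, assuming (unrealistically) independent readings with a hypothetical 1.5 K spread:

import numpy as np

rng = np.random.default_rng(2)
sigma = 1.5                                   # K, hypothetical spread of individual readings
for n in (2, 24, 144):                        # min/max only, hourly, every ten minutes
    daily_means = rng.normal(15.0, sigma, size=(20000, n)).mean(axis=1)
    print(n, round(daily_means.std(), 3), round(sigma / np.sqrt(n), 3))   # simulated vs sigma/sqrt(n)

Real readings through a day are neither independent nor identically distributed, because of the diurnal cycle, so the sigma-over-root-n improvement is only a rough upper bound on what extra sampling buys.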

RACookPE1978
Editor
Reply to  Chic Bowdrie
January 29, 2016 8:27 am

Chic Bowdrie

There is a problem with that sentence’s syntax. Also, who is we and why do you expect the troposphere to vary more than the surface? Please define what variability you are referring to.

Absent a cold front or storm, what is the actual daily swing of atmospheric temperatures at
1. Sea level
2. At 1000 meters? (Above daily changes of much of the earth surface)
3. At 5000 meters? (Above almost all of the earth’s surface)
4. At 10,000 meters?
5. At 20,000 meters?
6. At 30,000 meters?
When a cold front comes through, how high up are the storm front temperatures measured? (What are the bottom, middle and top altitudes of common “storm clouds” – not thunderstorms?)

Chic Bowdrie
Reply to  Brandon Gates
January 29, 2016 10:07 am

RACookPE1978,
Assuming your questions aren’t rhetorical, I can only answer based on typical atmosphere temperature profiles which will vary depending on latitude. In general, the magnitude of the daily swing decreases from sea level to the tropopause (about 10 km) and increases at higher altitudes.
I have no idea how temperatures are measured during storms. If that was my job, I’d be glad they don’t occur too often.

Brandon Gates
Reply to  Brandon Gates
January 29, 2016 4:49 pm

Chic Bowdrie,

Mears et al. and Morice et al. produced ensembles representative of a set of data points for a given day of global temperatures. This may or may not be a correct statement as I’m not familiar with this methodology.

I think it’s probably close enough to true for purposes of discussing multi-decadal trends.

Presumably they didn’t produce the ensembles specifically for the purpose of comparing surface vs. satellite measurements.

Probably also a true enough assumption for our discussion.

Had that been the case, could criteria for producing ensembles have been standardized to prevent the statistical uncertainty from either measurement biasing the trend’s structural uncertainty? Again, I don’t know the methodology so I don’t even know if that makes sense.

It makes sense. Since I don’t have expertise in these methodologies, it’s beyond my present capability to answer it definitively. Way I see it, I have only two objective choices:
1) accept them both as reasonable estimates of uncertainty and consider them comparable
2) reject both of them as reasonable estimates of uncertainty and not make the comparison

I don’t see what this has to do with the fact that there has been no zeroing in on how much CO2 affects global temperatures or the degree to which humans have to do with it.

Some review is in order, my fault for not doing it sooner. In AR4, the IPCC only considered anthropogenic GHG forcings in their attribution statement:
Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.
In AR5, the IPCC added “other anthropogenic forcings”:
It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together. The best estimate of the human-induced contribution to warming is similar to the observed warming over this period.
So we have an apples and oranges problem here.

I don’t get this either, but it seems to have something to do with uncertainty being a lot more than climate scientists want to admit.

Odd point to make since I’m getting it directly from them, Knutti and Hegerl (2008), The equilibrium sensitivity of the Earth’s temperature to radiation changes: http://www.image.ucar.edu/idag/Papers/Knutti_nature08.pdf
They even share your lament right in the abstract:
Various observations favour a [CO2] climate sensitivity value of about 3°C, with a likely range of about 2–4.5°C. However, the physics of the response and uncertainties in forcing lead to fundamental difficulties in ruling out higher values. The quest to determine climate sensitivity has now been going on for decades, with disturbingly little progress in narrowing the large uncertainty range.
An excerpt from the first paragraph:
Climate sensitivity cannot be measured directly, but it can be estimated from comprehensive climate models. It can also be estimated from climate change over the twentieth century or from short-term climate variations such as volcanic eruptions, both of which were observed instrumentally, and from climate changes over the Earth’s history that have been reconstructed from palaeoclimatic data. Many model-simulated aspects of climate change scale approximately linearly with climate sensitivity, which is therefore sometimes seen as the ‘magic number’ of a model. This view is too simplistic and misses many important spatial and temporal aspects of climate change. Nevertheless, climate sensitivity is the largest source of uncertainty in projections of climate change beyond a few decades1–3 and is therefore an important diagnostic in climate modelling4,5.

There is a problem with that sentence’s syntax.

Big time. Try: We expect more variability in the troposphere than at the surface. Therefore, it would be reasonable to expect tropospheric measurements to “look” more uncertain due to their higher variability.

Also, who is we and why do you expect the troposphere to vary more than the surface?

We being humanity by way of scientific research. Why is complex, and my non-expertise is such that a short answer citing the primary causes would not be very good. Tropical convection, lapse rate feedback and latent heat transfer are the things I most commonly see mentioned.

Please define what variability you are referring to.

Monthly, seasonal, annual, interannual, decadal … even interdecadal. Long-term trends are expected to be amplified as well.

For example, I would expect a similar diurnal temperature swing at the surface compared to the lower troposphere and both to have a larger swing compared to the mid tropopause at a specific location.

If you say so. I would actually expect air to change temperature more rapidly than ground or bodies of water for any given change in net energy flux. Nighttime and/or wintertime inversions can create exceptions to those rules. Not that understanding daily cycles isn’t important, but we’ve gotten to the point that we’re talking way more about weather than climate now.

Another source of variability is from the number of measurements at a specific location. For the average temperature at a specific location on any given day, I would expect the standard deviation of the measurements to decrease with an increasing number of measurements. Do satellites take more measurements than surface thermometers?

Dunno. It’s also apples and oranges again. The sats have to combine nadir and limb readings (straight down and sideways) to derive the lower troposphere, which means they’re not looking at exactly the same place at the same time. They’re reading microwave emissions from oxygen molecules through the entire thickness of the atmosphere when they do this, so there’s a volume component to consider as well. It’s not very “pinpoint” in spatial terms. And there is some ground-clutter to remove as well.
OTOH, in temporal terms, a single typical weather station gives us only a daily min/max. They don’t move around as a normal course of operation, though we don’t always know when they’ve moved nor when some idiot put one next to an air-conditioning unit or decided to build a parking lot and not tell someone about it.
We can puzzle through these pros and cons together anecdotally, but I for one cannot weight them quantitatively in my head. For that, I prevail on experts to tell me the estimated uncertainties.

Chic Bowdrie
Reply to  Brandon Gates
January 29, 2016 10:45 pm

Brandon,

Odd point to make since I’m getting it directly from them, Knutti and Hegerl (2008)

It’s clear to me that the whole sensitivity issue is fraught with inconsistencies, bad physics, and specious exaggeration of global warming. If you want to discuss this further, start by reading comments by AlecM, William Astley, and Dr. Happer at this link: https://tallbloke.wordpress.com/2016/01/21/can-a-gaseous-atmosphere-really-radiate-as-black-body/ and this paper: http://www.john-daly.com/forcing/hug-barrett.htm

If you say so. I would actually expect air to change temperature more rapidly than ground or bodies of water for any given change in net energy flux.

This addresses the issue of what variability we’re talking about. As you point out, the daily surface measurement is a high/low average. Right there you’ve introduced a bias. The greater the difference between the high and low (amplitude), the greater the potential for bias. If I interpreted this paper correctly, it confirms my expectations: http://www.arl.noaa.gov/documents/JournalPDFs/SeidelFreeWang.JGR2005.pdf

Not that understanding daily cycles isn’t important, but we’ve gotten to the point that we’re talking way more about weather than climate now.

We’re talking about uncertainty in data measurements and how they are processed. I think it’s relevant, but there’s no need to continue.

We can puzzle through these pros and cons together anecdotally, but I for one cannot weight them quantitatively in my head. For that, I prevail on experts to tell me the estimated uncertainties.

Being a skeptic, I can’t do that. But I know a lot more about what I didn’t know, so thanks for puzzling.

Brandon Gates
Reply to  Brandon Gates
January 30, 2016 5:22 pm

Chic Bowdrie,

It’s clear to me that the whole sensitivity issue is fraught with inconsistencies, bad physics, and specious exaggeration of global warming. If you want to discuss this further, start by reading comments by AlecM, William Astley, and Dr. Happer at this link:

I’d rather focus on one topic at a time, but I will at least read the links. Where I recall you and I leaving the radiative physics question was with the elevator analogy for surface-to-tropopause convection. My final comment to you suggested that you consider that for every up elevator in the system there is one going down.

This addresses the issue of what variability we’re talking about. As you point out, the daily surface measurement is a high/low average. Right there you’ve introduced a bias.

When I think of bias in this context, I think of a non-random error which affects the trend of a central estimate. A min/max reading guarantees that the calculated mean and median value are the same on any given day for any given single instrument, which is not a desirable feature. Then we have TOBS issues, different instrument types not returning the same min/max values due to their response times, siting issues, different enclosure types. Etc. Right? Right.
Compare the satellites. They’re in sun-synchronous polar orbits, designed such that on any one complete orbit they get readings at approximately local noon on the daylight side of the planet and at approximately local midnight on the night side. NOAA-15 has an orbital period of 101.01 minutes, or about 14.25 orbits per mean solar day. The orbits of the other sats don’t guarantee covering the same track at roughly solar noon or midnight for any one location. Their orbits are all decaying and precessing at different rates.
Not only do the sats not give us a daily min/max for a single location on a daily basis, they don’t give us the same set of observations for any location on a daily basis.
Doesn’t make their data junk, does make their data difficult to use because they’re constantly introducing known biases to trends which must be corrected for.
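The orbit figures above are easy to sanity-check in Python; the longitude-spacing number is my own back-of-envelope addition, not something from the comment:

period_minutes = 101.01                     # NOAA-15 orbital period quoted above
orbits_per_day = 24 * 60 / period_minutes
print(round(orbits_per_day, 2))             # ~14.26 orbits per mean solar day

# Successive equator crossings are therefore spaced by roughly this much longitude:
print(round(360 / orbits_per_day, 1))       # ~25.3 degrees per orbit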

The greater the difference between the high and low (amplitude), the greater the potential for bias. If I interpreted this paper correctly, it confirms my expectations:

Ok, here’s the abstract:
Diurnal cycle of upper-air temperature estimated from radiosondes
Dian J. Seidel and Melissa Free
Air Resources Laboratory, NOAA, Silver Spring, Maryland, USA
Junhong Wang
Earth Observing Laboratory, NCAR, Boulder, Colorado, USA
Received 18 October 2004; revised 17 February 2005; accepted 8 March 2005; published 3 May 2005.
[1] This study estimates the amplitude and phase of the climatological diurnal cycle of temperature, from the surface to 10 hPa. The analysis is based on four-times-daily radiosonde data from 53 stations in four regions in the Northern Hemisphere, equatorial soundings from the Tropical Ocean Global Atmosphere/Coupled Ocean Atmosphere Response Experiment, and more recent eight-times-daily radiosonde data from the Atmospheric Radiation Measurement program’s Central Facility in Oklahoma. Our results are in general qualitative agreement with earlier studies, with some quantitative differences, but provide more detail about vertical, seasonal, and geographic variations. The amplitude of the diurnal cycle (half the diurnal temperature range) is largest (1 to 4 K) at the surface. At 850 hPa and above, the regional-average amplitudes are <1 K throughout the troposphere and stratosphere. The amplitude of the diurnal cycle in the boundary layer is larger over land than over ocean, and generally larger in summer than winter (except for monsoon regions, where it is larger in the dry season). In the upper-troposphere and stratosphere, land-sea and seasonal differences are not prominent. The diurnal cycle peaks a few hours after local noon at the surface, a few hours later at 850 hPa, and somewhat earlier in the upper troposphere. The timing of the diurnal cycle peak in the stratosphere is more uncertain. Radiosonde data are also used to simulate deep-layer mean temperatures that would be observed by the satellite-borne microwave sounding unit, and the amplitude and phase of their diurnal cycles are estimated. An evaluation is made of the uncertainty in these results due to the temporal resolution of the sounding data, which is only barely adequate for resolving the first harmonic of the diurnal cycle, the precision of radiosonde temperature data, and potential biases in daytime stratospheric temperature observations.

So my statement …
I would actually expect air to change temperature more rapidly than ground or bodies of water for any given change in net energy flux.
… was dead wrong according to this study, and now I know something I didn’t about diurnal temperature cycles at altitude vs. the surface, for which you have my thanks.
The final two sentences of the abstract bear repeating:
Radiosonde data are also used to simulate deep-layer mean temperatures that would be observed by the satellite-borne microwave sounding unit, and the amplitude and phase of their diurnal cycles are estimated. An evaluation is made of the uncertainty in these results due to the temporal resolution of the sounding data, which is only barely adequate for resolving the first harmonic of the diurnal cycle, the precision of radiosonde temperature data, and potential biases in daytime stratospheric temperature observations.
This paper does not address my statement about what kind of variability I was talking about:
Monthly, seasonal, annual, interannual, decadal … even interdecadal. Long-term trends are expected to be amplified as well.
For that, here’s a start: http://pubs.giss.nasa.gov/abs/sa04100j.html
Santer et al. 2005
Santer, B.D., T.M.L. Wigley, C. Mears, F.J. Wentz, S.A. Klein, D.J. Seidel, K.E. Taylor, P.W. Thorne, M.F. Wehner, P.J. Gleckler, J.S. Boyle, W.D. Collins, K.W. Dixon, C. Doutriaux, M. Free, Q. Fu, J.E. Hansen, G.S. Jones, R. Ruedy, T.R. Karl, J.R. Lanzante, G.A. Meehl, V. Ramaswamy, G. Russell, and G.A. Schmidt, 2005: Amplification of surface temperature trends and variability in the tropical atmosphere. Science, 309, 1551-1556, doi:10.1126/science.1114867.
The month-to-month variability of tropical temperatures is larger in the troposphere than at the Earth’s surface. This amplification behavior is similar in a range of observations and climate model simulations, and is consistent with basic theory. On multi-decadal timescales, tropospheric amplification of surface warming is a robust feature of model simulations, but occurs in only one observational dataset. Other observations show weak or even negative amplification. These results suggest that either different physical mechanisms control amplification processes on monthly and decadal timescales, and models fail to capture such behavior, or (more plausibly) that residual errors in several observational datasets used here affect their representation of long-term trends.

We’re talking about uncertainty in data measurements and how they are processed. I think it’s relevant, but there’s no need to continue.

You make a fair point.

“We can puzzle through these pros and cons together anecdotally, but I for one cannot weight them quantitatively in my head. For that, I prevail on experts to tell me the estimated uncertainties.”
Being a skeptic, I can’t do that.

You’re not the only skeptic in this conversation, Chic. And you do apparently trust expert opinion when it confirms your expectations, just as you wrote when you cited Wang (2005) above. In that respect, I don’t think I work any differently than you do.
Trust in expert analysis and/or opinion need not imply a lack of skepticism, and should not, IMO. Only other options I can see are to not believe anything, or simply make up our own reality — neither of which I’m personally willing to do if I can avoid it. Nature of the beast until we both know all there is to know about this stuff, which I assume will be never.

But I know a lot more about what I didn’t know, so thanks for puzzling.

Any time. You’ve done the same for me as well.

Chic Bowdrie
Reply to  Brandon Gates
January 31, 2016 4:57 am

Brandon,

Where I recall you and I leaving the radiative physics question was with the elevator analogy for surface-to-tropopause convection. My final comment to you suggested that you consider that for every up elevator in the system there is one going down.

Yes, I remember having trouble answering this question of yours from the Lindzen post:

161 W/m^2 at ground level is the global average for absorbed solar according to Trenberth and Kiehl. 396 W/m^2 is the global average upwelling LW from the surface according to same. Please explain the difference?

The difference is 235 W/m2 which is the baseline emission that is reinforced every day by incoming insolation. If the sun never came up again, the 235 W/m2 at the surface and at the top of the atmosphere would gradually drop until a new equilibrium was established due to cooling of the earth’s core.

When I think of bias in this context, I think of a non-random error which affects the trend of a central estimate.

Correct.

A min/max reading guarantees that the calculated mean and median value are the same on any given day for any given single instrument, which is not a desirable feature.

Why do you say not a desirable feature? The mean calculated from a minimum and maximum is not the same as a mean calculated from an average of temperatures taken every hour over a 24 hour period. Check out this data for Ames, Iowa showing the min/max average is usually greater than the average of the hourly measurements: http://mesonet.agron.iastate.edu/onsite/features/cat.php?day=2009-07-31
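A toy Python version of that Ames comparison, with a hypothetical skewed diurnal cycle; with a pure sinusoid the two averages coincide, so any difference is entirely about the shape of the daily curve:

import numpy as np

hours = np.arange(24)
# Hypothetical skewed diurnal cycle: flat 20 C overnight, half-sine daytime bump peaking at 28 C
temps = 20 + 8 * np.maximum(0.0, np.sin(np.pi * (hours - 7) / 10.0))

minmax_mean = (temps.min() + temps.max()) / 2
hourly_mean = temps.mean()
print(minmax_mean, round(hourly_mean, 2))   # 24.0 vs ~22.1: the min/max average runs warm here

Which way the bias runs, and how large it is, depends on the shape of the cycle, hence on season, cloud, and site, which is consistent with the Ames data showing the min/max average usually, but not always, coming out higher.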
With regard to the other variability, monthly, seasonal, etc. I think each has to be considered in the context of the measurements and the experimental objective. Santer et al. paper mentions month-to-month tropical data comparing surface to troposphere. But the surface is thermometers and the troposphere is measured by satellites. So is the month-to-month variability due to the locations or to the instrument differences? The long term trends show the troposphere warms less than the surface, so I don’t really get what the point of the paper is. It certainly doesn’t indicate a preference for either one of the measurement techniques.

You’re not the only skeptic in this conversation, Chic. And you do apparently trust expert opinion when it confirms your expectations, just as you wrote when you cited Wang (2005) above. In that respect, I don’t think I work any differently than you do.

Forgive me for assuming you support the AGW position, as opposed to a Skeptic like me, rather than just being genuinely skeptical. What defines expert? What if an expert is wrong? I don’t consider a scientist, especially a climate scientist, an expert just because they publish. Therefore I don’t know how expert Seidel, Free, and Wang (2005) are. DJ Seidel was a co-author on the Santer et al. paper which I don’t think much of. But I could understand the Seidel et al. paper well enough to know that, right or wrong, it supported the point that I was making.
As long as you’re pursuing scientific truth, then I agree you don’t work any differently than I do.

Chic Bowdrie
Reply to  Brandon Gates
January 31, 2016 11:50 am

Forgive me for assuming you support the AGW position, as opposed to a Skeptic like me, rather than just being genuinely skeptical.

I meant “generally” skeptical.

Brandon Gates
Reply to  Brandon Gates
January 31, 2016 3:45 pm

Chic Bowdrie,

Yes, I remember having trouble answering this question of yours from the Lindzen post:
“161 W/m^2 at ground level is the global average for absorbed solar according to Trenberth and Kiehl. 396 W/m^2 is the global average upwelling LW from the surface according to same. Please explain the difference?”
The difference is 235 W/m2 which is the baseline emission that is reinforced every day by incoming insolation. If the sun never came up again, the 235 W/m2 at the surface and at the top of the atmosphere would gradually drop until a new equilibrium was established due to cooling of the earth’s core.

For reference the post is: http://wattsupwiththat.com/2015/12/26/lindzen-a-recent-exchange-in-the-boston-globe-clearly-illustrated-the-sophistic-nature-of-the-defense-of-global-warming-alarm/#comment-2108990
Your preceding argument was: Also increasing CO2 doesn’t slow down the rate at which the energy is going up. That is determined by how much solar insolation gets on at ground level.
161 W/m^2 – 396 W/m^2 = – 235 W/m2
That’s not a “baseline emission”, that’s a net loss — not from TOA, but from the surface.
Your lead comment in that post was: The way I see it, those six photons get on the next elevator and therefore 94% of the energy is going up by convection.
To which I replied: Keep in mind that for every up elevator in the world there is one which is going down.
These are both accounting problems. You don’t get the correct answer in accounting by only looking at one side of the ledger. I strongly suggest that this is the reason you’re having trouble answering that first question.
Oh, to follow up, I did look at both links you provided last post.
The Tallbloke post refers to this paper: Schmithüsen et al. (2015), How increasing CO2 leads to an increased negative greenhouse effect in Antarctica: http://onlinelibrary.wiley.com/doi/10.1002/2015GL066749/full
Abstract
CO2 is the strongest anthropogenic forcing agent for climate change since preindustrial times. Like other greenhouse gases, CO2 absorbs terrestrial surface radiation and causes emission from the atmosphere to space. As the surface is generally warmer than the atmosphere, the total long-wave emission to space is commonly less than the surface emission. However, this does not hold true for the high elevated areas of central Antarctica. For this region, the emission to space is higher than the surface emission; and the greenhouse effect of CO2 is around zero or even negative, which has not been discussed so far. We investigated this in detail and show that for central Antarctica an increase in CO2 concentration leads to an increased long-wave energy loss to space, which cools the Earth-atmosphere system. These findings for central Antarctica are in contrast to the general warming effect of increasing CO2.

Which I think is an elegant argument, and makes quite a bit of sense. It’s also a really seductive argument because there are a number of startling implications that I shall enjoy pondering after I’ve read the full paper a few times. But I digress. Tallbloke continues:
Right in eqn1 of Schmitthüsen they define the atmosphere as a black body radiator, by:
“the emission of the atmosphere ε(index atm) x σ x T(index atm)4”
Gases are no black body radiators. Never. And never were! Not even “greenhouse gases”.

Which is absolutely true. Tallbloke continues:
So when you do start with a nonsensical assumption, what value does the rest have?
It sometimes helps if one reads the rest:
Equations (1) and (2) do not distinguish between the different greenhouse gases and do not describe the dependency of wavelength and altitude. But the two simple equations allow us to provide some insight into the combined emission of a surface and the atmosphere above as a function of temperature difference and trace gas concentrations. A detailed line-by-line calculation is presented in the next chapter.
True to their word, they do just that with equation (3) being the total GHE of the entire atmosphere integrated across all wavelengths, and (4) being the GHE due to CO2 integrated between 5 and 200 microns. Not black body calculations. Not even gray body calculations, which is what equations (1) and (2) really are.
Now the John Daly post on Hug and Barrett. Skipping to the conclusions …
2. It must be recognized that Kirchhoff’s law applies only to systems in thermal equilibrium.
… is technically correct about how Kirchhoff’s law is sometimes stated, but misleading. Rewinding to the body text, we read:
Applicability of Kirchhoff’s Law
The above proposed mechanism of terrestrial radiation being absorbed by the ‘greenhouse’ gases, producing rotationally and vibrationally excited states that are then mainly degraded to their ground states by the conversion of their excitation energy into the translational energy of colliding molecules of dinitrogen and dioxygen has been criticised as violating Kirchhoff’s law. This law specifies that a good absorber is a good emitter and the IPCC supporters have interpreted this to indicate that they should in equal measure emit the terrestrial radiation absorbed by the greenhouse gases. This is true only for a system in thermal equilibrium.

Emphasis added. The final sentence is absolutely correct, but again misleading, because the statement in bold is NOT how the IPCC interpret Kirchhoff’s law. It is, however, how Ferenc Miskolczi apparently interprets Kirchhoff; Roy Spencer’s treatment may be of interest to you: http://www.drroyspencer.com/2010/08/comments-on-miskolczi%E2%80%99s-2010-controversial-greenhouse-theory/
H&B cite the IPCC TAR to support this argument: 1. Climate Change 2000, The Scientific Basis, TAR Working Group 1 Report, p. 90, Fig. 1.2
We can find that here: https://www.ipcc.ch/ipccreports/tar/wg1/pdf/TAR-01.PDF
Figure 1.2 is the familiar energy budget cartoon from: Kiehl and Trenberth, 1997: Earth’s Annual Global Mean Energy Budget. I find nothing in that figure to support the contention: This law specifies that a good absorber is a good emitter and the IPCC supporters have interpreted this to indicate that they should in equal measure emit the terrestrial radiation absorbed by the greenhouse gases.

Why do you say not a desirable feature? The mean calculated from a minimum and maximum is not the same as a mean calculated from an average of temperatures taken every hour over a 24 hour period.

That’s why a daily min/max for any given fixed location is not a desirable feature. The sats don’t even give you that. What is your response?

With regard to the other variability, monthly, seasonal, etc. I think each has to be considered in the context of the measurements and the experimental objective.

True. My main objective in this discussion is detecting climatically-relevant long-term temperature trends. To me that means on the order of 30 years or greater, the longer the better. For satellite data, we’re limited to 1979-present, which may or may not be sufficient for comparison, but it’s all we have. [1]

Santer et al. paper mentions month-to-month tropical data comparing surface to troposphere. But the surface is thermometers and the troposphere is measured by satellites. So is the month-to-month variability due to the locations or to the instrument differences?

I wouldn’t rule out either, and I think the answer is probably both. Figuring out how much of each is where theory should be able to help, which ultimately means models based on theory and constrained by observation. Don’t forget radiosondes, they’re an important part of the mix. The more data from different sources, the better.

The long term trends show the troposphere warms less than the surface, so I don’t really get what the point of the paper is.

I thought the abstract laid out its case fairly clearly:
The month-to-month variability of tropical temperatures is larger in the troposphere than at the Earth’s surface. This amplification behavior is similar in a range of observations and climate model simulations, and is consistent with basic theory. On multi-decadal timescales, tropospheric amplification of surface warming is a robust feature of model simulations, but occurs in only one observational dataset. Other observations show weak or even negative amplification. These results suggest that either different physical mechanisms control amplification processes on monthly and decadal timescales, and models fail to capture such behavior, or (more plausibly) that residual errors in several observational datasets used here affect their representation of long-term trends.

It certainly doesn’t indicate a preference for either one of the measurement techniques.

I wouldn’t expect them to. The purpose of the paper is comparing surface to upper-air temperature trends in an attempt to demonstrate theory/model-predicted tropospheric amplification. The way I’m reading it, that attempt somewhat succeeded; they see it in one observational dataset, RSS:
On decadal time scales, however, only one observed data set (RSS) shows amplification behavior that is generally consistent with model results. The correspondence between models and observations on monthly and annual time scales does not guarantee that model scaling ratios are valid on decadal time scales. However, given the very basic nature of the physics involved, this high-frequency agreement is suggestive of more general validity of model scaling ratios across a range of time scales.
They didn’t get that result on decadal time scales from UAH, nor from the two radiosonde products, RATPAC and HadAT2. It could be argued that they express an implicit preference for RSS over any of the other upper air data products, but that shouldn’t be taken to mean that they prefer RSS to surface data — that’s not the point of the paper. I think their intent is better understood reading the abstract, and final paragraph of the paper:
We have used basic physical principles as represented in current climate models, for interpreting and evaluating observational data. Our work illustrates that progress toward an improved understanding of the climate system can best be achieved by combined use of observations, theory, and models. The availability of a large range of model and observational surface and atmospheric temperature data sets has been of great benefit to this research, and highlights the dangers inherent in drawing inferences on the agreement between models and observations without adequately accounting for uncertainties in both.
Or as I sometimes put it, all models and observations are always wrong, the questions are by how much and why.
Alluding to what you said above, preference for a measurement technique should be dictated by its estimated reliability and suitability for purpose. Sometimes though, we just have to take what we can get and try to make it work — I saw a lot of that in this paper.

Forgive me for assuming you support the AGW position, as opposed to a Skeptic like me, rather than just being genuinely skeptical.

lol. I thought it was clear that I support the AGW position. I’m balking at the suggestion that my general acceptance of AGW theory means I’m not skeptical of it.

What defines expert?

I use it mostly as a relative term. Someone with a doctorate in physics and several decades of published research under their belt in that field is certainly more expert at physics than I am.

What if an expert is wrong?

What do you mean “if”.

I don’t consider a scientist, especially a climate scientist, an expert just because they publish.

Especially a climate scientist. At least you’re aware that you’ve singled out one particular field from all the others.

Therefore I don’t know how expert Seidel, Free, and Wang (2005) are. DJ Seidel was a co-author on the Santer et al. paper which I don’t think much of. But I could understand the Seidel et al. paper well enough to know that, right or wrong, it supported the point that I was making.

Sure. We tend to trust statements about things we already believe or expect to be true, no matter who says them. Confirmation bias. I’m not immune to it, nobody is.

As long as you’re pursuing scientific truth, then I agree you don’t work any differently than I do.

That’s a slight shift from what we were talking about but not entirely out of bounds. I think skepticism is an important component of any truth-seeking endeavor. That we both claim to be skeptical truth-seekers is a commonality. I’m not convinced that our approaches are the same, but I must not disregard the fact that I have my own biases and that they are yelping at me right now.
A pleasure as usual, cheers.
—————
[1] An interesting side note on this: I recently read a blog post by Dr. Spencer saying that there are potentially useful data prior to 1979 on tape at NASA, and IIRC, something about NASA not being able to find it.

Chic Bowdrie
Reply to  Brandon Gates
January 31, 2016 10:34 pm

Brandon,

I strongly suggest that this is the reason you’re having trouble answering that first question.

I answered it, but not well enough apparently. The net loss from the surface is zero. If there was a non-zero net loss, the surface temperature wouldn’t average 288K anymore. The equation is 161 + 235 = 396. The 161 is actually 322/2, because it comes in only during the day. The point is, on average, every day, 235 W/m2 is being emitted from the surface in the morning and an additional 322/2 adds to it during the day. Meanwhile at the TOA there is an average 235 W/m2 emitting to space, which provides energy balance. This is heat transfer 101, but Kiehl-Trenberth diagrams obfuscate it by inserting the recycled backradiation nonsense. And yes, it is an accounting problem, but the LWIR cancels on the bottom floors. The bulk of the energy is going up the elevator to the top floor by convection. Then radiation takes over getting the energy out to space.

Oh, to follow up, I did look at both links you provided last post.

I’m impressed. I didn’t read Schmithüsen et al. (2015), because it was peripheral to what I was interested in. Now I’ll have to take a look at it. There is also further discussion of it on tallbloke’s which you may have already followed.

I find nothing in [the Kiehl-Trenberth energy budget diagram] to support the contention: This law specifies that a good absorber is a good emitter and the IPCC supporters have interpreted this to indicate that they should in equal measure emit the terrestrial radiation absorbed by the greenhouse gases.

The alternative is to acknowledge, as you have, that collisions predominate when the atmosphere is dense. Therefore in the lower troposphere, IR active gases absorb more than they emit.

That’s why a daily min/max for any given fixed location is not a desirable feature. The sats don’t even give you that. What is your response?

I’m glad you were aware of that. I wasn’t sure.

We tend to trust statements about things we already believe or expect to be true, no matter who says them. Confirmation bias. I’m not immune to it, nobody is.

The best immunity is simply to seek the truth, scientific or otherwise.

Brandon Gates
Reply to  Brandon Gates
February 2, 2016 7:09 pm

Chic Bowdrie,

This is heat transfer 101, but Kiehl-Trenberth diagrams obfuscate it by inserting the recycled backradiation nonsense.

Yes, it is heat transfer 101. IIRC, I’ve shown you previously, by tallying up ALL the values, that each of the three levels in the diagram nets roughly to zero. In “perfect” radiative equilibrium (which never happens in reality) and without rounding error, each of those three layers would net to exactly zero.
Suppose the long-term average NET flux at ANY layer in that diagram was something significantly non-zero. Would you expect average temperature over the same interval to change or remain the same?

And yes, it is an accounting problem, but the LWIR cancels on the bottom floors.

So do a complete accounting and separately tally up all the numbers in all three layers of the diagram yourself, then tell me your answer.
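For what that tally looks like at the surface layer, here is a Python sketch using the widely cited global-mean values from the Trenberth, Fasullo and Kiehl (2009) version of the diagram; the figures are quoted from memory, so treat them as approximate:

# Surface-layer ledger, global-mean fluxes in W/m2 (TFK09-style values, approximate)
gains  = {"absorbed solar": 161, "back radiation": 333}
losses = {"surface LW emission": 396, "thermals": 17, "evapotranspiration": 80}

net = sum(gains.values()) - sum(losses.values())
print(net)   # ~ +1 W/m2 residual heating, not zero and certainly not -235

Both sides of the ledger have to appear before the residual means anything, which is the point about 161 minus 396 above.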

The bulk of the energy is going up the elevator to the top floor by convection. Then radiation takes over getting the energy out to space.

You keep skipping over the middle part. Emission/absorption is constantly occurring. What happens when a photon jumps off an up elevator on a middle floor and catches a down elevator? As well, what happens to the drier, cooler air on the way back down?

I didn’t read Schmithüsen et al. (2015), because it was peripheral to what I was interested in.

I think its conclusions have some rather interesting implications … I’ll be interested to see how often it’s cited, by whom, and for what.

The alternative is to acknowledge, as you have, that collisions predominate when the atmosphere is dense. Therefore in the lower troposphere, IR active gases absorb more than they emit.

Already acknowledged last time we talked about this. That IS the IPCC argument for thermalization in the lower troposphere as I understand it. More CO2, more net absorbers, more thermalization. Exact opposite occurs in the thinner stratosphere closer to space — more radiators closer to the exit, more cooling.

“That’s why a daily min/max for any given fixed location is not a desirable feature. The sats don’t even give you that. What is your response?”
I’m glad you were aware of that. I wasn’t sure.

Yes, I agree with you, “continuous” observation at a fixed location would be better. I’m asking you if you realize that the sats don’t even give you a daily min/max for any given location?

The best immunity is simply to seek the truth, scientific or otherwise.

Sure, I mean, clearly someone who isn’t interested in seeking truth simply won’t. The scientific method presumes good-faith truth-seeking and self-skepticism. Peer review has been found to be necessary because even in the ideal case of a good-faith, truth-seeking scientist, self-skepticism has been known to fail (we become enamored with our own hypotheses, etc.). And then there are the bad-faith cases.

Chic Bowdrie
Reply to  Brandon Gates
February 3, 2016 8:01 am

Brandon,

IIRC, I’ve shown you previously by tallying up ALL the values, each of the three levels in the diagram net roughly to zero.

I don’t recall us discussing the Kiehl-Trenberth numbers at all levels, but there is no question they sum to zero at all levels. The point is the numbers are unphysical, because they are on average approximately correct only twice a day. At no time does 161 W/m2 hit the whole surface area of the planet at the same time. When evaluating the budgeting process, you also have to recognize that there was an initial balance that accumulated previously. From that point on there are deposits and withdrawals that average close to zero, but there is never a zero balance. This is crucial to realize and if you can’t fathom it, there is no point in us continuing to discuss the Kiehl-Trenberth diagram. Plus it prevents you from accepting what little role backradiation plays in the atmosphere’s heat transfer process.
Assuming the physical unreality of the energy budget and just working with the math, all levels balance and temperatures will adjust to changes in the long (or short) term average NET flux at ANY level. That’s not an issue for me.

You keep skipping over the middle part.

No, convection handles the middle part.

Emission/absorption is constantly occurring. What happens when a photon jumps off an up elevator on a middle floor and catches a down elevator?

Yes, that is the point! For every potential photon (it’s not a photon until it’s emitted), there is a 50:50 chance it goes up or down. So there is statistically very little difference in any net radiation going up or down, except via the window, where surface radiation takes the express to the top, and at the top, where thin air prevents collisions from absorbing upward-bound photons.

More CO2, more net absorbers, more thermalization.

Another key point. CO2, or any other IR active gas, can only absorb the available radiation. Apparently, this amount never exceeds that which can be absorbed relatively close to the surface other than that which goes through the window directly to space. So the degree of thermalization is not affected by increasing CO2 or water vapor as long as they are present in sufficient concentration. What is affected is the rate of thermalization which is where convection comes in. The more radiation, the more absorption, the more thermalization, the more convection.

Exact opposite occurs in the thinner stratosphere closer to space — more radiators closer to the exit, more cooling.

Although it isn’t exactly the opposite phenomenon, I agree that more CO2 in the upper atmosphere should increase cooling, all else being equal. BTW, some of us believe that increasing CO2 should have a net cooling effect because of its contribution to enhancing convective cooling below and radiative cooling above.

I’m asking you if you realize that the sats don’t even give you a daily min/max for any given location?

I don’t know the satellite methodology. What is it and how is it worse than a high/low average taken of an incomplete set of non-homogeneous measurements?

Chic Bowdrie
Reply to  Brandon Gates
February 3, 2016 2:26 pm

BTW, some of us believe that increasing CO2 should have a net cooling effect because of its contribution to enhancing convection below and cooling above.

For example, my thoughts on Kiehl-Trenberth and convection vs. radiative energy transfer in the atmosphere seem to be shared by many at Tallbloke’s Talkshop:
https://tallbloke.wordpress.com/2013/02/19/david-cosserat-atmospheric-thermal-enhancement-part-ii-so-what-kind-of-heat-flow-throttling-do-you-favour/comment-page-1/#comment-44149

Brandon Gates
Reply to  Brandon Gates
February 3, 2016 4:50 pm

Chic Bowdrie,

I don’t recall us discussing the Kiehl-Trenberth numbers at all levels, but there is no question they sum to zero at all levels.

Probably was directed at someone else on a different thread then, my apologies for my bad memory. Salient point is that they should sum to zero at all levels in a theoretically steady-state equilibrium.

The point is the numbers are unphysical, because they are on average approximately correct only twice a day.

If the intent of the diagram were to represent fluxes for every clock-tick of the diurnal cycle for every cubic meter of the climate system, I would agree with you.

At no time does 161 W/m2 hit the whole surface area of the planet at the same time.

The fact that a globally averaged figure is what’s reported in the budget diagram does not mean that the supporting observations and calculations were done at such a highly-summarized resolution.

When evaluating the budgeting process, you also have to recognize that there was an initial balance that accumulated previously.

We’re never going to be able to trace every energy flux back to the Big Bang. Holocene max temps were higher than the LIA, which has some implications for residual heat in the deep oceans. If I know this, Trenberth and Co. surely do, plus more.

From that point on there are deposits and withdrawals that average close to zero, but there is never a zero balance.

Of course not, and nobody I’m aware of who does the actual research and writes the literature says otherwise. “Steady-state equilibrium” in this context does not mean an isothermal system with constant net input/output flux of exactly zero.

This is crucial to realize and if you can’t fathom it, there is no point in us continuing to discuss the Kiehl-Trenberth diagram.

I believe I have fathomed it, as I have both read and thought a lot about it. I think what you have said above are justifiable arguments for a healthy amount of uncertainty in the globally averaged estimates. However, in my admittedly lay opinion, I don’t see that your arguments justifiably negate the physicality of those estimates.

Plus it prevents you from accepting what little role backradiation plays in the atmosphere’s heat transfer process.

I think you falsely presume a lack of understanding on my part, which is fine: you can’t possibly know everything I think I understand until I write about it.

Assuming the physical unreality of the energy budget and just working with the math, all levels balance and temperatures will adjust to changes in the long (or short) term average NET flux at ANY level. That’s not an issue for me.

Since you’ve noted above that all levels of the budget cartoon net to zero, doesn’t that at least suggest your assumption of bad physics is incorrect? IOW, how can both of your above statements be simultaneously true?

No, convection handles the middle part.

Convection goes both ways.

Yes that is the point! For every potential photon (it’s not a photon until it’s emitted) going up or down, there is a 50:50 chance it goes up or down.

That’s the 1D model. I’ve mostly been thinking and writing using a 2D model, but on review maybe this argument works better in 1D, so I’ll go with that for now.

“Emission/absorption is constantly occurring. What happens when a photon jumps off an up elevator on a middle floor and catches a down elevator?”
So there is statistically very little difference in any net radiation going up or down except via the window where surface radiation takes the express to the top and at the top where thin air prevents collisions from absorbing upward bound photons.

The only things riding the elevator all the way to the tropopause are sensible and latent heat. All LW radiation not in the window regions does NOT; it’s constantly hopping floors up or down, and because convection goes both ways, its net effect on radiative flux outside the window region is ZERO.

Another key point. CO2, or any other IR active gas, can only absorb the available radiation.

Agreed, and again, I know of nobody in “consensus” literature arguing otherwise.

Apparently, this amount never exceeds that which can be absorbed relatively close to the surface other than that which goes through the window directly to space.

Sure. I mean, how could it without violating any known laws of thermodynamics?

So the degree of thermalization is not affected by increasing CO2 or water vapor as long as they are present in sufficient concentration. What is affected is the rate of thermalization which is where convection comes in. The more radiation, the more absorption, the more thermalization, the more convection.

The implication then is that the system is doing more work. Think Carnot cycle. There are two ways we can increase the work done by a heat engine:
1) increase its efficiency
2) increase the temperature differential between the hot and cold reservoirs
… or we can do both. Your argument implies that the temperature differential remains constant, meaning that increasing CO2 leads to increased efficiency, option (1). If you agree with that, please explain how.

Although it isn’t exactly the opposite phenomenon, I agree that more CO2 in the upper atmosphere should increase cooling, all else being equal. BTW, some of us believe that increasing CO2 should have a net cooling effect because of its contribution to enhancing convective cooling below and radiative cooling above.

Yes I’m aware of the cooling at all levels argument. My standard response: explain Venus.

I don’t know the satellite methodology.

From my previous comments in this thread: January 30, 2016 at 5:22 pm
Compare the satellites. They’re in sun-synchronous polar orbits, designed such that for any one complete orbit they get readings at approximate local noon on the daylight side of the planet, and readings at approximate local midnight on the nighttime side of the planet. NOAA-15 has an orbital period of 101.01 minutes, or about 14.25 orbits per mean solar day. The orbits of the other sats are not such that covering the same track at roughly solar noon or midnight for any one location is guaranteed. Their orbits are all decaying and precessing at different rates.
Not only do the sats not give us a daily min/max for a single location on a daily basis, they don’t give us the same set of observations for any location on a daily basis.

What is your response? I’m most interested in the min/max part of the question here.

What is it and how is it worse than a high/low average taken of an incomplete set of non-homogeneous measurements?

I’ve been asking for your comment on the fact that sats don’t even give us a daily min/max for any given location for a couple of posts now. I consider it premature of you to be asking me about surface station homogenization while your response on satellite daily min/max is outstanding.

Chic Bowdrie
Reply to  Brandon Gates
February 4, 2016 7:33 pm

Brandon, I am enjoying the clarity of your comments, albeit lengthy. In case comments get closed here, there is a recent post about sats and Christy’s Senate testimony where we might rendezvous.
Responding to my comment about 161 W/m2 not impacting globally:

The fact that a globally averaged figure is what’s reported in the budget diagram does not mean that the supporting observations and calculations were done at such a highly-summarized resolution.

My point was to reiterate that 161 W/m2 is the diurnal average of roughly 0 W/m2 at night and a daily average solar insolation of 322 W/m2 during the day or some similar paradigm accounting for latitude and longitudinal variations.

We’re never going to be able to trace every energy flux back to the Big Bang.

Whatever the circumstances causing us to arrive at present day conditions, there is a net amount of energy previously accumulated in the planet that allows us to monitor changes to the present day amount, whatever it is. Then we have the changes to that amount going forward, which are on average net zero. I think you get all that; I’m just restating to make sure we’re still on the same page.

“Steady-state equilibrium” in this context does not mean an isothermal system with constant net input/output flux of exactly zero.

I assume we’re talking about a pseudo-steady state (but never isothermal) where the net input/output flux fluctuates around zero on decadal time scales.

I think you falsely presume a lack of understanding on my part, which is fine: you can’t possibly know everything I think I understand until I write about it.

Pardon my mistake forgetting that not all AGW proponents have the same understanding about backradiation.

Since you’ve noted above that all levels of the budget cartoon net to zero, doesn’t that at least suggest your assumption of bad physics is incorrect?

Did I say bad physics? By physically unreal, I mean the tacit assumption that there is a constant source of solar energy radiating on all surfaces of the Earth at the same time. But what is more egregious in the cartoon is the long arrows with large numbers of LWIR going up and down. This leads to misinterpretation of the physics dealing with mean path lengths between collisions and the preponderance of absorption compared to emissions in the dense troposphere. It also causes misunderstandings of the 300+ W/m2 DWLR, commonly known as backradiation. The cartoon obscures the fact that the average 396 W/m2 radiating from the surface, corresponding to an average global temperature of 288K, is primarily due to the accumulated total energy of the planet. If the sun did not come up tomorrow, radiation from the surface would gradually drop as the planet cools, not go immediately to zero.

The only things riding the elevator all the way to the tropopause are sensible and latent heat. All LW radiation not in the window regions does NOT; it’s constantly hopping floors up or down, and because convection goes both ways, its net effect on radiative flux outside the window region is ZERO.

I’m quite surprised you agree with that. Same for your next two statements.

The implication then is that the system is doing more work. Think Carnot cycle.

By saying CO2 increases the rate of thermalization I wasn’t implying that the system is doing more work, and I’m not sure that is correct. Convection is globally isochoric. What’s expanding here is contracting elsewhere. There might be some non-pdV work to consider, but again it would be similarly compensated for. I’m way outside my comfort zone with this. What I meant to suggest is that more CO2 means more energy absorbed per unit time per unit volume of air and unit of available radiation. Therefore that unit parcel of air stimulates greater convection which is equivalent to faster cooling. I’ll have to learn how to say that in more thermodynamically explicit terms.

Yes I’m aware of the cooling at all levels argument. My standard response: explain Venus.

No can do. Not prepared to do that now, maybe after looking into it which will be awhile.

I consider it premature of you to be asking me about surface station homogenization while your response on satellite daily/min max is outstanding.

Sorry, but I can’t tell you what I don’t know. If the point is to expose my ignorance, mission accomplished. I presume that sats measure more frequently than twice a day and, between the 12 or so of them, cover a greater area more homogeneously. However, if sats only measure a given point twice a day at the same times, how is that better or worse than a high/low average?

Brandon Gates
Reply to  Brandon Gates
February 4, 2016 10:36 pm

Chic Bowdrie,

I am enjoying the clarity of your comments, albeit lengthy.

Thank you. That last post is on the order of half the size of the original draft; concision is admittedly not my strong point. I appreciate you sticking to the science and challenging my arguments on their merits (or lack thereof), a rare occurrence in my experience and much to your credit.

In case comments get closed here, there is a recent post about sats and Christy’s Senate testimony where we might rendezvous.

I’ve been avoiding it (too many political threads make me cranky), but may poke my head in as I referenced it in another comment to you elsewhere.

Whatever the circumstances causing us to arrive at present day conditions, there is a net amount of energy previously accumulated in the planet that allows us to monitor changes to the present day amount whatever it is.

I assume we’re talking about a pseudo-steady state (but never isothermal) where the net input/output flux fluctuates around zero on decadal time scales.

I think that’s perfectly suitable for this discussion.

Pardon my mistake forgetting that not all AGW proponents have the same understanding about backradiation.

No worries, it happens, I do it too.

My point was to reiterate that 161 W/m2 is the diurnal average of roughly 0 W/m2 at night and a daily average solar insolation of 322 W/m2 during the day or some similar paradigm accounting for latitude and longitudinal variations. […] Did I say bad physics? By physically unreal, I mean the tacit assumption that there is a constant source of solar energy radiating on all surfaces of the Earth at the same time.

Bad physics was my translation. Here’s Trenberth, Fasullo and Kiehl (2008): http://journals.ametsoc.org/doi/pdf/10.1175/2008BAMS2634.1
There’s a box on pg 5 of the pdf called “Spatial and Temporal Sampling”. Looks like KT97 used a globally averaged 15 C temperature plugged into Stefan-Boltzmann with surface emissivity set to unity to compute outgoing LW. Further down is this:
To compute these effects more exactly, we have taken the surface skin temperature from the NRA at T62 resolution and 6-h sampling and computed the correct global mean surface radiation from (1) as 396.4 W m−2. If we instead take the daily average values, thereby removing the diurnal cycle effects, the value drops to 396.1 W m−2, or a small negative bias. However, large changes occur if we first take the global mean temperature. In that case the answer is the same for 6-hourly, daily, or climatological means at 389.2 W m−2. Hence, the lack of resolution of the spatial structure leads to a low bias of about 7.2 W m−2. Indeed, when we compare the surface upward radiation from reanalyses that resolve the full spatial structure the values range from 393.4 to 396.0 W m−2
Elsewhere they speak of datasets with 3 h temporal resolution. One specific solar reference comes from a discussion about clouds:
The chronic problems in correctly emulating the distribution and radiative properties of clouds realistically in the reanalyses preclude those as useful guides for our purpose of determining a new global mean value. Accordingly, the most realistic published computations to date appear to be those of Zhang et al. (2004) in the ISCCP-FD dataset in which observed clouds were used every 3 h, at least for the solar components where the TOA view of clouds is most pertinent.
So while the latest cartoon might imply that daily means are driving all the subsequent calcs it’s pretty clear that they’ve made an attempt to take the diurnal cycle into account.
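To make the spatial-sampling point concrete, here is a minimal sketch in Python of why plugging a global-mean temperature into Stefan–Boltzmann understates the mean flux; the two-cell “planet” and its temperatures are my own invented numbers, not the NRA fields TFK08 actually used.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

# Hypothetical planet split into two equal-area cells, one warm and one cold.
temps_K = [303.0, 273.0]

mean_T = sum(temps_K) / len(temps_K)                                # 288 K
flux_of_mean_T = SIGMA * mean_T**4                                  # ~390 W/m^2
mean_of_fluxes = sum(SIGMA * T**4 for T in temps_K) / len(temps_K)  # ~396 W/m^2

print(flux_of_mean_T, mean_of_fluxes)  # averaging T first always gives the lower value, since T^4 is convex

That the gap in this toy case lands near their reported 7.2 W/m^2 low bias is a coincidence of the numbers I picked, but the direction of the effect is the point.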

But what is more egregious in the cartoon is the long arrows with large numbers of LWIR going up and down. This leads to misinterpretation of the physics dealing with mean path lengths between collisions and the preponderance of absorption compared to emissions in the dense troposphere.

I agree with you that the arrows make it look like upwelling LW shoots straight to the tropopause and some of it is re-emitted all the way back to the surface. I also agree with you that we know that isn’t what happens because the path length at the surface is on the order of tens of meters or less. I don’t know what else to say to you other than this highlights the importance of actually reading the paper and not just looking at the pretty pictures.

It also causes misunderstandings of the 300+ W/m2 DWLR, commonly known as backradiation. The cartoon obscures the fact that the average 396 W/m2 radiating from the surface, corresponding to an average global temperature of 288K, is primarily due to the accumulated total energy of the planet. If the sun did not come up tomorrow, radiation from the surface would gradually drop as the planet cools, not go immediately to zero.

Ok, I think I’m finally beginning to wrap my head around this argument, and I agree with you in part: the upwelling LW is indeed due to previously accumulated solar energy. I also agree that if the Sun blinked out, the planet would gradually cool, not immediately drop to the 2.725 K temperature of the cosmic background radiation. After that, we part ways, because I don’t think the cartoon obscures that, but rather presumes the audience has enough experience with heat retention and everyday objects to not jump to that conclusion. I don’t typically appeal to common sense unless I’m trying to be a jerk, but I’m kind of at a loss for anything else to say.
I’d rather discuss the descriptions of the physics as written rather than what may or may not be implied by the diagrams, which I hold are necessarily simplified due to the complexity of the system.

“The only things riding the elevator all the way to the tropopause are sensible and latent heat. All LW radiation not in the window regions does NOT; it’s constantly hopping floors up or down, and because convection goes both ways, its net effect on radiative flux outside the window region is ZERO.”
I’m quite surprised you agree with that. Same for your next two statements.

I’m quite surprised you agree with that; I didn’t expect you to. I fear one or both of us has misunderstood the other’s previous arguments on this particular point.

By saying CO2 increases the rate of thermalization I wasn’t implying that the system is doing more work, and I’m not sure that is correct. Convection is globally isochoric. What’s expanding here is contracting elsewhere. There might be some non-pdV work to consider, but again it would be similarly compensated for. I’m way outside my comfort zone with this. What I meant to suggest is that more CO2 means more energy absorbed per unit time per unit volume of air and unit of available radiation. Therefore that unit parcel of air stimulates greater convection which is equivalent to faster cooling. I’ll have to learn how to say that in more thermodynamically explicit terms.

You may not have meant to imply more work is done, but when you say convection increases it does imply to me that more work is being done. It’s actually what I’d naively expect to happen if the surface is heated by any means, because that would increase the temperature differential between it and outer space. I say “naively” expect, because there was an interesting paper published last year and covered here which might call that into question.
Thanks for “isochoric”. Explain “non-pdV” to me, not a term with which I’m familiar.

“My standard response: explain Venus.”
No can do. Not prepared to do that now, maybe after looking into it which will be awhile.

No problem, it’s sufficient that you’ve at least considered it.

Sorry, but I can’t tell you what I don’t know. If the point is to expose my ignorance, mission accomplished.

But I explained it once already before you asked me to explain it. Aaack!!!! Not at all my intent to expose your ignorance, more just nudging you to revisit what I’ve already written — which is admittedly a bunch.

I presume that sats measure more frequently than twice a day and, between the 12 or so of them, cover a greater area more homogeneously.

Each sat is measuring almost continuously, I don’t know how many observations per orbit. Your mention of area is spot on as it were because each observation covers a pretty broad area:
http://www.remss.com/measurements/upper-air-temperature
http://images.remss.com/figures/measurements/upper-air-temperature/footprint_map_for_web.png
Figure 2. Two example scans for the MSU instrument. The satellite is traveling in the South to North direction, and scanning (roughly) West to East, making 11 discrete measurements in each scan. Footprints used to construct the near-nadir MSU products (TMT, TTS, TLS) from the top scan are shown in green. The numbers in each footprint are the weights assigned to the footprint when constructing the average for a given scan. The footprints used to construct TLT are shown in red and blue in the lower scan, with red denoting negative weight.
With multiple sats, there are multiple daily overlaps; what they do is grid everything up and take means, not unlike what the surface folks do. The differences are that the surface stations are fixed in place and theoretically give an “exact” diurnal min/max, but not necessarily, because different equipment has different response times, and it’s been shown that co-located instruments of different types give statistically different min/max readings over time. The sats only get an approximate local noon and midnight observation for any given spot, each sat a slightly different deviation from local solar time, requiring diurnal adjustments.
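For what it’s worth, here’s a bare-bones sketch of the “grid everything up and take means” step, with made-up observations and a hypothetical 2.5-degree box size; the real products also weight footprints, merge satellites, and adjust for diurnal drift, none of which is shown here.

from collections import defaultdict

cell_size = 2.5  # degrees; an illustrative grid size, not any product's actual choice
obs = [  # (lat, lon, anomaly in K), made-up values
    (12.3, 45.1, 0.4), (12.0, 45.9, 0.6), (13.8, 44.2, 0.5),
    (-33.0, 151.0, 0.1), (-31.5, 149.9, 0.3),
]

cells = defaultdict(list)
for lat, lon, anom in obs:
    key = (int(lat // cell_size), int(lon // cell_size))  # which grid box the observation falls in
    cells[key].append(anom)

gridded = {key: sum(vals) / len(vals) for key, vals in cells.items()}
print(gridded)  # one mean anomaly per occupied grid box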

However, if sats only measure a given point twice a day at the same times, how is that better or worse than a high/low average?

Diurnal temperature range (DTR) is one key metric in the quiver of AGW predictions — it’s expected to decrease, i.e., minima are expected to increase at slightly higher rates than maxima. I would expect that to show up better in the surface record than the satellite record based on methodology alone, but that may not be the case. My understanding is that estimating DTR is actually pretty tricky.
I don’t have a problem with how satellite estimates are being done per se. Multiple daily overlapping observations, gridded means — seems fine for purposes of estimating long-term climate trends. What I don’t buy is that homogeneity isn’t a problem, as is often alleged in this forum, because the literature on the sats, much of which is published by the providers themselves, clearly documents the known homogenization issues that they’ve adjusted for. They also allow for the real possibility that some exist which they haven’t anticipated or yet detected. Surface observation literature contains all the same language.
Which is more reliable? You know my answer based on the trend uncertainty estimates. Which is better? Depends on the application. That I believe the satellite uncertainty to be higher doesn’t mean I think the data are useless.
The relatively flatter satellite trends in the “pause” era raise my eyebrows because they don’t match my expectations. One thing I’m keen on is how the current El Nino will play out. The next six to eight months will be interesting to me on that score.

Chic Bowdrie
Reply to  Brandon Gates
February 5, 2016 9:00 am

Brandon,

So while the latest cartoon might imply that daily means are driving all the subsequent calcs it’s pretty clear that they’ve made an attempt to take the diurnal cycle into account.

I realize that. What I didn’t realize is that taking the rotation of the planet into account doesn’t make the cartoon represent the pertinent physics any better. IOW, convection vs. radiation, etc.

After that, we part ways because I don’t think the cartoon obscures that, but rather presumes the audience has enough experience with heat retention and everyday objects to not jump to that conclusion.

I’m not the only one where that presumption has caused confusion. The cartoon implies that the additional heat to make up the 396 W/m2 comes solely from the atmosphere.

My understanding is that estimating DTR is actually pretty tricky.

Why would anyone want to estimate DTR? We need actual temperature measurements as accurate as can be, not estimates of DTR from estimates of high and low temperatures.

Which is more reliable? You know my answer based on the trend uncertainty estimates.

Trends depend on temperature measurements. Before a trend uncertainty can be used to compare measurement methodology, the accuracy (not necessarily precision?) of the methodologies must be known. I think I would trust trends from accurate satellite measurements with greater uncertainty than more certain trends from less accurate surface measurements. I think, because I would have to do some kind of error propagation analysis to be sure.

The relatively flatter satellite trends in the “pause” era raise my eyebrows because they don’t match my expectations.

Which is why eyebrows of skeptics are raised when they see the adjustments in the surface data making the past colder instead of eliminating the urban heat effects of the present. Doesn’t that trouble you, as well?

Brandon Gates
Reply to  Brandon Gates
February 5, 2016 9:31 pm

Chic Bowdrie,

I realize that. What I didn’t realize is that taking the rotation of the planet into account doesn’t make the cartoon represent the pertinent physics any better. IOW, convection vs. radiation, etc.

I was surprised by that as well. Way I’m reading it, what they found out since ’97 is that the regional errors mostly cancel out.
At this point it’s not clear to me that you have any objections about the physics behind the cartoon, only its presentation.

I’m not the only one where that presumption has caused confusion. The cartoon implies that the additional heat to make up the 396 W/m2 comes solely from the atmosphere.

I just don’t understand that argument, sorry. It’s literally clear as day to me that insolation is the only input considered and that net LW flux at the surface is negative (a net loss). IOW, exactly what my understanding of “correct” physics would suggest.

Why would anyone want to estimate DTR?

Just as I said last post. Nighttime temps are expected to increase faster than daytime temps under GHG-forced warming.

We need actual temperature measurements as accurate as can be, not estimates of DTR from estimates of high and low temperatures.

I don’t see those as mutually exclusive in principle. In practice, for most historical surface data, min/max is all we’ve got. As technology progressed, we’ve been obtaining more and more hourly and sub-hourly data: https://www.ncdc.noaa.gov/data-access/land-based-station-data
Click on the “Integrated Surface Hourly Data Base (3505)” link, goes to an FTP folder with a bunch of data. I’ve no idea how extensive it is, or how it’s being used. I think it would be ideal if all surface data were hourly, and reported a min/max. We’d still be stuck with the legacy data however. We can only use what we have.

Trends depend on temperature measurements. Before a trend uncertainty can be used to compare measurement methodology, the accuracy (not necessarily precision?) of the methodologies must be known.

For long-term climate, I think precision is more important. Most important is lack of bias. If I have a thermometer that consistently reads 1 degree cold +/- 0.1 degrees, and is “known” to have not drifted from its initial bad calibration in terms of accuracy, then I don’t care about its absolute value for purposes of creating an anomaly time series.
Different story for modelling, we need to know the accurate absolute temperature.

I think I would trust trends from accurate satellite measurements with greater uncertainty than more certain trends from less accurate surface measurements.

So would I, but how do I know which is more accurate? How do you? You keep saying satellites are, how do you know?

I think, because I would have to do some kind of error propagation analysis to be sure.

I’m pretty sure you’re correct. Autoregression and co-variance matrices pop up a lot in literature on homogenization.

Which is why eyebrows of skeptics are raised when they see the adjustments in the surface data making the past colder instead of eliminating the urban heat effects of the present. Doesn’t that trouble you, as well?

Only until I did my homework. There are two punchlines: BEST, and the fact that SST adjustments are net-cooling. The exercise of Karl et al. (2015) in busting “The Pause that never was” failed to put a dent in the long-term net adjustment. Going by magnitude of change, even Karl (2015) is less eyebrow-raising than what UAH TLT v5.6 to v6.0beta did post-2000, and they still haven’t passed peer review OR released code yet. If it weren’t for RSS, that would be sufficient to throw UAH under the bus by way of the raised-eyebrow criterion.

Chic Bowdrie
Reply to  Brandon Gates
February 8, 2016 9:20 pm

Brandon,
Sorry about the delay. I had to catch up on things.

I just don’t understand that argument, sorry. It’s literally clear as day to me that insolation is the only input considered and that net LW flux at the surface is negative (a net loss). IOW, exactly what my understanding of “correct” physics would suggest.

Just when I thought we were on the same page, you lost me again. How can the LW flux at the surface be a net loss? I would expect you to claim a slight gain, with the excess going into the ocean or being the cause of the alleged surface warming that replaced the alleged pause that wasn’t.

Nighttime temps are expected to increase faster than daytime temps under GHG-forced warming.

That may be the case, but that doesn’t mean that average daily temperature is actually increasing. A high-low average is biased towards warming. The only way to know if the actual temperature is rising is to make frequent measurements, ideally infinitely many.

For long-term climate, I think precision is more important. Most important is lack of bias. If I have a thermometer that consistently reads 1 degree cold +/- 0.1 degrees, and is “known” to have not drifted from its initial bad calibration in terms of accuracy, then I don’t care about its absolute value for purposes of creating an anomaly time series.

Give this some more thought. Precision is desirable, but accuracy is king. Bias is lack of accuracy. Thermometer accuracy is not the only concern. The whole urban heat island issue involves artificially high temperature readings, aka bias, aka lack of accuracy. It doesn’t matter how consistently wrong a reading is, if it’s wrong, it biases the results.

I think I would trust trends from accurate satellite measurements with greater uncertainty than more certain trends from less accurate surface measurements.
So would I, but how do I know which is more accurate? How do you? You keep saying satellites are, how do you know?

If I jumped to the conclusion satellites are more accurate, I shouldn’t have. I haven’t looked into the methodology that closely. The way to test is to take data from locations known to be free of urban heat effects and make sufficient measurements to eliminate diurnal bias. Then the satellite measurements must cover those same areas. Of course, this is still comparing apples and oranges. Radiosonde data at the same locations might be able to help synchronize the data somehow. It’s possible that there is a real, i.e. accurate, actual difference between these measurements that changes from place to place. For example, land vs. sea, flatland vs. mountains. This all makes evaluating trends complicated. But without this baseline info, how can trend comparison be definitive?

Brandon Gates
Reply to  Brandon Gates
February 9, 2016 7:29 pm

Chic Bowdrie,

How can the LW flux at the surface be a net loss? I would expect you to claim a slight gain with the excess going into the ocean or being the cause of the alleged surface warming that replaced the alleged pause that wasn’t.

I do claim a slight gain, but not so much that net LW goes positive. For that to happen, the atmosphere would need to be warmer on average than the surface, which it clearly is not, implying that CO2 is a source of energy, which it also clearly is not.
Let’s look at the cartoon again:
Net LW flux at the surface, clearly negative, 40 W/m^2 alone due to the atmospheric window. Tally up all the other fluxes except the Sun, and you get -160 W/m^2. Absorbed solar is 161 W/m^2, net balance = +1 W/m^2.
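Spelling that tally out (a minimal sketch: the 161, 396 and 40-through-the-window figures are quoted in this thread; the 333 back radiation, 17 thermals and 80 evapotranspiration are the TFK08 values as I recall them, so treat this as my reading of the cartoon rather than anything authoritative):

# Surface fluxes in W/m^2; sign convention: positive warms the surface.
fluxes = {
    "absorbed solar":      +161,
    "surface LW emission": -396,
    "back radiation":      +333,
    "thermals":             -17,
    "evapotranspiration":   -80,
}

net_lw    = fluxes["surface LW emission"] + fluxes["back radiation"]    # -63, i.e. a net LW loss
non_solar = sum(v for k, v in fluxes.items() if k != "absorbed solar")  # -160
balance   = sum(fluxes.values())                                        # +1
print(net_lw, non_solar, balance)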
Sun heats the surface, GHGs reduce the rate of loss. Temperature rises until the radiative balance is restored. I do believe most of the excess is going into the oceans:
http://climexp.knmi.nl/data/iheat2000_global.png

5.47E21 J/yr / 3.16E07 s/yr = 1.73E14 J/s (W)
1.73E14 W / 5.10E14 m^2 = 0.34 W/m^2

As you cited earlier, Stephens (2013) gives 0.6 +/- 0.4 W/m^2, so with just over half of the mass of the oceans considered, we’re already inside the error bounds and over halfway to the central estimate.
Compare HADCRUT4 over 1957-2015 as a proxy for net atmospheric temperature change:

0.0130 K/yr * 5.18E+18 kg * 1.005 kJ/kg K * 1,000 J/kJ = 6.72E19 J/yr
6.72E19 J/yr / 3.16E07 s/yr = 2.13E12 J/s (W)
2.13E12 W / 5.10E14 m^2 = 0.004 W/m^2

That’s a rounding error no matter whether we’re estimating air temperature change with surface thermometers, balloon-borne thermometers or orbiting microwave sounding units. But noting that 0.004 / 0.6 = 0.7%, it’s within reach of what the IPCC say we should expect (I do this from memory), and multiplying 0.6 W/m^2 by these percentages:

0.93 oceans 0.558 W/m^2
0.05 land        0.030 W/m^2
0.01 latent heat 0.006 W/m^2
0.01 atmosphere  0.006 W/m^2
----
1.00 total       0.600 W/m^2

For sake of argument, I’ll assume all my missing heat is going into the oceans …

5.47E21 J/yr * 0.558 W/m^2 / 0.34 W/m^2 = 8.98E21 J/yr
8.98E21 J/yr / 1,000 J/kJ / 4.006 kJ/kg K / 1.35E21 kg  = 0.0017 K/yr
0.0130 K/yr / 0.0017 K/yr = 7.8

… implying that surface temperatures are rising about 8 times faster than the vertical average ocean temperature.
Now, NOT assuming all the missing heat is going into the oceans, I get a ratio of 7.6:1 surface/ocean temperature.
From Bintanja (2008) I expect the ratio to be 5:1. The higher ratios of 7.6:1 observed (or 8:1 calculated) combined with an estimated imbalance of 0.6 W/m^2 plus the well-constrained physical parameters of ocean water and atmosphere strongly suggests to me that oceans are in fact absorbing most of the heat and are the main reason for the lag between increased forcing and final (pseudo-)equilibrium temperature.
In short, as is often said, there’s more warming “in the pipeline” even if CO2 levels stabilized tomorrow …
… but only, of course, if the observations above are reasonably correct, I didn’t badly screw up any math and long-standing theories of radiative physics aren’t completely wrong.
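For anyone wanting to check my arithmetic, here is the whole back-of-envelope collected in one runnable sketch; every input is a figure quoted above (OHC trend, HADCRUT4 trend, masses, heat capacities, the Stephens imbalance and its 93% ocean share), none of them independently verified here.

SEC_PER_YR  = 3.16e7    # s/yr
EARTH_AREA  = 5.10e14   # m^2
OHC_TREND   = 5.47e21   # J/yr, 0-2000 m ocean heat content increase
ATM_MASS    = 5.18e18   # kg
CP_AIR      = 1005.0    # J/(kg K)
ATM_TREND   = 0.0130    # K/yr, HADCRUT4 1957-2015 as a proxy for air temperature change
CP_SEAWATER = 4006.0    # J/(kg K)
OCEAN_MASS  = 1.35e21   # kg, the just-over-half of the ocean considered above

ocean_flux = OHC_TREND / SEC_PER_YR / EARTH_AREA                      # ~0.34 W/m^2
atm_flux   = ATM_TREND * ATM_MASS * CP_AIR / SEC_PER_YR / EARTH_AREA  # ~0.004 W/m^2

ocean_share   = 0.93 * 0.6                                # 0.558 W/m^2 of the 0.6 W/m^2 imbalance
ohc_scaled    = OHC_TREND * ocean_share / ocean_flux      # ~8.98e21 J/yr
ocean_warming = ohc_scaled / (CP_SEAWATER * OCEAN_MASS)   # ~0.0017 K/yr

print(ocean_flux, atm_flux, ATM_TREND / ocean_warming)    # the last ratio is the ~7.8 surface/ocean figure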

“Nighttime temps are expected to increase faster than daytime temps under GHG-forced warming.”
That may be the case, but that doesn’t mean that average daily temperature is actually increasing.

Minima and maxima are both increasing. It’s difficult to imagine the mean not going up as well.

A high-low average is biased towards warming.

Why?

The only way to know if the actual temperature is rising is to make frequent measurements, ideally infinitely many.

Never going to happen. Already NOT happening with satellite data, which you do trust.

Precision is desirable, but accuracy is king.

Depends on intended application as I’ve explained previously. If you need absolute temperature, accuracy is absolutely king.

Bias is lack of accuracy.

I thought we agreed that bias in this application is a drift in the absolute reading over time. If the initial calibration was wrong, so long as the thing doesn’t drift from that wrong calibration, precision is what I’d most care about for purposes of trend calculation.

The whole urban heat island issue involves artificially high temperature readings, aka bias, aka lack of accuracy.

It’s the increased urbanization which is the problem in UHI — change in surroundings over time. You can have UHI in Minneapolis, which is a damn sight cooler on average than a remote atoll in the tropical Pacific which has never so much as had an outhouse built on it.

It doesn’t matter how consistently wrong a reading is, if it’s wrong, it biases the results.

Again, that’s only true if we care, or only care, about absolute temperature. The slope of two parallel lines is exactly the same no matter what their y-intercept is. Temperature anomaly products are not averages of mean absolute temperatures, but mean anomaly from a baseline average for each series of observations.
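Toy numbers, in case it helps (an invented series, not real station data): a constant calibration offset leaves a fitted trend untouched, while a drift in the offset does not.

years      = list(range(30))
true_temps = [10.0 + 0.02 * y for y in years]   # a made-up 0.02 C/yr warming

def slope(xs, ys):
    # ordinary least-squares slope
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

biased   = [t - 1.0 for t in true_temps]                      # always reads 1 C cold
drifting = [t + 0.01 * y for t, y in zip(true_temps, years)]  # e.g. creeping UHI contamination

print(slope(years, true_temps))  # 0.02
print(slope(years, biased))      # 0.02, same trend despite the constant offset
print(slope(years, drifting))    # 0.03, the drift shows up as a spurious extra trend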

If I jumped to the conclusion satellites are more accurate, I shouldn’t have. I haven’t looked into the methodology that closely.

Spencer et al. (1990) may be a good place to start: http://journals.ametsoc.org/doi/pdf/10.1175/1520-0442%281990%29003%3C1111%3AGATMWS%3E2.0.CO%3B2
They report precision of +/-0.03 K for two-day hemispheric averages, and +/-0.01 K for one month means. If they report what the typical precision is for a single grid, I did not see it. As you dig in, you’ll find that much depends what the satellite is flying over, implying that not all grids are equal from the standpoint of precision.

The way to test is to take data from locations known to be free of urban heat effects and make sufficient measurements to eliminate diurnal bias.

I agree. However, going back to 1850 and doing it over properly, by any definition of “proper”, isn’t an option.

Then the satellite measurements must cover those same areas.

Clearly not an option prior to 1979. And we’re still humped when it comes to the poles with satellites, IIRC because not enough is known about surface emissivity in those areas to correct for surface returns as they do everywhere else.

Of course this is still comparing apples and oranges.

Yes, that’s what I’ve been saying.

Radiosonde data at the same locations might be able to help synchronize the data somehow. It’s possible that there is a real, i.e. accurate, actual difference between these measurements that changes from place to place. For example, land vs. sea, flatland vs. mountains. This all makes evaluating trends complicated.

I very much agree.

But without this baseline info, how can trend comparison be definitive?

That all depends on what your standard of definitive is. If you need > 6-sigma certainty, which I understand to be the standard in high-energy physics, I’m pretty sure you’re not gonna get it any time soon. I go by the error estimates and say, this is what we think is true within that envelope … same as I would any other complex science.

Chic Bowdrie
Reply to  Brandon Gates
February 10, 2016 6:42 am

Brandon,
Net LW loss, my bad. The incoming SW balances with a net gain.

Sun heats the surface, GHGs reduce the rate of loss.

IR active gases absorb the radiation, but cannot reduce the rate of loss. How could they? Assuming no change in incoming solar SW, there still must be the same rate of LW emitted to space. Think about it. Take away the whole atmosphere. As long as the incoming radiation doesn’t change, the outgoing won’t change.

Temperature rises until the radiative balance is restored.

Only when there is a net incoming imbalance. And it’s not yet proven that IR gases cause most, if any, imbalance. It could be all solar insolation. The presence of an atmosphere and its composition does warm the surface to some degree compared to no atmosphere, but the equilibrium rate of energy loss is solely dependent on the incoming solar.

I do believe most of the excess is going into the oceans.

And that excess could be totally due to solar insolation, not necessarily any increase in CO2. These things have to be proven.

From Bintanja (2008) I expect the ratio to be 5:1. The higher ratios of 7.6:1 observed (or 8:1 calculated) combined with an estimated imbalance of 0.6 W/m^2 plus the well-constrained physical parameters of ocean water and atmosphere strongly suggests to me that oceans are in fact absorbing most of the heat and are the main reason for the lag between increased forcing and final (pseudo-)equilibrium temperature.

I wouldn’t put too much stock in those numbers. The 0.6 is highly speculative. If you assume the sea surface temperature trend to be twice the vertical-average ocean temperature trend (which is more realistic), and estimate the surface warming trend as something closer to a satellite warming trend (which could be a quarter as much), you get both sea and land increasing at the same rate.

Minima and maxima are both increasing. It’s difficult to imagine the mean not going up as well.

That’s where your diurnal bias comes in. The rest of the day temperatures don’t necessarily rise just because the low does.
I say that a high/low average is usually biased towards warming. You ask why. I haven’t figured that out yet. Maybe you can after seeing some data and answering the question “Since automated sensors report information every hour, how does a simple average of these values compare with the average of the high and low daily temperature?” asked here: http://mesonet.agron.iastate.edu/onsite/features/cat.php?day=2009-07-31
The main point here is that the more data points the better. Of course infinitely many points is only a theoretical limit, the integral of the continuous change in temperature over a 24 hour period.
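Here’s a quick made-up illustration of that Mesonet question; the diurnal shape below is invented (a fast warm-up and a long slow decay), so it only shows that the two averages can differ, not what real stations do.

import math

def temp(hour):
    # toy skewed diurnal cycle in deg C: sharp mid-afternoon peak, slow overnight decay
    return 15.0 + 8.0 * math.exp(-(((hour - 15.0) % 24) ** 2) / 18.0)

hourly      = [temp(h) for h in range(24)]
hourly_mean = sum(hourly) / len(hourly)          # ~16.4 C
minmax_mean = (max(hourly) + min(hourly)) / 2.0  # ~19.0 C

print(hourly_mean, minmax_mean)  # for this shape the high/low average runs warm of the hourly mean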

“Precision is desirable, but accuracy is king.”

Depends on intended application as I’ve explained previously. If you need absolute temperature, accuracy is absolutely king.

I’m losing patience over the precision vs. accuracy argument. I thought I already explained that error propagation shows you can bias a trend up while maintaining the same standard error of the trend. Said another way, precision determines the standard error of the trend, but says nothing about the accuracy of the trend. You can get very tight data obtained inaccurately. I don’t know how to make that clearer.

I thought we agreed that bias in this application is a drift in the absolute reading over time. If the initial calibration was wrong, so long as the thing doesn’t drift from that wrong calibration, precision is what I’d most care about for purposes of trend calculation.

Of course, if a calibration doesn’t drift, then there’s no drift. It’s the calibration that does drift, or any other drift, that adds bias over time.

It’s the increased urbanization which is the problem in UHI — change in surroundings over time. You can have UHI in Minneapolis, which is a damn sight cooler on average than a remote atoll in the tropical Pacific which has never so much as had an outhouse built on it.

UHI is a gradual bias over time. It makes a trend go up with time. It doesn’t matter if it’s the coldest spot on Earth or the warmest, it makes a trend steeper than it would otherwise be.

Again, that’s only true if we care, or only care, about absolute temperature. The slope of two parallel lines is exactly the same no matter what their y-intercept is. Temperature anomaly products are not averages of mean absolute temperatures, but mean anomaly from a baseline average for each series of observations.

Using anomalies doesn’t justify ignoring drifts that affect trends. So if one set of measurements over time is subject to an increasing deviation from the true values, i.e. a calibration drift or an UHI effect, then the trend is less correct, less accurate, therefore biased, relative to the true trend. Precise measurements that gradually drift cannot be preferred over true measurements unless you are purposefully intending to pass off the biased trend as real.

Clearly not an option prior to 1979. And we’re still humped when it comes to the poles with satellites, IIRC because not enough is known about surface emissivity in those areas to correct for surface returns as they do everywhere else.

Why would you even consider doing a retrospective study? I was proposing well-controlled future experiments that would determine how accurate and precise surface measurements are relative to satellite measurements. That provides ability to judge which data sets, if either, are best. Meanwhile, one remains confirmationally biased towards one or the other.

Chic Bowdrie
Reply to  Brandon Gates
February 10, 2016 11:25 am

Brandon,
Now I think I understand why the high/low average is usually biased towards warming. The rate of temperature rise in the morning will generally be faster than cooling later in the day. But the cooling rate decelerates at night causing a simple high/low temperature average to be greater than an average of hourly readings. The greater the deceleration, the greater the resulting high/low average bias.
If this bias remained the same year to year, then the long term trends wouldn’t be affected. However, let’s say that increasing CO2 or water vapor causes long term diurnal warming or cooling rates to change. These changes could bias the trends, because the high/low average daily temperatures would not reflect the truer daily temperatures calculated from the average of hourly readings.

Chic Bowdrie
Reply to  Chic Bowdrie
January 28, 2016 2:25 pm

Mod, the earlier version was posted on the TEST page. That version can be deleted.

John Whitman
January 27, 2016 10:47 am

From the paper ‘A Model-Based Approach to Climate Reconstruction Using Tree-Ring Data’ by Schofield, Barker, Gelman, Cook & Briffa (Journal of the American Statistical Association),
{bold emphasis mine – John Whitman}
7.1 Message for the paleoclimate community
We have demonstrated model-based approaches for tree-ring based reconstructions that are able to incorporate the assumptions of traditional approaches as special cases. The modeling framework allows us to relax assumptions long used out of necessity, giving flexibility to our model choices. Using the Scots pine data from Torneträsk we show how modeling choices matter. Alternative models fitting the data equally well can lead to substantially different predictions. These results do not necessarily mean that existing reconstructions are incorrect. If the assumptions underlying the reconstruction is a close approximation of reality, the resulting prediction and associated uncertainty will likely be appropriate (up to the problems associated with the two-step procedures used). However, if we are unsure whether the assumptions are correct and there are other assumptions equally plausible a-priori, we will have unrecognized uncertainty in the predictions. We believe that such uncertainty should be acknowledged when using standardized data and default models. As an example consider the predictions from model mb ts con for Abisko, Sweden. If we believe the assumptions underlying model mb ts con then there is a 95% probability that summer mean temperature in 1599 was between 8.1 °C and 12.0 °C as suggested by the central credible interval (Figure 4(a)). However, if we adopt the assumptions underlying model mb ts spl pl we would believe that the summer mean temperature in 1599 may have been much colder than 8.1 °C with a 95% credible interval between 4.1 °C and 7.8 °C. In practice, unless the data are able to discriminate between these assumptions (which they were not able to do here as shown in Section 6), there is more uncertainty about the summer mean temperature in 1599 than that found in any one model considered. We believe that such model uncertainty needs to be recognized by the community as an important source of uncertainty associated with predictions of historical climate. The use of default methods makes evaluation of such uncertainty difficult.

The message to the paleoclimate community should also be the message to global climate modeling community who create the models (ie GCMs and related models) endorsed by the IPCC and used in the IPCC’s assessments reports.
Climate modeling with explicitly specified inherent uncertainties, as discussed in the article, gives one a significantly broader perspective on the relevance of change in phenomena observed in the EAS**.
** EAS – Earth Atmosphere System
John

Resourceguy
January 27, 2016 11:02 am

Question. What do you get when you cross a bad surface temperature station with a bad tree proxy? Answer…..
A. Paper mache science models
B. Sticks for hockey
or C. None of the above for purposes of science

randy
January 27, 2016 11:10 am

As someone who grows a wide range of trees, I am not even a little convinced you can get useful temp data out of tree rings. There is a wide range of variables involved in growing a tree well. Even just the lay of the land the tree is growing in can make a massive difference in outcome, as can spacing. Take a bunch of crammed trees and one with space: the one that has more space will grow much better and look like the outlier, when most of the other trees would have been just as vibrant if they also had adequate room (light, competition for nutrients and water, etc.). So even if the assumptions you pick end up fitting with reality, you will almost certainly be building unforeseen biases into your methodology when you take those assumptions and apply them to a set of trees from a different era or area (or species, etc.). Trees also appear to take time to “bounce back” from the extremes. That is not as relevant with all species in all areas, but in places like the high desert where I live, trees will grow slower for a few years after SPRING dry periods even if conditions are otherwise optimal. They really want that water specifically in spring here. A few weeks off, even if the year ends up very wet, will leave the trees in a much slower growing state. Which means rains being sparse at a key time in what was overall a wet year could make the records look like it was cool for several years in a row, when it might have been unusually warm and even wet the whole time. I highly doubt you could ever quantify all these variables when in many cases we only have the trees themselves as a record, and a wide range of variables can show up as the same data.

randy
Reply to  randy
January 27, 2016 11:16 am

This goes the other way as well. You can have good spring water and a marginal year overall and the high desert trees will be doing very well. I’m not sure how this could be a measurement for temp at all when it won’t even tell you how much water the tree got, the main factor being water in a very specific time frame. Couple that with how differently trees grow based on their spacing, and I’d be only slightly less likely to believe you have magic powers than an accurate temp line out of tree rings.

Brandon Gates
Reply to  randy
January 28, 2016 10:12 am

randy,

“Difficult does not mean impossible.”
I expect it does in this case actually, assuming you want accurate results.

While I can’t speak for everyone involved, I want the most accurate (and precise) results possible. That qualifier at the end is key. Something else I want is results based on multiple lines of evidence, which is even more key.

I have grown trees of many species for decades now. I can list factors independent of temps that will have trees growing rather well in a given year or others that can stunt them for several years in a row.

I commend you on your gardening prowess, something which I lack in spades. It so happens my freshman biology prof was a botanist by trade, and he’d probably turn over in his grave if he knew how much of his flora-centric lectures I’ve forgotten, or that I couldn’t keep a plant alive if my own life depended on it. One thing I did retain is that plants are sensitive to many other environmental factors than temperature. I don’t wish to overtly offend here when being mildly rude should suffice, but I think it’s preposterous to imply that dendroclimatologists know less than an avid arborist and an ex-biology student who grew up to be a database programmer instead. Just sayin’.

With no other data besides the tree rings how in the world could you tell which of many factors lead to good or poor growth? Magic?

Magic would be pretty much the only resort, and I don’t think it would work very well. You presume no other information, which is curious. Have you actually read MBH98?
http://www.meteo.psu.edu/holocene/public_html/shared/articles/mbh98.pdf
Any works since then?

statistical analysis might make you believe you have good data, but you can’t separate different variables that cause the same things from each other just because you decided your model had merit.

True, but again you make presumptions about methods which aren’t well-founded. Schofield et al. here make a far more informed critique than either you or I have, and do find that choice of assumptions and model affect results. Unlike you, they do not conclude, “holy cow, this crap is completely useless!”

What about the other 6-9 months of the year? In my area it’s 8-9 months of a given year with no tree growth, what then? Spring and summer temps do NOT directly correlate to what fall and winter temps will be. This last point by itself should put the whole field to bed.

No, that last point does not put the whole field to bed for me. Data are data, so if I have a proxy which only tells me about three months out of the year, that’s still three months I wouldn’t otherwise have. We do have better off-season proxies, ice cores come to mind, and for what it’s worth, they’re quite popular with dbstealey (he really likes Alley (2000) for some reason). They, however, suffer from poor spatial and temporal resolution as compared to tree rings, which give very nice annual resolution and are found across much more of the landscape than ice, which tends to melt annually where trees grow.
It would be nice to have all the data we want, at whatever arbitrarily high precision and resolution we could reasonably (or unreasonably) demand, but we don’t. In the scenario where indicators are sparse and — relative to modern instrumentation — imprecise, chucking aside every proxy which doesn’t meet our exacting standards pretty much winds up with us having nothing to go on at all.
Individually, all proxies are relatively weak indicators. In concert, their consilience allows stronger conclusions. You really need to read MBH98. It’s not hard to notice that it ain’t all about treemometers. I daresay one would have to deliberately attempt to miss that fact.

We don’t have ANY data for 3/4 of the year and a dozen variables can alter the trend for the period we do have data, but we totally got accurate results!!! pffft. MAGIC!!!

I’ve not ever read a paleoclimate paper that does not contain estimated uncertainties, no matter what mix of proxies were used. They often say their conclusions are “robust” but that does not mean “totally accurate”.
Next time you catch a dendro researcher testifying to Congress, “I think we should look at the tree data, they’re the best data we have”, then you can shove that under my nose and let me smell the hubris. Then I might actually agree with you that it smells like BS.

Svend Ferdinandsen
January 27, 2016 11:12 am

The only thing tree rings can tell is how beneficial the climate was for the type of trees investigated.
If a good climate or bad climate for trees is the same for humans is not known. And it is not at all defined what a good climate for humans is, so it is all up to some creative people to make up their own assumptions, and they all end up concluding it is worse than we thought, because that’s what makes headlines.
Where are the papers that scientifically define what influences the variation in tree rings?
I have not seen them, but they may exist.

Kev-in-Uk
Reply to  Svend Ferdinandsen
January 27, 2016 11:44 am

Svend, I’m sure everyone involved in tree ring analysis is well aware of the possible variables that affect growth, of which there must be a great many. Unfortunately, it seems confirmation bias tends to sway both research direction (in this case as temperature proxies) and of course any findings or conclusions.
As far as I recall, (as it is from many years ago) – initial tree ring works were primarily for dating and cross matching pieces of timber from archeological digs? (but I may be wrong?). Wasn’t the objective to have crossover tree ring measurements from ‘current’ or known ‘cut’ aged trees, overlaid with gradually older and older trees from the same region so that eventually a long but local ‘sequence’ of annual tree ring growth could be used as a template to date older timbers? In any event, I don’t recall them initially being used as actual proxies for anything ‘directly’, other than obvious periods when growth was good or was stunted (as you say)! The assumption of temperature, rainfall, soil erosion, whatever, type effects appears to have been an ‘added’ bonus to tree ring analysis, despite the numerous possible combinations of variables they may actually display.
I too would like to see the science behind the proxy derivations – but based on what I have seen to date, it seems to rest mostly on assumptions and statistical modeling! Then again, even the surface datasets are proxies in ‘real’ scale terms, and of course, are subjected to rigorous (sorry, perhaps tortuous?) statistical treatment!

RACookPE1978
Editor
Reply to  Kev-in-Uk
January 27, 2016 11:53 am

Kev-in-Uk

Wasn’t the objective to have crossover tree ring measurements from ‘current’ or known ‘cut’ aged trees, overlaid with gradually older and older trees from the same region so that eventually a long but local ‘sequence’ of annual tree ring growth could be used as a template to date older timbers? In any event, I don’t recall them initially being used as actual proxies for anything ‘directly’, other than obvious periods when growth was good or was stunted (as you say)! The assumption of temperature, rainfall, soil erosion, whatever, type effects appears to have been an ‘added’ bonus to tree ring analysis, despite the numerous possible combinations of variables they may actually display.

Yes.
Archeologically, the tree rings were (still are!) absolutely invaluable in dating the objects buried at the same levels (or below) the wood relics “dated” by their tree rings. Dendrochronology, as mentioned above and below by others earlier. It’s very accurate on a year-by-year basis.
Also, once a tree ring sequence can be dated, the carbon-14 content of those rings can be used to calibrate the radiocarbon timescale over that span. Once that is done, then any wood item from nearly any “recent” archaeological (human-built) dig can be used to date the items around the wooden remnant. (You don’t need a wood item with visible and unique “tree rings”; you only need a small part of the wood itself.) Now, carbon 14 decays relatively rapidly, so they can’t go too far back with it, but it is a very important start. And a bridge to other technologies.

Proud Skeptic
January 27, 2016 12:07 pm

It would be hard to convince me that tree rings can be used for anything other than getting a rough idea of what the growing season was at the time they were formed.
The idea that you can narrow it down to temperature (as opposed to growing conditions) is a stretch and the idea that you can get any kind of useful accuracy out of them is (IMHO) absurd.

Cgy Rock Dr.
January 27, 2016 12:22 pm

Models are not reality …. Models are not Theory …. Models are simplistic metaphors which may or may not be directionally useful.

January 27, 2016 12:24 pm

The idea of a treemometer is faintly ridiculous. Tree rings typically vary from one side of a tree to the other. Which side is the thermometer?
Even when comparisons can be made, the correlation is much greater to CO2: [bar graph]

Kev-in-Uk
Reply to  dbstealey
January 27, 2016 1:04 pm


that’s interesting – but how does anyone know (as in with a reasonable degree of certainty)? I mean, why/how has it been deduced that correlation to CO2 is better than to temperature? What about rainfall, sunlight, flooding, etc, etc.? It kind of strikes me that unless you know the detailed climatic and geological conditions around a specific tree location (being analysed), you still have to make inferences about what the changes may mean.
If I take a medieval deciduous forest, for example, and then imagine that a bunch of Vikings moved in and stripped out all the Pine (birch/whatever?) trees, leaving, say, the Oaks behind, surely that would have an effect on the remaining Oaks’ growth (more sunlight) – but unless you have detailed knowledge of the ‘local’ changes, you could (in a palaeo sense) assign those changes to CO2, rainfall, temperature or whatever – and you’d potentially be wrong? So how would you be able to know? OK, so this is an anthropogenic example, but you get the point. I get it that many proxies can be cross correlated, but local effects MUST be different and could themselves be cross related. So for example, Greenland presumably would have been logged when the Vikings got there, and the ring growth in some trees (left standing) would be shown to be good due to increased sunlight, etc. – this therefore does not automatically reflect the MWP. So, even as an AGW skeptic (who accepts the MWP), I find that tree rings are difficult to accept as real proxies in any direct demonstrable sense. Dunno if I’ve explained it well enough – but anyway that’s just my view…

Reply to  Kev-in-Uk
January 27, 2016 2:28 pm

Hi Kev-in-UK,
You asked:
…why/how has it been deduced that correlation to CO2 is better than to temperature?
My answer: I don’t know. The bar graph I posted has references to a lot of peer reviewed papers (at the bottom of each bar). I’ve not read them. I don’t put much stock in that methodology. Maybe it works. Like you, I think there are better proxies.
I was only trying to show that CO2 matters more than temperature. I agree with you that all those other variables matter, too. I’ve read comments here from readers who have observed that trees grow much faster when adjoining trees are removed, and similar situations. So yes, very local changes can make a big difference. Like you said, who knows what happened back then?
I think tree rings are only being discussed because they’ve been so mis-used as temperature proxies. Mann’s and Briffa’s treemometers purport to measure fractions of a degree nearly a thousand years ago. I have a very hard time taking that seriously.
When Mann put together his bogus hockey stick chart, he was attempting to erase the MWP and the LIA. Anyone who thinks that was honest in the least has no moral compass. And of course, falsus in uno, falsus in omnibus. They keep promoting the lie because it has brought them fame, fortune, expense paid holidays in choice locations, pats on the head as their university’s rainmaker, and all kinds of feel-good feedback.
They’re still being dishonest. I have no doubt they know it, too.

Proud Skeptic
Reply to  Kev-in-Uk
January 27, 2016 4:49 pm

Great! There seems to be a growing “consensus” that the data underlying most of this science doesn’t pass the smell test. It’s time to stop arguing around the fringes and go right for the heart of this thing.
1. They can’t measure the Earth’s temperature now.
2. They can’t claim that they can measure the temperature from 75 years ago.
3. They have no way of knowing that the “proxy data” has any meaning at all.
4. Feel free to substitute CO2 in the above. It is the same situation.

Brandon Gates
Reply to  dbstealey
January 27, 2016 1:28 pm

dbstealey,

The idea of a treemometer is faintly ridiculous.

I suppose it is if one insists that plant growth isn’t sensitive to temperature.

Tree rings typically vary from one side of a tree to the other. Which side is the thermometer?

Alcohol and mercury have different thermal expansion coefficients, yet both have been successfully used in thermometers.

Even when comparisons can be made, the correlation is much greater to CO2:

Even when comparisons can be made? You just got done saying that the idea of a treemometer is faintly ridiculous. And you top it off by implying that correlation is causation. You’re slipping, mate.
For sake of argument only, let’s suppose that temperature and CO2 are the only things which affect tree ring width from year to year. We find that CO2 and tree ring width correlate very well, something which is hardly surprising to people whose business it is to study plants. So we model that relationship, which gives us a prediction of tree ring width for the level of CO2 in any given year, and subtract that from the actual observed width in that same year. We regress that residual against temperatures from a more accurate and precise source such as a “real” thermometer and use that correlation to build the final model of tree ring width as a function of CO2 and temperature combined. We’ll almost certainly end up with a residual error for predicted tree ring width in the final model, which is an indicator of uncertainty.
In the real world application, there are significantly more confounding factors to the temperature signal we’re looking for, which means we need to use a multi-variate statistical model to suss it out, of which there are many, and many of which were NOT pioneered by dendroclimatologists. What Schofield et al. are saying is that results are sensitive to which statistical model is used, which is another indication of uncertainty in the results.
None of this is either mysterious or particularly surprising. I agree with others here who have said in effect that it is good and proper science for the dendro community to be self-skeptical of the robustness of its own results as they have been here.
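For what it’s worth, here is a minimal sketch of the two-step procedure described above, purely as an illustration: the numbers are made up, the linear fits are my own assumption, and none of this is anything from Schofield et al.

```python
import numpy as np

# Hypothetical inputs: annual CO2 (ppm), observed temperature (deg C),
# and measured ring widths (mm) over a calibration period.
co2   = np.array([315.0, 320.0, 326.0, 331.0, 338.0, 345.0, 354.0, 360.0])
temp  = np.array([14.1, 14.0, 14.3, 14.2, 14.5, 14.4, 14.7, 14.6])
width = np.array([1.10, 1.08, 1.21, 1.15, 1.30, 1.24, 1.41, 1.35])

# Step 1: model ring width as a (linear) function of CO2 alone.
b1, b0 = np.polyfit(co2, width, 1)
resid  = width - (b0 + b1 * co2)      # width not explained by CO2

# Step 2: regress that residual against observed temperature.
c1, c0 = np.polyfit(temp, resid, 1)

# Combined model: width ~ f(CO2) + g(temp). The leftover scatter is the
# residual error referred to above, a rough indicator of uncertainty.
pred = (b0 + b1 * co2) + (c0 + c1 * temp)
rmse = np.sqrt(np.mean((width - pred) ** 2))
print(f"residual error of combined model: {rmse:.3f} mm")
```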

Reply to  Brandon Gates
January 27, 2016 1:55 pm

Gates,
Those are some of your sillier arguments.

Brandon Gates
Reply to  Brandon Gates
January 27, 2016 2:15 pm

dbstealey,

Those are some of your sillier arguments.

That’s one of your lamest “rebuttals”. But thanks for playing.

Reply to  Brandon Gates
January 27, 2016 2:41 pm

Gates,
I didn’t think it was necessary to explain, but I’m on the side of those who think ‘treemometers’ are about the least accurate temperature instruments in creation (treelines excluded). You’re on the other side of the argument, for the simple reason that you promote the Mannian narrative. I’ll leave it at that, because the winner of this particular argument is obvious to even the most casual observer.

MarkW
Reply to  Brandon Gates
January 27, 2016 2:44 pm

Gates, I realize that you specialize in changing the subject rather than actually arguing for comprehension, but even by your low standards, the above post was pathetic.
Everyone recognizes that temperature affects tree growth. The issue is, as it has always been, the fact that there are many things that affect tree growth and it is impossible to tease temperature alone out of the data.
Your second point about thermal coefficients is totally unresponsive to the question. The question asked about growth rates, not thermal expansion. Such stupidity should be embarrassing, but it’s obvious that your devotion to your religion has rendered you incapable of such human emotions.
Third point: no, he’s not a moron. He’s just commenting that from time to time the gods of randomness line up and tree rings manage to match the temperature record. He said nothing about whether that match up was meaningful or just a temporary correlation.

Proud Skeptic
Reply to  Brandon Gates
January 27, 2016 4:52 pm

That’s a fine case for arguing that tree rings roughly reflect the favorability of growing conditions, which include CO2 and warmth (among others). It is certainly not an argument that you can determine the historic temperature record to a tenth of a degree in a certain location.
It’s ridiculous on the face of it.

Brandon Gates
Reply to  Brandon Gates
January 27, 2016 5:12 pm

dbstealey,

I didn’t think it was necessary to explain …

Your premise that my comments were “silly” implies that I at least require an explanation.

… but I’m on the side of those who think ‘treemometers’ are about the least accurate temperature instruments in creation (treelines excluded).

I don’t categorically disagree with you on that point. They do have the advantage of discrete annual temporal resolution which cannot be said of other proxies, not even ice cores.

You’re on the other side of the argument, for the simple reason that you promote the Mannian narrative.

You’re on the other side of the argument, for the simple reason that you disparage the Mannian narrative. Zero-sum argument.

I’ll leave it at that, because the winner of this particular argument is obvious to even the most casual observer.

The more casual the better, apparently. Try again.

Brandon Gates
Reply to  Brandon Gates
January 27, 2016 5:50 pm

MarkW,

Everyone recognizes that temperature affects tree growth.

Maybe so …

The issue is, as it has always been, the fact that there are many things that affect tree growth and it is impossible to tease temperature alone out of the data.

Read this again: In the real world application, there are significantly more confounding factors to the temperature signal we’re looking for, which means we need to use a multi-variate statistical model to suss it out, of which there are many, and many of which were NOT pioneered by dendroclimatologists.
Difficult does not mean impossible.

Your second point about thermal coefficients is totally unresponsive to the question. The question asked about growth rates, not thermal expansion.

Both thermal expansion coefficients of liquids (and/or metals) and tree-ring width are proxies used for estimating temperature. That’s the commonality. The difference is that an alcohol thermometer enjoys a much higher signal to noise ratio (fewer confounding factors), and is therefore a more precise instrument for measuring temperature. It could very well be that one side of the tree is a better proxy for temperature than the other when ring size is significantly different, but we may not know that from just observing that there’s a size differential from one side of the trunk to the other.
The direct response to Stealey’s question, “which side is the thermometer?”, could conceivably be answered by comparing both sides to the temperature reference and choosing the one which results in the least error. I’m sure there’s quite a bit more to it than that, but I think what I just described is the most obvious step.
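A minimal sketch of that “compare both sides to the reference” step, with made-up numbers; the least-error criterion here is just a correlation comparison, my own assumption rather than anyone’s documented procedure:

```python
import numpy as np

# Hypothetical ring widths from two radii of the same tree, plus the
# instrumental temperature over the same years.
side_a = np.array([1.2, 1.1, 1.4, 1.3, 1.5, 1.4])
side_b = np.array([0.9, 1.0, 1.1, 1.0, 1.3, 1.1])
temp   = np.array([13.8, 13.9, 14.2, 14.1, 14.4, 14.2])

# Compare each side against the reference and keep the better correlate.
r_a = np.corrcoef(side_a, temp)[0, 1]
r_b = np.corrcoef(side_b, temp)[0, 1]
best = "side A" if abs(r_a) >= abs(r_b) else "side B"
print(f"r(A)={r_a:.2f}, r(B)={r_b:.2f} -> calibrate against {best}")
```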

Such stupidity should be embarrassing, but it’s obvious that your devotion to your religion has rendered you incapable of such human emotions.

My daily dose of irony.

Third point: no, he’s not a moron. He’s just commenting that from time to time the gods of randomness line up and tree rings manage to match the temperature record.

Everyone recognizes that temperature affects tree growth.
… or maybe not.

He said nothing about whether that match up was meaningful or just a temporary correlation.

He said: Even when comparisons can be made, the correlation is much greater to CO2
Please don’t move the goalposts and then claim after the fact that I did.

randy
Reply to  Brandon Gates
January 28, 2016 12:16 am

“Difficult does not mean impossible.”
I expect it does in this case, actually, assuming you want accurate results. I have grown trees of many species for decades now. I can list factors independent of temps that will have trees growing rather well in a given year, or others that can stunt them for several years in a row. With no other data besides the tree rings, how in the world could you tell which of many factors led to good or poor growth? Magic? Statistical analysis might make you believe you have good data, but you can’t separate different variables that cause the same things from each other just because you decided your model had merit.
What about the other 6-9 months of the year? In my area it’s 8-9 months in a given year with no tree growth; what then? Spring and summer temps do NOT directly correlate with what fall and winter temps will be. This last point by itself should put the whole field to bed.
We dont have ANY data for 3/4 of the year and a dozen variables can alter the trend for the period we do have data but we totally got accurate results!!! pffft. MAGIC!!!

Reply to  Brandon Gates
January 28, 2016 1:32 am

Gates,
My reply on 1/27 @12:24 above was concise and to the point. I’m not adding anything to it here, but I note that as usual you parse everything interminably.
Give it up. You lost. If you keep arguing it’s just because martyrs will die to be right.

Bob Boder
Reply to  Brandon Gates
January 28, 2016 5:15 am

Brandon
Real world vs BS, you are the great misdirector but you are just a bunch of BS in the end

Reply to  Brandon Gates
January 28, 2016 5:56 am

Gates is at odds with everyone because he’s an oddball. I really don’t understand why he posts his endless, nitpicking, rambling, endless comments. He convinces no one of anything. It just looks like he’s trying to convince himself.

richardscourtney
Reply to  Brandon Gates
January 28, 2016 6:13 am

Brandon Gates:
You wrote saying to dbstealey

Your premise that my comments were “silly” implies that I at least require an explanation.

You had already had several explanations upthread including my own.
I cannot imagine why you claim that dbstealey should add to the explanations of why your daft comments were “silly” when you have ignored all the other explanations that detail why your comments were silly.
Richard

Reply to  richardscourtney
January 28, 2016 6:41 am

Thanks, Richard. But I think we’re dealing with a hopeless case of eco-religious true belief.

Brandon Gates
Reply to  Brandon Gates
January 28, 2016 8:53 am

dbstealey,

Gates is at odds with everyone because he’s an oddball.

Circular, and again so very ironic, anti-consensus boi.

I really don’t understand …

Yes, that is quite evident.

… why he posts his endless, nitpicking, rambling, endless comments. He convinces no one of anything. It just looks like he’s trying to convince himself.

You shouldn’t be so hard on yourself, DB. When you’re done with the playground taunting and have an actual argument to make, do let me know.
richardscourtney,

You had already had several explanations upthread including my own.

I read it. I thought it included a couple of decent points, the best one of which (confounding factors) I explicitly acknowledge in my own posts.
However, along those lines, this was not one of your better statements: And the real situation is worse than the described random case because it is not reasonable to assume that temperature has so large an effect that it overwhelms all the other variables which affect growth of tree rings.
It’s not at all necessary to assume that temperature is the dominant effect on ring width for the proxy to be useful. It would be nice, as such a case would benefit the signal to noise ratio. As it is, the strongest conclusion I would make is that temperature being among the weaker correlates increases uncertainty of the resulting temperature reconstruction. This is essentially what Schofield et al. argue by appealing to experiment instead of my rather limited perspective as a layperson:
They found that competing models fit the Scots Pine data equally well but still led to substantially different predictions of historical temperature due to the differing assumptions underlying each model.
While the periods of relatively warmer and cooler temperatures were robust between models, the magnitude of the resulting temperatures was highly dependent on the model being used.
This suggests that there is less certainty than implied by a reconstruction developed using any one set of assumptions.

THAT is a compelling argument against the expressed certainty of prior dendro studies, and I AM listening to it.
You on the other hand, end on a howler …
Simply, the fact that dendrochronology works is clear evidence that dendrothermology does not work.
… which is, like, the mother of all non sequiturs. It’s logically possible that dendroclimatology/dendrothermology is so uncertain as to be practically useless — here we have a paper making a good case for such a possibility — but your above statement simply doesn’t follow from any of your stated premises. Since your final conclusion is fallacious, your entire argument must be wrong.
See, I can “prove” by non sequitur too. How well do you commend my effort?

Reply to  Brandon Gates
January 28, 2016 12:13 pm

Gates quotes me:
Simply, the fact that dendrochronology works is clear evidence that dendrothermology does not work.
In bold, too. The problem is, I never said that.
Gates also uses this old chestnut to try and keep his treemometer argument on life support:
Alcohol and mercury have different thermal expansion coefficients, yet both have been successfully used in thermometers.
So real thermometers are the same as treemometers? That’s the silliness I was referring to when I got Gates all spun up.

Brandon Gates
Reply to  Brandon Gates
January 28, 2016 12:38 pm

dbstealey,

In bold, too. The problem is, I never said that.

I know, but Richard did, emphasis in the original. In the future, I’ll not double-up replies in a single post so as to avoid your somewhat understandable cornfusion.

Brandon Gates
Reply to  Brandon Gates
January 28, 2016 1:09 pm

dbstealey,

So real thermometers are the same as treemometers?

Don’t be silly.

That’s the silliness I was referring to when I got Gates all spun up.

Well I’m happy that you enjoy giving me a good chuckle. Let’s try word substitution:
dbstealey (original): Tree rings typically vary from one side of a tree to the other. Which side is the thermometer?
dbstealey (modified): Thermal expansion typically varies from one material to the another. Which one is the thermometer?
dbstealey (caricatured): If we evolved from monkeys, why are there still monkeys?
Transparent disingenuity isn’t your best color, I’m afraid.

Robert B
Reply to  Brandon Gates
January 29, 2016 12:26 am

Alcohol and mercury have different thermal expansion coefficients, yet both have been successfully used in thermometers.

My bottle of Courvoisier was a poor thermometer because of other factors.

Brandon Gates
Reply to  Brandon Gates
January 29, 2016 5:09 pm

I’ll drink to that.
[hic]

richardscourtney
Reply to  Brandon Gates
January 31, 2016 5:45 am

Brandon Gates:
Yes, in an above post I explained that

The fact that dendrochronology is demonstrated to work as a local dating method is clear evidence that local variables do provide temporary alterations to growth of tree rings which overwhelm the effect of temperature.
Simply, the fact that dendrochronology works is clear evidence that dendrothermology does not work.

In common with all my posts, I provided those facts and the accompanying explanation in good faith. Your psychological projection is offensive: your posts not being in good faith does NOT mean others behave as you do.
And my explanation was clear. Its conclusion is the ONLY possible logical deduction.
A statement is NOT a non sequitur whenever you cannot copy kit from SkS.
Richard

Brandon Gates
Reply to  Brandon Gates
February 1, 2016 1:42 pm

richardscourtney,

And my explanation was clear. Its conclusion is the ONLY possible logical deduction.

No, it isn’t the only possible logical deduction, but it is a logical possibility. Not one to be discounted, to be sure; however, multivariate statistical inference does not strictly require a signal to be the most powerful for it to be predictive. It should go without saying that the stronger a signal is relative to others, the less uncertain its predictive ability. It IS necessary to be able to reasonably control for confounding factors in any multivariate analysis, especially when the desired variable is among the weaker signals.
Your best avenue of attack here is on how well those confounding factors have been controlled for, not that they exist.

A statement is NOT a non sequitur whenever you cannot copy kit from SkS.

Red herrings and ad hominem arguments are another form of non sequitur fallacy. If you want to make this about people, not arguments, it behoves you to not establish such patterns.

Tom in Florida
January 27, 2016 1:36 pm

If I recall correctly, and I may not as I am getting older, wasn’t it Briffa who originally wasn’t comfortable with the way they were using his data, and said as much? Wasn’t he ignored by Trenberth and Mann so they could go about with their AGW “proofs”?

MarkW
Reply to  Tom in Florida
January 27, 2016 2:45 pm

That’s my recollection as well.

Chris Hanley
January 27, 2016 1:36 pm

Paleoclimate temperature proxy screening for thermometer correlation (such as it is) should be done before deciding to use it, not after.
If, say, 90% of many random samples of a likely proxy show correlation, then the proxy may be valid, maybe.
Temperature proxy reconstructions purporting to show greater precision (not accuracy) than Lamb are false precision IMHO.

Chris Hanley
Reply to  Chris Hanley
January 27, 2016 1:56 pm

Forgive the mangled syntax.

Brandon Gates
Reply to  Chris Hanley
January 27, 2016 2:13 pm

Chris Hanley,

Paleoclimate temperature proxy screening for thermometer correlation (such as it is) should be done before deciding to use it, not after.

I agree with that.

Temperature proxy reconstructions purporting to show greater precision (not accuracy) than Lamb is false precision IMHO.

I don’t agree with that, and I don’t think you would either if you consistently applied that rule to all physical sciences. Surely you would not argue that we should only accept the precision AND accuracy for the gravitational constant implied by Cavendish’s experiments, with an apparatus that is relatively crude compared to modern estimates using other more precise and accurate methods and equipment? Would you prefer the Copernicus model for planetary motion to Kepler? For high gravity/high velocity orbital situations, would you prefer Newton’s laws of motion to Einstein’s general relativity?

Forgive the mangled syntax.

Was readable for me. Forgive me if I have misunderstood.

ferd berple
Reply to  Brandon Gates
January 27, 2016 2:54 pm

Precision: When it predicts yes, how often is it correct?

Bill Partin
Reply to  Brandon Gates
January 28, 2016 2:39 am

Gates has admitted upthread that he comments here just to create havoc and disrupt the discussion. Why don’t you all just ignore him?

Reply to  Brandon Gates
January 28, 2016 5:10 am

I’ve heard this argument about the Newtonian v Relativity before and it is starting to annoy me.
Firstly, Newtonian gravitation works beautifully, specifically because it is wrong! And Newton himself expressed concern about the problem of gravity acting instantaneously across infinite distance. Relativity of course, requires gravity to propagate no faster than light.
Even for orbital calculations involving the Earth and Sun this issue is real and intractable. The Sun is 8 minutes away at the speed of gravity but Newtonian orbital calculations that do take the lag into account decay and become unstable. The Einsteinian calculations involving the warping of space/time are complex but work out to a reasonably close approximation of the answer obtained by assuming instantaneous action at a distance. And it is not because it is a small error, it is because large factors almost cancel out in the relativity calculations.
The point is that Newtonian gravitation is a better tool for calculating the real than it is for modelling it. And this distinction, fine though it may be, is very important!

Brandon Gates
Reply to  Brandon Gates
January 28, 2016 12:09 pm

Bill Partin,

Gates has admitted upthread that he comments here just to create havoc and disrupt the discussion.

That’s some pretty creative reading. You wouldn’t perchance be willing to provide an actual quote?

Why don’t you all just ignore him?

Depends on whom you ask; the prevailing theme is that my “disinformation” needs to be rebutted so that nobody is “fooled” by it. Sort of an insult to other people’s intelligence if you ask me, but then again I’m not exactly above having a similar attitude. In fact, I rather expect Stealey to wander by at any moment and tell me I’m projecting again. It’s so funny when he does that.

Reply to  Brandon Gates
January 28, 2016 12:19 pm

You’re projecting again. ¯\_(ツ)_/¯

Brandon Gates
Reply to  Brandon Gates
January 28, 2016 12:10 pm

Scott Wilmot Bennett,

I’ve heard this argument about the Newtonian v Relativity before and it is starting to annoy me.

Par for this course. Having used that analogy recently on a different thread I thought to leave it out and let the other ones stand, but it’s a personal fav.

Firstly, Newtonian gravitation works beautifully, specifically because it is wrong!

Imagine Michael Mann saying that about treemometers.

And Newton himself expressed concern about the problem of gravity acting instantaneously across infinite distance. Relativity of course, requires gravity to propagate no faster than light.

Newton was highly intelligent. Had he and Einstein come in reverse historical order, I don’t have much doubt each would have filled the other’s shoes quite admirably.

Even for orbital calculations involving the Earth and Sun this issue is real and intractable.

If you say so. Every analogy does have its breaking point. You’ve quite whistled past it already, but are now entering territory which suggests even fundamentally broken models can still be useful. Not really sure you want to go there.

The Sun is 8 minutes away at the speed of gravity but Newtonian orbital calculations that do take the lag into account decay and become unstable. The Einsteinian calculations involving the warping of space/time are complex but work out to a reasonably close approximation of the answer obtained by assuming instantaneous action at a distance. And it is not because it is a small error, it is because large factors almost cancel out in the relativity calculations.

At risk of going completely OT, this actually interests me and I don’t mind exploring it since it’s quite likely I don’t properly understand some key limitation of general relativity. It would help me to start very simply, so: suppose a perfectly spherical cow with the mass of the Sun, homogeneous density, no rotation about any axis, and a NASA weather satellite on a non-intersecting trajectory. Gravity is the only relevant force between these two bodies.
I say Newton fails more as the sat gets deeper into the gravity well because his formulae don’t properly account for space/time distortion. I also say that Einstein doesn’t fail because the scenario I propose implies a constant gravitational field, and we therefore don’t have to account for any lag time in the propagation of gravity.
I also don’t think a case where we do have to account for lag time — say, a gravity “slingshot” scenario involving a massive central object, much smaller planets and a teeny space probe — necessarily confounds general relativity either, but Newton would be in even worse shape than in the first scenario.
Agree/disagree with either/both/none?

The point is that Newtonian gravitation is a better tool for calculating the real than it is for modelling it. And this distinction, fine though it may be, is very important!

My analogy is not meant to knock Newton. Einstein did once allegedly sort of say that all models should be as simple as possible, but no simpler. When the simpler model works better for a particular application, I don’t think Einstein would have a problem with it. My understanding is that NASA uses Newton for LEO, Einstein for interplanetary.
There might be a climate modelling moral to that argument as well, but I could also be straining analogies past breaking.

Editor
January 27, 2016 2:07 pm

The interesting statement about this paper is that it freely (and correctly) admits that a best estimate of historical temperature has an (almost) 4°C spread — a FOUR degree Celsius range — useless for comparing modern temps to past temps, as it is three times the size of the total estimated difference between the coolest temps of the Little Ice Age and the warmest temps claimed for 2015.

Matt Skaggs
January 27, 2016 2:49 pm

Lots of folks here are making the general claim that tree rings are not good proxies for temperature. Here is one inescapable conclusion: these folks have never looked at any of the better examples of trees tracking temperature, and most likely have never looked at any of the pertinent data at all.

RACookPE1978
Editor
Reply to  Matt Skaggs
January 27, 2016 3:01 pm

Matt Skaggs

Here is one inescapable conclusion: these folks have never looked at any of the better examples of trees tracking temperature, and most likely have never looked at any of the pertinent data at all.

An accusation. Not accurate nor complete, but it is an accusation.
How ’bout you show the ONE Yamal tree core sample chosen from that entire forested peninsula, and duplicate the analysis for us in toto to show us exactly how that one tree ring can be considered accurate and complete.
Duplication, after all, is the key to “scientific” thought and proof. See:
Here is one inescapable conclusion: EVERY ONE of these folks has looked in detail at ALL the temperature proxies worldwide from the year 900 through 2100, and has found NO correlation nor corroboration with ANY tree ring thermometers at all.

randy
Reply to  Matt Skaggs
January 27, 2016 4:17 pm

I would also say something similar: those who think they are accurate thermometers probably never grew that many trees. Certainly not in a marginal area. Where I live, an early frost will stunt growth overall for the year regardless of overall temps. Growth will actually be hindered the next year as well. It could be a wet year overall, but without good water in early spring your trees won’t grow well that year. Did a population of nitrogen-fixing plants move into the area for part of the tree’s life? What about soil biota? I started playing with this variable recently in my work; it can make a huge difference. Even if we have trees in line with temps over a given area, this simply tells me no other factors overrode temps in that particular area and period. How can I say that with assurance? Reread the above. I can list a whole range of other variables that would also greatly affect growth while having zero to do with overall temps. Surely you don’t suggest they can tell which variables caused good or poor growth years based on a single variable, tree ring width. It isn’t magic; it is statistical analysis.

Robert B
Reply to  randy
January 29, 2016 12:33 am

Most of the folks here appreciate just how much the growth of a tree is dependent on other things apart from temperature, but thanks, to you and a few others, for the extra knowledge on how bad a proxy it is.

Marcus
January 27, 2016 3:25 pm

But wait !! The Earth is flat !! sarc…It is truly sad how many people listen to this fool !
http://www.theguardian.com/music/2016/jan/26/flat-earth-rapper-bob-neil-degrasse-tyson-diss-track
Now we know where Obama voters come from !! Sigh !

John Whitman
January 27, 2016 3:56 pm

Naomi Oreskes (currently of Harvard and previously of UCSD) must be having a bad karma month. She stridently preaches for climate scientists to be much less conservative and less restrained in their study findings; she wants aggressive claims of significant and harmful global warming from fossil fuels. She wants them to be a lot more bold and risk-taking in their words and acts about how it is shockingly worse than we previously thought and concluded. She wants a ramping up of on-the-edge activism by previously conservative scientists who found some global warming from fossil fuels. But recently a few scientists have been doing the opposite of what she preaches.
A few days ago we saw Mann climb down to a very slightly more conservative approach to calculating probabilities of how much warming from anthropogenic fossil fuel burning there has been. He found less probability than the media reported last year. That was very un-Oreskes-like behavior for Mann.
Now we have Briffa and his associates finding significant, inherent, previously unreported statistical uncertainties in the models used to produce paleoclimate studies. Their findings carry the significant implication that the GCMs endorsed by the IPCC suffer from the same problem. That was very un-Oreskes-like.
Remember, December 2015 had some bad karma for Oreskes. Publicly, Oreskes was frantically ranting against Hansen because he was endorsing plentiful energy via nuclear, when Oreskes wanted bold and non-conservative advocacy for a society with green renewables to replace existing fossil sources, which really meant she wanted a lot less energy available, for the planet’s sake. Hansen was very un-Oreskes-like.
Poor Oreskes, it seems like a bad karma domino effect is possibly in progress. She really gives the perception of being a frantically irrational inciter of invalid thinking on the science focused on climate.
John

John Robertson
January 27, 2016 4:06 pm

The long Glibbering Climb Down continues.
CAGW/CC/GCD.
Oh sorry, our scientists misled us.

January 27, 2016 4:10 pm

It used to be chicken or sheep entrails, now it is Dendrochronology

GregK
Reply to  ntesdorf
January 28, 2016 12:59 am

Dendrochronology is OK.
It tells you when.
But it doesn’t tell you hot, cold or wet

Gamecock
January 27, 2016 4:33 pm

‘Most instrumentally recorded climate data are only available for the past 200 years’
There is no such thing as “climate data.” We have weather data.
The growing season in Northern Sweden is very short. Whatever tree rings tell you, it refers to only maybe a quarter of each year.
Scots pines live 500+ years ?!?!

wayne Job
January 27, 2016 11:52 pm

Being somewhat old as I am, and having planted many trees over my lifetime, I have news for these tree researchers. It has been my experience that trees have good and bad years totally independent of temperature; it is the rainfall they get: good rain, big ring. Idiots all.

Steve McIntyre
January 28, 2016 9:01 am

Having spent a lot of time on these issues, in my opinion, this article totally failed to discuss the most interesting statistical issues in developing tree ring chronologies from measurement data – though not for the reasons that concern commenters above.
Andrew Gelman, one of the coauthors, is a very competent and serious statistician, and it is extremely disappointing that he did not recognize that the development of tree ring chronologies is related to mixed effects/random effects – a point on which I’ve commented on many occasions at Climate Audit. Random effects are a technique well known to the wider statistical community and to Gelman himself, and it would have been extremely interesting to see an insightful commentary.
Had Briffa and the dendro coauthors understood the problem, Gelman could undoubtedly have made a more insightful contribution. Definitely an opportunity missed.
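For readers unfamiliar with the terminology, here is a rough sketch of what a mixed-effects formulation of a chronology might look like; the column names, the file, and the simple age term are my own assumptions, and this is not the method of the paper or of Climate Audit:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per tree per year, with the
# ring-width index, the tree's cambial age that year, and the year itself.
df = pd.read_csv("ring_widths.csv")   # columns: tree_id, year, age, rwi

# Fixed effects for age (the growth curve) and year (the common signal a
# chronology targets), with a random intercept per tree.
model  = smf.mixedlm("rwi ~ age + C(year)", df, groups=df["tree_id"])
result = model.fit()
print(result.summary())
```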

Steve McIntyre
January 28, 2016 9:09 am

My above comment relates to chronologies. The main interest in the article is the connection between a chronology and a temperature reconstruction, where uncertainties have traditionally been wildly underestimated by the dendro community, who have typically used standard errors in an overfitted calibration period to estimate uncertainty. Schofield et al sensibly point to the very large differences between equally plausible reconstructions as representative of the true uncertainty. Unsurprisingly, this sort of point has long been made at Climate Audit and it’s gratifying to see its belated recognition by statistical specialists.

basicstats
January 29, 2016 12:24 pm

Liked the thinking in this paper until reaching equation (3). Climate variable apparently assumed independent normal rv’s. Having made the good point that techniques being used do not take adequate account of climate variation, not a terribly good idea then to use a very poor model for climate time series. At the very least, some autocorrelation, surely?
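To make the suggestion concrete, an AR(1) alternative to independent normals for the climate series would be something like (my notation, not the paper’s):

$$ T_t = \mu + \phi\,(T_{t-1} - \mu) + \varepsilon_t, \qquad \varepsilon_t \sim \mathrm{N}(0, \sigma^2), \qquad |\phi| < 1, $$

which collapses to the independent-normal case only when $\phi = 0$.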

Robert
January 29, 2016 5:15 pm

Come on now! It is better than reading tea leaves. …… I mean isn’t it ?

Brandon Gates
February 10, 2016 6:14 pm

Chic Bowdrie,
I’m tired of scrolling to find the reply button, so this response is posted out of sequence.

Net LW loss, my bad. The incoming SW balances with a net gain.

Yes. I thought we’d already agreed on that so your comment confused me. No worries, I confuse LW and SW at times myself.

IR active gases absorb the radiation, but cannot reduce the rate of loss. How could they?

Thermalization to a LW non-emitter like N2 or O2. In the 1D model, when collisions cause a photon to get burped back out of a CO2 (or H2O, CH4 or other “GHG”) molecule, it has a 50/50 chance of going up or down. Both mechanisms have the effect of reducing radiative loss to space. Think of it this way: what if the entire atmosphere were a “window” like we see in the 8-9 and 10-13 micron bands?
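A toy illustration of that 50/50 layer picture; nothing more than a random-walk sketch with made-up layer counts, not a radiative transfer model:

```python
import random

def direct_escape_fraction(n_layers, n_photons=100_000):
    """Fraction of surface-emitted photons that reach space before being
    sent back to the surface, when each of n_layers absorbing layers
    re-emits up or down with probability 1/2 (a gambler's-ruin walk)."""
    escaped = 0
    for _ in range(n_photons):
        level = 1                       # photon absorbed by the first layer
        while 0 < level <= n_layers:
            level += 1 if random.random() < 0.5 else -1
        if level > n_layers:            # made it past the top layer: space
            escaped += 1
    return escaped / n_photons

# More absorbing layers -> a smaller share of each surface emission escapes
# directly, which is the sense in which radiative loss to space is reduced.
for n in (1, 2, 4, 8):
    print(f"{n} layer(s): ~{direct_escape_fraction(n):.2f} escapes directly")
```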

Assuming no change in incoming solar SW, there still must be same rate of LW emitted to space. Think about it. Take away the whole atmosphere. As long as the incoming radiation doesn’t change, the outgoing won’t change.

So funny, you just asked me to think about the same thing I asked you to think about. I have thought about it, and the rough estimate goes like this. With no atmosphere, the full 341 W/m^2 at TOA in the K&T cartoon would hit the surface, 30% would be reflected away due to albedo, leaving 238.7 W/m^2 absorbed SW. It follows then that at (pseudo-)equilibrium, outgoing LW would be the same at the surface. Plugging into Stefan–Boltzmann:

(341.0 W/m^2 * (1-0.3) / 5.670367E-08 W/m^2 K^4)^(1/4) = 254.7 K
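The same back-of-the-envelope number in runnable form, using only the figures quoted above:

```python
SOLAR_IN = 341.0         # W/m^2, top-of-atmosphere average from the K&T cartoon
ALBEDO   = 0.3           # fraction reflected
SIGMA    = 5.670367e-08  # Stefan-Boltzmann constant, W m^-2 K^-4

absorbed = SOLAR_IN * (1 - ALBEDO)       # ~238.7 W/m^2 absorbed SW
t_eff    = (absorbed / SIGMA) ** 0.25    # effective emission temperature
print(f"{absorbed:.1f} W/m^2 -> {t_eff:.1f} K")   # ~254.7 K
```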

Only when there is a net incoming imbalance. And it’s not yet proven that IR gases cause most, if any, imbalance.

Yes I get it that none of the evidence I’ve presented — or that you’ve found on your own — meets your personal standard of “proof” ….

It could be all solar insolation.

… so, ok, “prove” it to me like you want me to substantiate my own case for you.

The presence of an atmosphere and its composition does warm the surface to some degree compared to no atmosphere, but the equilibrium rate of energy loss is solely dependent on the incoming solar.

Yes, at TOA and at steady state equilibrium, outgoing LW should exactly match incoming SW. Nobody whom I think properly understands the Official IPCC (TM) definition of radiative forcing disputes this.

The 0.6 is highly speculative.

I tire of qualifying adjectives. Its stated uncertainty is +/- 0.4 W/m^2.

If you assume the sea surface temperature trend to be twice the vertical average ocean temperature trend which is more realistic and estimate the surface warming trend to something closer to a satellite warming trend which could be ¼ as much, you get both sea and land increasing at same rate.

Show your work?

“Minima and maxima are both increasing. It’s difficult to imagine the mean not going up as well.”
That’s where your diurnal bias comes in. The rest of the day temperatures don’t necessarily rise just because the low does.

Read what I wrote again — they’re BOTH increasing. According to theory, GHG warming should cause minima to rise faster than maxima, so if I take a “mean” from the min/max, I may very well be introducing a cooling, not warming, bias.

I say that a high/low average is usually biased towards warming. You ask why. I haven’t figured that out yet. Maybe you can after seeing some data and answering the question “Since automated sensors report information every hour, how does a simple average of these values compare with the average of the high and low daily temperature?” asked here: http://mesonet.agron.iastate.edu/onsite/features/cat.php?day=2009-07-31

Here’s the plot:
http://mesonet.agron.iastate.edu/onsite/features/2009/07/090731.gif
Isn’t that interesting. I wouldn’t be able to give an answer just staring at the plot. I’m pretty sure someone like Gavin Schmidt would say something like, “yes, that’s why we create monthly climatologies over a 30 year baseline for computing anomalies”.
I do know where to look to test some stuff: http://www.esrl.noaa.gov/gmd/grad/surfrad/
Sub-hourly data for all kinds of fun things, including up/down SW and LW flux, relative humidity, wind speed, barometric pressure, etc., in addition to air temperature. I already have data for 4 stations through most of 2015. You’ve just given me the perfect excuse to dust it off and do some “experiments” I hadn’t already thought of, for which you have my most sincere thanks.
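A minimal sketch of the hourly-mean versus min/max-mean comparison being discussed, with made-up hourly readings standing in for the Mesonet or SURFRAD data linked above:

```python
import numpy as np

# Hypothetical hourly temperatures (deg C) for one day, 24 readings.
hourly = np.array([
    10.1, 9.8, 9.5, 9.3, 9.2, 9.4, 10.0, 11.2, 12.8, 14.3, 15.6, 16.5,
    17.1, 17.4, 17.3, 16.8, 15.9, 14.7, 13.6, 12.8, 12.1, 11.5, 11.0, 10.5
])

hourly_mean = hourly.mean()                        # average of all readings
minmax_mean = (hourly.max() + hourly.min()) / 2.0  # what a min/max station gives
print(f"hourly mean  = {hourly_mean:.2f} C")
print(f"min/max mean = {minmax_mean:.2f} C")
print(f"difference   = {minmax_mean - hourly_mean:+.2f} C")
```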

The main point here is that the more data points the better. Of course infinitely many points is only a theoretical limit, the integral of the continuous change in temperature over a 24 hour period.

No argument, Chic, but I protest that your arguments are getting somewhat lopsided again: the satellites don’t even give us a min/max for any given location. At best you get several scans per day over roughly the same regions, especially near the poles but not so much in the tropics. Don’t get me wrong: I think it’s important data to be gathering — but the Gold Standard of temperature estimates? No, I don’t think that’s obviously defensible.

I’m losing patience over the precision vs. accuracy argument.

Me too, and I think it’s because we’re talking past each other.

I thought I already explained error propagation shows that you can bias a trend up while maintaining the same std error of the trend. Said another way, precision determines the std error of the trend, but says nothing about the accuracy of trend. You can get very tight data obtained inaccurately. I don’t know how to make that clearer.

I completely agree with that and was already there.

Of course, if a calibration doesn’t drift, then there’s no drift. It’s the calibration that does drift, or any other drift, that adds bias over time.

Again, I agree. My argument is: IF calibration doesn’t drift, and all we care about is the trend, precision matters more. I’m not disagreeing with you about all else, just making a … well, possibly overly-pedantic … additional point.

UHI is a gradual bias over time. It makes a trend go up with time. It doesn’t matter if it’s the coldest spot on Earth or the warmest, it makes a trend steeper than it would otherwise be.

Exactly. Like I said, the absolute temperature does not matter when what we care most about is the trend — or in this case, non-macro-climatic biases to the trend.

Using anomalies doesn’t justify ignoring drifts that affect trends.

NO kidding! I said nothing about biases not affecting trends, nor anything about ALL biases getting ironed out automagically by anomaly calculations and/or pairwise homogenization algorithms.

Why would you even consider doing a retrospective study?

Well … we already do. I wasn’t talking about that, because I read wrongly …

I was proposing well-controlled future experiments that would determine how accurate and precise surface measurements are relative to satellite measurements.

… your actual argument, for which I apologize. So as far as surface station data go, USCRN is one such effort. I’m sure there are others I don’t have at my fingertips. A Dutch/German scientist of my (blogging circuit) acquaintance — whose name it is somewhat verboten to mention here — blogs about such experiments, as well as surface station homogenization in general. Makes for interesting reading if you’re so inclined.

That provides ability to judge which data sets, if either, are best. Meanwhile, one remains confirmationally biased towards one or the other.

Unfortunately, I don’t share your optimism on that point for the general case, but might very well be true of you. “Why” I don’t think so in general is a short essay (which for me means half a book), and this post is already way too long (even after editing out over 3/4 what I originally wrote). Cheers.

Chic Bowdrie
Reply to  Brandon Gates
February 10, 2016 11:19 pm

Brandon,
Good idea to move the thread down.
I want to summarize the state of our arguments with a view to winding them down. The first half is dealing with understanding the basic atmospheric physics. It has been helpful to me to review for myself and learn where you are coming from. I’d like to continue with that a bit more, although it is off topic for this thread.
The other discussion we’re having concerns the OP on uncertainty which we’ve managed to segue into whether or not surface measurements are more accurate than satellite measurements. Then we got off on the precision vs. accuracy argument. I’m done with the latter, but I’m interested in finishing the other tangent, our discussion of bias.

Thermalization to a LW non-emitter like N2 or O2. In the 1D model, when collisions cause a photon to get burped back out of a CO2 (or H2O, CH4 or other “GHG”) it has a 50/50 chance of going up or down. Both mechanism have the effect of reducing radiative loss to space.

I think we both understand the absorption emission process. But here you’re missing the big picture. Neither mechanism reduces radiative loss to space. Incoming equals outgoing. Either mechanism warms the surface by providing an insulation or resistance compared to no atmosphere. The question remains whether increasing CO2 has any more effect than it already has.

Its stated uncertainty is +/- 0.4 W/m^2.

Assuming that’s a std dev, it is a coefficient of variation of 67%. That’s a lot of uncertainty. But considering how relatively few measurements of ocean temperatures are made, I wonder how it comes out that low.

Show your work?

0.013/4 = 0.00325 K/yr estimate for the satellite lower troposphere warming trend.
0.0017 * 2 = 0.0034 K/yr estimate for the sea surface warming trend.
0.00325/0.0034 ≈ 1.0, meaning the sea surface and atmosphere could be in near equilibrium, warming at the same rate.

Read what I wrote again — they’re BOTH increasing. According to theory, GHG warming should cause minima to rise faster than maxima, so if I take a “mean” from the min/max, I may very well be introducing a cooling, not warming, bias.

If both are increasing, you are right. For now, don’t conflate the reason for the difference with the measurements detecting it. That way we are looking at trends to see what they say. IOW, if the high and low are both truly increasing and the low is truly increasing faster, that is not a bias being introduced by the measurements.

No argument, Chic, but I protest that your arguments are getting somewhat lopsided again: the satellites don’t even give us a min/max for any given location.

As I said, I’m not yet knowledgeable enough to assert that sats are better in spite of your kind efforts to educate me. I’m just trying to sort out how to judge and for that I thank you.

My argument is: IF calibration doesn’t drift, and all we care about is the trend, precision matters more.

At the risk of being equally pedantic and unnecessarily furthering this argument, if we knew there were no drifts, there would be nothing to argue about.

UHI is a gradual bias over time. It makes a trend go up with time. It doesn’t matter if it’s the coldest spot on Earth or the warmest, it makes a trend steeper than it would otherwise be.

Exactly. Like I said, the absolute temperature does not matter when what we care most about is the trend — or in this case, non-macro-climatic biases to the trend.

Something bothers me about that, but let me end with this:
Surface measurements use high-low averages. High-low averages generally result in warmer than true daily temperatures. If the bias introduced by these high-low averages increases with time, then the resulting trends calculated from those daily averages are warmly biased. Satellite methodology may or may not introduce bias into daily temperature readings. If satellite daily temperatures were not biased, then trends calculated from those satellite measurements would be accurate and unbiased. Surface measurement trends greater than satellite measurement trends may or may not be correct depending on whether surface high-low average temperature bias increases with time and whether satellite measurements are equally biased.