Quite a performance yesterday. Steve Milloy is calling it the “Zapruder film,” implying it was the day the AGW agenda got shot down. While that might not be the best choice of words, you have to admit they did a fantastic job of shooting down some of the ridiculous claims made by the panelists who testified before them. This may not be a Zapruder moment, but I’d say it represented a major turning point.
Give props to both Roger and Roy.
Marc Morano reported:
‘Senate global warming hearing backfires on Democrats’ — Boxer’s Own Experts Contradict Obama! — ‘Skeptics & Roger Pielke Jr. totally dismantled warmism (scientifically, economically, rhetorically) — Climate Depot Round Up
‘Sen. Boxer’s Own Experts Contradict Obama on Climate Change’ — Warmists Asked: ‘Can any witnesses say they agree with Obama’s statement that warming has accelerated during the past 10 years?’ For several seconds, nobody said a word. Sitting just a few rows behind the expert witnesses, I thought I might have heard a few crickets chirping’
Video link and links to PDF of testimonies follow.
Here is the video link, in full HD:
http://www.senate.gov/isvp/?type=live&comm=epw&filename=epw071813
Dr. Spencer writes about his experience here and flips the title back at them:
The PDFs of each person’s testimony can be accessed by clicking on their names below:
Panel 1
Dr. Heidi Cullen, Chief Climatologist, Climate Central
Mr. Frank Nutter, President, Reinsurance Association of America
Mr. KC Golden, Policy Director, Climate Solutions
Ms. Diana Furchtgott-Roth, Senior Fellow, Manhattan Institute for Policy Research
Dr. Robert P. Murphy, Senior Economist, Institute for Energy Research
Panel 2
Dr. Jennifer Francis, Research Professor, Institute of Marine and Coastal Sciences, Rutgers University
Dr. Scott Doney, Director, Ocean and Climate Change Institute, Woods Hole Oceanographic Institution
Dr. Margaret Leinen, Executive Director, Harbor Branch Oceanographic Institute, Florida Atlantic University
Dr. Roger Pielke, Jr., Professor, Center for Science and Technology Policy Research, University of Colorado
Dr. Roy Spencer, Principal Research Scientist IV, University of Alabama, Huntsville

Leif writes “then goes down again by the same amount, then goes up, then down, up down, up, down”
The same amount? Really? You’re making all sorts of assumptions to support your own beliefs, Leif, because there is no solid data to support you. Ionisation as a proxy for UV is a joke. You started by underplaying the amount of variation, and now you make unsupportable statements because you believe the sun itself essentially plays no part in our climate. You might be right, but then again you’re clearly biased, so your opinion is worth much less on this.
Leif writes “Variations of 1/1000 does translate to a variation of temperature of 1/4000 which out of 288K is 0.07 deg C”
Talk in W/m2 Leif. There is no point in talking temperature because we simply don’t know what the feedbacks will be.
TimTheToolMan says:
July 22, 2013 at 7:24 pm
The same amount? Really?
Yes, really.
Ionisation as a proxy for UV is a joke.
UV creates and maintains the ionosphere. This is a well-understood subject [has been understood for more than 100 years].
you believe the sun itself essentially plays no part in our climate.
The sun creates a solar cycle effect of somewhat less than 0.1C.
TimTheToolMan says:
July 22, 2013 at 7:28 pm
“Variations of 1/1000 does translate to a variation of temperature of 1/4000 which out of 288K is 0.07 deg C”
Talk in W/m2 Leif.
Here is how to translate that number to W/m2: multiply by 1361 W/m2.
There is no point in talking temperature because we simply don’t know what the feedbacks will be.
The theoretically expected temperature change matches what most researchers indeed find for the solar cycle effect within about a factor of 2. Two times tiny is still tiny.
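For anyone checking the arithmetic, here is a minimal sketch of where the 1/4000 and 0.07 deg C figures come from, assuming the standard zero-feedback Stefan–Boltzmann scaling (this worked example is added for clarity; it is not part of Leif’s comment):

```latex
% Zero-feedback blackbody scaling: T is proportional to S^{1/4}, so a fractional
% change in TSI maps to one quarter of that fractional change in temperature.
\[
  \frac{\Delta T}{T} \approx \frac{1}{4}\,\frac{\Delta S}{S}
  \;\Longrightarrow\;
  \Delta T \approx \frac{288\ \mathrm{K}}{4000} \approx 0.07\ \mathrm{K},
  \qquad
  \Delta S \approx \frac{1361\ \mathrm{W/m^2}}{1000} \approx 1.4\ \mathrm{W/m^2}.
\]
```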
Leif writes “UV creates and maintains the ionosphere. This is a well-understood subject [has been understood for more than 100 years].”
And yet it’s only very recently known that UV varies as much as it does, and only due to careful, precise measurements by a purpose-built satellite. So the ionosphere is a poor proxy for UV.
And he then goes on to say “Here is how to translate that number to W/m2: multiply by 1361 W/m2.”
Dealing with theoretically derived values is a poor way to look at the data. Deal with the measured values.
TimTheToolMan says:
July 22, 2013 at 10:45 pm
And yet it’s only very recently known that UV varies as much as it does, and only due to careful, precise measurements by a purpose-built satellite.
No, it is not ‘known’. There is strong doubt about the calibration of the UV measurements. Different satellites give inconsistent results.
So the ionosphere is a poor proxy for UV.
The near-UV [where most of the energy is] is well determined and there is no uncertainty in the role of UV in the ionosphere.
Dealing with theoretically derived values is a poor way to look at the data. Deal with the measured values.
The 1361 W/m2 is very well measured by SORCE.
I am not a credible judge of which opinion is correct. If anyone feels that they are, please educate me.
Sure. Mean and standard deviation apply to ensembles of “independent and identically distributed” (iid) samples. The meaning of both is derived from the Central Limit Theorem, which states basically that the distribution of sample means of iid sample sets will tend towards a normal (the so-called “bell curve”) with a variance or standard deviation that systematically scales with the number of samples, so that more samples narrows the width of the normal distribution, resulting in the sample mean becoming a systematically better estimator of the true mean. One can compute the probability of the true mean lying within some distance of the sample mean by using the integral of the probability distribution function (the cumulative distribution function or CDF of the normal), a.k.a. the error function (so named because it is used almost universally to compute probable error).
Note well the axiomatic conditions for mean and standard deviation to have meaning: The samples have to be drawn from the same distribution — one doesn’t do well predicting the outcome of rolling dice by mixing data from six-sided dice with two-sided coins or a Gaussian process. That’s the “identically distributed” part. Second, the samples have to be independent. That has a very specific meaning in statistics — it means that they have to be generated by what amounts to a random process. If there is correlation — the opposite of independence — in the process that generates the samples, then two kinds of bias creep in. The first is that the expected scaling of the standard deviation will be incorrect. Suppose you simply double count each sample (that is, you introduce pairs of completely correlated samples) — you will think you have 100 iid samples when you really have only 50, so when you compute the standard error of the sample mean you will underestimate it by a factor of 1.4 (the square root of two). Because of the scaling of the error function, this will make an enormous difference in your estimates of probable error in the sample mean. Many processes — Markov chains in particular — tend to produce samples with a rather huge amount of correlation.
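Here is a minimal numerical sketch of that double-counting example (my own illustration, not rgb’s code): duplicating every sample leaves the spread of the data essentially unchanged, but the naive standard error of the mean shrinks by roughly the square root of two because the sample count is overstated.

```python
# Minimal sketch of the double-counting example: 50 genuinely iid samples
# versus the same samples counted twice.  The naive standard error s/sqrt(n)
# is underestimated by ~sqrt(2) for the doubled set because n is inflated.
import numpy as np

rng = np.random.default_rng(0)
true_samples = rng.normal(loc=0.0, scale=1.0, size=50)  # 50 iid samples
doubled = np.repeat(true_samples, 2)                    # 100 "samples", pairwise correlated

def naive_standard_error(x):
    """Standard error of the mean computed as if the samples were iid."""
    return x.std(ddof=1) / np.sqrt(len(x))

se_true = naive_standard_error(true_samples)
se_doubled = naive_standard_error(doubled)
print("SE from 50 real samples:      %.4f" % se_true)
print("SE from 100 doubled samples:  %.4f" % se_doubled)
print("ratio (expect ~1.4):          %.2f" % (se_true / se_doubled))
```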
This makes it rather difficult to come up with a valid estimate of error for a SINGLE GCM — they accomplish it by running a Monte Carlo random variation of the inputs and presume (probably incorrectly) that there is an ergodicity theorem at work similar to the ones one can prove in ordinary statistical mechanics. I say probably incorrectly because the underlying models are highly nonlinear and almost certainly exhibit serious broken ergodicity, so they are basically studying one highly linearized branch, setting themselves up for Black Swans where the entire system spontaneously reorganizes into a new mode that is not in any sense a linearization or perturbation of the one they analyze with MC. But at least this is a nominally valid use of statistics (one that should be applied with extreme caution as far as expected error is concerned).
The second problem is even more pernicious. If the model distribution is not the same as the distribution of the physical system being modeled, then even a fully converged, non-internally-correlated solution will not converge to the correct value. And of course it is not possible a priori to know that a model is correct. Hence one formulates the null hypothesis — if one has the perfectly correct model, runs an ensemble of samples from it, and gets a mean and standard deviation, what is the probability of getting the actual result observed in the physical system? If this probability is very low, one usually rejects the null hypothesis that the model is correct; one doesn’t assert that reality is doing something very unusual. The simple fact of the matter is that reality tends to do the usual, not the unusual, so if there is a discrepancy between model/process and reality, it is usually the model that is at fault.
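As a concrete (and entirely invented) illustration of that null-hypothesis logic: given a model ensemble mean and spread, one asks how probable the observed value would be if the model were correct. The numbers below are hypothetical, chosen only to show the calculation.

```python
# Hypothetical null-hypothesis check: if a "correct" model ensemble has mean
# mu and spread sigma, how probable is the value reality actually produced?
# All numbers here are invented for illustration.
from math import erfc, sqrt

model_mean = 0.25    # ensemble-mean value (illustrative)
model_sigma = 0.05   # ensemble spread (illustrative)
observed = 0.10      # the observed value (illustrative)

z = abs(observed - model_mean) / model_sigma
p_value = erfc(z / sqrt(2.0))   # two-sided tail probability of a normal
print("p-value under the null 'the model is correct': %.4f" % p_value)
# A tiny p-value argues for rejecting the model, not for calling reality unusual.
```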
Now consider the application of this straight-up, out-of-the-stats-books theory to averaging over many models. First, show me the hat containing an ensemble of GCMs, that is to say, a probability distribution of GCMs. I would assert rather vehemently that there is no such hat, and that GCMs in general cannot ever be considered to be iid random samples pulled from such a hat. Second, show me — hell, even argue persuasively for — the proposition that the differences between GCMs constitute sampling of a random process. I would assert instead that the samples are going to be horrendously correlated, not just in the core science but in the so-called irrelevant details, in the computational methodologies, and worst of all, in the underlying assumptions. Indeed, many of the GCMs share a common parentage (one could argue that all the GCMs share a common parentage, at least conceptually) and I’m quite certain that shared systematic errors cause dread correlation between GCM results, resulting in erroneous assertions of standard deviation. Third, the models are not validated against reality, so we cannot be certain that the model mean has anything whatsoever to do with reality. You can average ten thousand incorrect models and get a very, very precise incorrect model mean and a tiny standard deviation and it won’t mean a damn thing other than “You should have rejected the null hypothesis that these models are correct long ago” when reality falls ever farther outside of the band of probable states predicted by the models.
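To make that “very precise incorrect model mean” point concrete, here is a toy sketch (mine, not rgb’s) in which every “model” shares one systematic bias: the ensemble mean acquires a tiny standard error yet remains exactly as wrong as any single model.

```python
# Toy illustration: averaging many biased, correlated "models" gives a very
# precise ensemble mean that is still simply wrong.  All numbers are invented.
import numpy as np

rng = np.random.default_rng(42)
truth = 1.0          # the quantity being modelled
shared_bias = 0.5    # systematic error common to every model
models = truth + shared_bias + rng.normal(0.0, 0.2, size=10_000)

ensemble_mean = models.mean()
naive_se = models.std(ddof=1) / np.sqrt(models.size)

print("ensemble mean:         %.3f" % ensemble_mean)             # ~1.5, not 1.0
print("naive standard error:  %.4f" % naive_se)                  # ~0.002, looks certain
print("actual error:          %.3f" % abs(ensemble_mean - truth))
```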
So as I said, this assertion is absolute, indefensible nonsense. It might be nonsense you could get away with if the mean over models was working well to predict the present climate in all ways (although that still would not make it correct or a defensible use of statistics). It is criminal abuse of statistics to use it to assert “certain” knowledge — or even systematically probable knowledge — in a venue where the sampling process overtly violates the most basic precepts of statistics, sampling theory, and predictive modeling. You try doing this sort of crap in the real world of predictive modeling of e.g. the stock market, consumer behavior, or almost anything worth actual money and you’d be out of business in a year wondering what happened to you and why customers won’t pay you for models — no matter how magnificently they predict the past — that fail to predict the future.
rgb
I forgot to mention a third consequence of internal correlation. If the sampling process is not independent, it mandates the use of Bayesian analysis, not simple computation of mean and standard deviation. And at that moment the error estimates go through the roof, because one has to assess objective prior probabilities for all of the assumptions. This too is what the Monte Carlo is trying to manage without doing the actual Bayes Theorem computation (with its many unknown priors and joint/conditional probability distributions). This cannot be done with “Monte Carlo” over GCMs as if they are samples drawn from a hat.
Finally, as I pointed out in both this and other venues, there is direct evidence that the individual GCMs are not, in fact, sampling the same distribution in any sense whatsoever. Four GCMs were recently applied to a very simple toy planet with the simplest of structure, and converged to four completely distinct results. Not similar ones, COMPLETELY DIFFERENT ones. This is the “last straw” for GCMs — they are dead to the world, only they don’t know it yet. Until GCMs can satisfy the SIMPLEST of validations — converging to the same answer for a toy problem — why should we believe their answer for our very, very complex real planet?
rgb
Gail,
Thanks. This is what I was trying to get at in saying earlier I’m not interested in fighting a PR war, and that it’d be a bad idea to do so. Maybe ‘PR’ is the wrong word; maybe there is a way to do PR that doesn’t involve manipulation and deceit.
I don’t care what policy is ultimately best; these are evil means. No way.
Leif writes “No, it is not ‘known’. There is strong doubt about the calibration of the UV measurements. Different satellites give inconsistent results.”
There is always doubt about measurements that question long-held beliefs. But those long-held beliefs are based on poor proxies such as the ionosphere. It is quite obvious that the changes in the ionosphere can’t resolve a 0.4 W change in UV, and you should know better than trying to defend that argument.
It’s still possible the measurements are faulty, but at the moment they are the very best results we have, so let them be proven wrong before discounting them.
Leif then writes “The 1361 W/m2 is very well measured by SORCE.”
And what has that got to do with quoting derived temperatures vs measured energy flux?
TimTheToolMan says:
July 23, 2013 at 6:55 am
But those long-held beliefs are based on poor proxies such as the ionosphere. It is quite obvious that the changes in the ionosphere can’t resolve a 0.4 W change in UV, and you should know better than trying to defend that argument.
If one knows what one is doing it is quite obvious that the ionosphere reacts to the solar cycle changes of UV, see e.g. Slide 9 of http://www.leif.org/research/Rudolf%20Wolf%20Was%20Right.pdf
best results we have so let them be proven wrong before discounting them
My point is that even if they are correct, the impact on the climate is minute, of the order of less than 0.1C. See Slide 3 of http://www.leif.org/research/The%20long-term%20variation%20of%20solar%20activity.pdf
Leif writes “If one knows what one is doing it is quite obvious that the ionosphere reacts to the solar cycle changes of UV”
But that’s not what I said, is it?
Leif writes “My point is that even if they are correct, the impact on the climate is minute, of the order of less than 0.1C.”
There you go with your derived temperature value again. And assumptions about what the variation in UV can and can’t do, based on what you believe to be correct rather than what solid data tells you.
TimTheToolMan says:
July 23, 2013 at 4:34 pm
But that’s not what I said, is it?
I made your language a bit more amenable to scientific discussion. What you did say was “Ionisation as a proxy for UV is a joke”.
There you go with your derived temperature value again. And assumptions about what the variation in UV can and can’t do, based on what you believe to be correct rather than what solid data tells you.
People looking for solar cycle effects usually find a measured temperature variation of the order of 0.1C [within a factor of two]. Now, one can, of course, deny that the measured values are ‘solid data’.
RGB
I always read your comments with interest. In my experience, most problems with the application of statistical analysis arise from neglecting concepts like independence and from incorrect assumptions regarding the underlying distributions of the data being sampled. Thanks for reminding us again of their importance.
It’s always a blast to read you. Your students are lucky.
Leif writes “People looking for solar cycle effects usually find a measured temperature variation of the order of 0.1C”
And how do they do that, Leif? How is the variation due to the solar cycle extracted from the rest of the “natural” variation apart from massive assumptions?
TimTheToolMan says:
July 23, 2013 at 7:37 pm
“People looking for solar cycle effects usually find a measured temperature variation of the order of 0.1C” And how do they do that, Leif? How is the variation due to the solar cycle extracted from the rest of the “natural” variation apart from massive assumptions?
Perhaps you should educate yourself about how to do this. One way is called ‘superposed epoch analysis’: You detrend the measured temperatures to remove variations on time scales longer than the solar cycle, then you take 11 years of [yearly] measurements for the first solar cycle where you have good data and write the numbers in a row. Then you write another row of 11 yearly temperatures for the next solar cycle just below the first row. Then you do that for the next cycle, etc., until you have a row for every cycle. You then calculate the mean of the values of all rows for the first ‘column’ of data, i.e. for the first year of each cycle, then for the next column [i.e. the next year], and so on until you have a row of 11 averages. This beats down the accidental errors but leaves the solar signal [if it is there] undisturbed. Finally you see how much the average temperatures varied from solar minimum [the first and last minimum in the row of averages] to solar maximum [somewhere in the middle of the row of averages].
This is one [simple] way. There are other sophisticated statistical methods from analysis of signals, but the simple method is enough.
Now, if you don’t see a signal emerging from the ‘natural’ variation it just means that the influence of the solar cycle is too small to be detected, which in turn means that the Sun is not a major driver of climate. Easy enough to understand, right?
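For readers who want to see the bookkeeping, here is a minimal sketch of the superposed-epoch recipe just described, using an invented yearly temperature series and hypothetical cycle-start years (my illustration, not Leif’s code):

```python
# Minimal sketch of superposed epoch analysis.  The temperature series and
# the solar-cycle start years below are made up purely to show the steps.
import numpy as np

years = np.arange(1900, 2010)
rng = np.random.default_rng(1)
temps = (0.05 * np.sin(2 * np.pi * (years - 1902) / 11.0)   # toy 11-year signal
         + 0.01 * (years - 1900)                            # toy long-term trend
         + rng.normal(0.0, 0.1, years.size))                # weather noise

# 1. Detrend: remove variation on time scales longer than one cycle (linear here).
detrended = temps - np.polyval(np.polyfit(years, temps, 1), years)

# 2. Write an 11-year row of values for each cycle (hypothetical start years).
cycle_starts = [1902, 1913, 1924, 1935, 1946, 1957, 1968, 1979, 1990]
rows = np.array([detrended[np.searchsorted(years, y):][:11] for y in cycle_starts])

# 3. Average each 'column': accidental errors beat down, a common signal survives.
superposed = rows.mean(axis=0)

# 4. Amplitude from solar minimum (row ends) to solar maximum (middle).
amplitude = superposed.max() - 0.5 * (superposed[0] + superposed[-1])
print("solar-cycle amplitude in the toy data: %.3f C" % amplitude)
```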
Leif writes “You detrend the measured temperatures to remove variations on time scales longer than the solar cycle”
And there you have the biggest assumption of the analysis. Right there you assume there is no long term solar forcing by say…changes in UV from cycle to cycle.
Furthermore, we just don’t have enough solid data to be doing that for any more than a few cycles, and so you’re going to be left with a lot of noise anyway.
TimTheToolMan says:
July 24, 2013 at 3:58 am
And there you have the biggest assumption of the analysis. Right there you assume there is no long term solar forcing by say…changes in UV from cycle to cycle.
We were discussing the effect of the solar cycle. And as you point out we have no evidence that UV changes from cycle to cycle, except following the cycles’ ups and downs.
so you’re going to be left with a lot of noise anyway.
If you can’t see the signal for the noise, it shows that the sun is not a major driver. Good that you agree. Or do you persist in the assumption that there is a strong effect, we just can’t see it?
TimTheToolMan says:
July 24, 2013 at 3:58 am
Right there you assume there is no long term solar forcing by say…changes in UV from cycle to cycle.
Now, your BIG assumption that there is long-term solar forcing does underlie some of the reconstructions of TSI, namely that variations of TSI can be described as the sum of straight solar cycle forcing and a long-term trend given by a running mean of the sunspot number. You can see that idea in action on Slide 18 of http://www.leif.org/research/Solar-Petaluma–How%20Well%20Do%20We%20Know%20the%20SSN.pdf and also see how it failed.
Leif writes “variations of TSI can be described as the sum of straight solar cycle forcing and a long-term trend given by a running mean of the sunspot number.”
No, Leif, TSI is irrelevant. I don’t have any assumption about what IS happening. My assumptions are about what could be happening and what the latest data shows did happen. You can’t discount the possibility that TSI has remained fairly constant while UV has slowly changed over time by a few tenths of a Watt (and Visible also changed in the other direction).
You can’t do it. You can guess…but, Leif, it’s a guess.
TimTheToolMan says:
July 24, 2013 at 9:01 pm
You can’t discount the possibility that TSI has remained fairly constant while UV has slowly changed over time by a few tenths of a Watt (and Visible also changed in the other direction)
As I have pointed out several times, people have looked into that [remote] possibility, see Slide 3 of http://www.leif.org/research/The%20long-term%20variation%20of%20solar%20activity.pdf
And found that the effect on temperature is vanishingly small. And that therefore even under your massive assumptions the Sun is still not a major driver.
Leif writes “As I have pointed out several times, people have looked into that [remote] possibility”
And I think I keep mentioning that it is not data; it’s a guess. So we are yet again at an impasse. You are certain about your guess and I am certain it IS a guess.
TimTheToolMan says:
July 25, 2013 at 5:59 am
And I think I keep mentioning that it is not data; it’s a guess. So we are yet again at an impasse. You are certain about your guess and I am certain it IS a guess.
Science is always guesses well-founded on sound physical principles. What we seem to agree on is that, whatever the story is, the influence of the Sun is of such small amplitude that it almost drowns in the noise. This is the important point. Neither your guess nor mine contradicts that.