Quite a performance yesterday. Steve Milloy is calling it the “Zapruder film,” implying it was the day the AGW agenda got shot down. While that might not be the best choice of words, you have to admit the skeptics did a fantastic job of shooting down some of the ridiculous claims made by the panelists who preceded them. This may not have been a Zapruder moment, but I’d say it represented a major turning point.
Give props to both Roger and Roy.
Marc Morano reported:
‘Senate global warming hearing backfires on Democrats’ — Boxer’s Own Experts Contradict Obama! — ‘Skeptics & Roger Pielke Jr. totally dismantled warmism (scientifically, economically, rhetorically) — Climate Depot Round Up
‘Sen. Boxer’s Own Experts Contradict Obama on Climate Change’ — Warmists Asked: ‘Can any witnesses say they agree with Obama’s statement that warming has accelerated during the past 10 years?’ For several seconds, nobody said a word. Sitting just a few rows behind the expert witnesses, I thought I might have heard a few crickets chirping’
Video link and links to PDF of testimonies follow.
Here is the video link, in full HD:
http://www.senate.gov/isvp/?type=live&comm=epw&filename=epw071813
Dr. Spencer writes about his experience here and flips the title back at them:
The PDFs of each person’s testimony can be accessed by clicking on their names below:
Panel 1
Dr. Heidi Cullen, Chief Climatologist, Climate Central
Mr. Frank Nutter, President, Reinsurance Association of America
Mr. KC Golden, Policy Director, Climate Solutions
Ms. Diana Furchtgott-Roth, Senior Fellow, Manhattan Institute for Policy Research
Dr. Robert P. Murphy, Senior Economist, Institute for Energy Research

Panel 2
Dr. Jennifer Francis, Research Professor, Institute of Marine and Coastal Sciences, Rutgers University
Dr. Scott Doney, Director, Ocean and Climate Change Institute, Woods Hole Oceanographic Institution
Dr. Margaret Leinen, Executive Director, Harbor Branch Oceanographic Institute, Florida Atlantic University
Dr. Roger Pielke, Jr., Professor, Center for Science and Technology Policy Research, University of Colorado
Dr. Roy Spencer, Principal Research Scientist IV, University of Alabama in Huntsville

Next, predicting fumarole steam patterns…
The Earth’s magnetic field has been in decline for many years. The decline is most prominent in the equatorial regions, in particular in the South Atlantic Anomaly (SAA) region, and has been spreading around the planet from there: the field is lower across the entire equatorial belt, and the weakening is spreading north and south. Over the same period, the amount of radiation reaching those regions would be slowly and incrementally increasing.
We know that there is a relationship between the SAA and the radiation belts.
We know that there is a relationship between the radiation belts and the Equatorial Electrojet (EEJ).
We know that there is a relationship between the above and ionization anomalies in the equatorial regions.
If the sun is going into a Maunder-type minimum, an extended period of lowered solar activity…
How large and how low will the radiation belts become?
Leif writes “No, the UV variation will cause TSI to vary between 1361.045 and 1361.105 [using your numbers], assuming that UV was the only thing that varied.”
Leif, you’re missing the point big time. It’s not UV as a proportion of the TSI change that matters; it’s UV as a proportion of total TSI. This is because UV is to a large extent absorbed in the upper atmosphere and doesn’t contribute to surface warming directly. So if TSI is made up of more UV and less visible light, there is less surface warming. And the proportion of UV in total TSI is much larger than the proportion of UV in the TSI change.
The change in TSI is largely irrelevant to this argument.
Oops, yes, I did watch the Senate committee on climate change. Eeek.
Are these guys wearing blinders or what? They seem to have narrow points of view.
Like if you build neighborhoods on a swamp and they flood, what did you expect? Or better yet, build on cliffs with an ocean view… Maybe there should be more pre-planning for some of this…
TimTheToolMan says:
July 21, 2013 at 8:48 pm
It’s not UV as a proportion of the TSI change that matters; it’s UV as a proportion of total TSI.
UV makes up only a few percent of TSI. The solar cycle variation of UV is only a few percent of the UV. A few percent of a few percent is a few parts in ten thousand. So that is the change we are talking about. Slide 3 of http://www.leif.org/research/The%20long-term%20variation%20of%20solar%20activity.pdf shows the effect of increasing UV and decreasing visible [to keep TSI constant]. As you can see in the lower panels, the effect on temperature is very small [less than 0.1 C].
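For anyone who wants to check the arithmetic in this exchange, here is a minimal sketch; the percentages are illustrative assumptions spanning both readings of “a few percent”, not measured values:

```python
# Rough check of "a few percent of a few percent", with assumed percentages.
TSI = 1361.0  # W/m^2, approximate total solar irradiance

for uv_fraction in (0.02, 0.05, 0.08):          # assumed UV share of TSI
    for cycle_variation in (0.01, 0.03, 0.05):  # assumed solar-cycle change in UV
        delta = uv_fraction * cycle_variation   # fraction of TSI affected
        print(f"UV {uv_fraction:.0%} of TSI varying by {cycle_variation:.0%}: "
              f"{delta:.1e} of TSI = {delta * TSI:.3f} W/m^2")
```

Depending on the assumed percentages, the product lands anywhere from a few parts in ten thousand to about one part in a thousand, which is exactly the range the two commenters are disputing.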
E. Swanson … ignore UAH. Use RSS if you don’t like Spencer. The answer is the same. Or soundings. He uses six data sets, only two of which are satellite, in the LTT graph. There is excellent agreement between UAH, RSS, and soundings. This is what is called “validation” — two completely different methodologies, consistent results.
While we’re on the topic, what exactly validates the GCMs? They don’t agree with each other. They don’t agree within themselves: Monte Carlo perturbation into an ensemble of runs produces not a line but a thick braid of future possibility, one that constantly oscillates up and down (with the wrong Fourier/spectral decomposition as far as the timescales of the actual climate are concerned). When four GCMs were recently compared on a toy greybody water planet with absolutely no interesting features, all four converged (so to speak) to completely different temperatures, convection patterns, and climates. (And while we’re on the subject, why is this study only being done now, after humanity has invested on the order of half a trillion dollars, precipitated an economic collapse in Europe, and perpetuated the millions of annual deaths caused by energy poverty all over the world?)
You cannot validate a model on the training set, yet that is precisely what has been done for GCMs. Worse, we only get to see that validation in averages over the GCMs themselves, with an “error bar” dictated by the range of GCM predictions, which is an absolute travesty — a complete misapplication of the methodology of statistics.
It is easy to build a quantitative model of the stock market that uses simple linearization — expansions and fits to “anomalies” or “market changes” on the basis of state and a few simple market drivers — that will work remarkably well over the training set, and not infrequently will work in the immediate past to hindcast and for some interval into the immediate future to forecast — until the day comes that it fails. And when it fails, it fails completely, changing more in months, weeks, or sometimes a single day than all of the carefully accumulated value of all of the anomaly changes in the “normal” mode that was so cleverly constructed and fit. This is known as a “Black Swan event”, and if the GCMs worked perfectly we would still need to beware such unpredictable nonlinear changes, if only because the Earth’s natural climate history is one of perpetual, often “catastrophic” change, none of which can be hindcast or even understood on the basis of existing GCMs.
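A toy version of that training-set trap, for the curious; this sketch fits an over-flexible model to pure synthetic noise (nothing here is fit to real market or climate data):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(100.0)
series = np.cumsum(rng.normal(size=t.size))  # a random walk: pure "noise"

train, test = slice(0, 70), slice(70, 100)
coeffs = np.polyfit(t[train], series[train], deg=6)  # deliberately overfit
fitted = np.polyval(coeffs, t)

def rmse(s):
    return np.sqrt(np.mean((fitted[s] - series[s]) ** 2))

print(f"in-sample RMSE:     {rmse(train):.2f}")  # small: looks 'validated'
print(f"out-of-sample RMSE: {rmse(test):.2f}")   # typically far larger
```

The fit looks excellent over the interval it was trained on and falls apart as soon as it is asked to extrapolate, which is the whole point of the paragraph above.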
Not one of the GCMs can even semi-qualitatively explain the proxy-derived climate variation over the last 2000 years, let alone over the Holocene, let alone over the Pleistocene (including the Wisconsin glaciation, its end, and the Younger Dryas). Climate scientists think that the science is settled because their linearized models have successfully (mis)fit what is very probably natural noise over a narrow time span. Worse, a time span in which the climate has been nearly monotonically warming anyway, so that a ten-year-old armed with a ruler and a pencil could build a “warming model” that would come out approximately correct — until it failed.
We are observing just such a failure right now, and that is what Spencer is pointing out. By presenting the actual spaghetti produced by the GCMs, his graph simultaneously demonstrates that they do not agree — they do not even approximately agree. Further, their spread of disagreement around a mean behavior — incorrectly and incompetently used as a reliable statistical predictor of probability in AR4’s Summary for Policy Makers, without excuse or explanation — indicates that nature is at the very bottom of the range of the entire pack, and the same ten-year-old armed with a ruler can see that unless the trend changes (which it could, of course, at any time) it will completely depart from that range this year or next year at the latest.
Now, if one attempted to actually validate the GCMs one at a time by comparing their internally generated range of predictions post-1990 to reality, how many would survive at the p = 0.05 level? One? Two? Perhaps three? They would all be the ones that make the coldest warming predictions, and they would all still lie above the empirically observed climate. Add to this the simple fact that all of the GCMs get various critical features other than temperature wrong — this one predicts droughts where no droughts occur, that one predicts Antarctic melting where Antarctic ice has been growing, and all of them get the LTT wrong. Add to that the recently demonstrated fact that they do not satisfy simple consistency tests, such as converging to at least similar answers for toy problems (and, I suspect, they would fail internal consistency checks on those same problems if anyone bothered to try, e.g. doubling the grid resolution on a toy problem and seeing if they get the same results).
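For concreteness, here is a minimal sketch of the one-model-at-a-time test just described, treating a single model’s Monte Carlo ensemble as the null distribution; every number in it is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-model ensemble of warming trends (K/decade) and a
# hypothetical observed trend; neither comes from any real data set.
ensemble_trends = rng.normal(loc=0.25, scale=0.05, size=40)
observed_trend = 0.08

lo, hi = np.percentile(ensemble_trends, [2.5, 97.5])
frac_at_or_below = np.mean(ensemble_trends <= observed_trend)
print(f"ensemble 95% range: [{lo:.2f}, {hi:.2f}] K/decade")
print(f"fraction of runs at or below the observation: {frac_at_or_below:.1%}")
print("reject at p = 0.05" if not (lo <= observed_trend <= hi) else "fail to reject")
```

If the observed trend falls outside the central 95% of a model’s own runs, that model fails its own internal hypothesis test, which is the standard being argued for here.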
So sure, let’s talk about validation. Next we can talk about the “validation” of the methodology used to generate the land surface record, and how likely it is that every new “correction” to e.g. HadCRUT tends to further exaggerate the warming: at first by finding plausible new data-transformation rules that “warmed” the present, and then, when that was no longer possible because the divergence between the land surface record and the LTT had grown embarrassing and was on the verge of giving the game away, by finding rules that cooled the “past”, all while consistently underestimating the UHI effect at the primary reporting sites, which are often terribly sited. Let’s talk about the cavalier treatment of error bars in the figures in AR5, in particular the error bars on the surface temperature record compared to the model predictions, which are all miraculously exactly 0.1 C. Excuse me? There is visible and variable spread in the contributing temperature records. Those records themselves sparsely sample a staggeringly large surface area at a tiny set of UHI-corrupted locations. And they share much of the same data sources (which, I will point out, means that they are not independent, greatly complicating adding ANY sort of error bar, but at the very least mandating a clear statement on just how the error bar was computed).
The answer, of course, is almost certainly that it was not computed — somebody built the graph and decided to put an error bar on the dots because that’s what one does in “science”, isn’t it?
We are thus sadly left with two choices in the climate modeling world. On the one hand, we have an absolute plethora of GCMs that are supposedly based on physics, but that fail to agree or converge either internally or externally, and that are sufficiently diverged from the data in several dimensions that normally one would throw them into the trash can, or at the very least try, try again to get them right. On the other, we have the purely empirical models like Scafetta’s that leave out the physics and use the moral equivalent of a teenager armed not just with a ruler but with a Moog synthesizer to do pure numerology — but numerology that has every bit as good a chance of capturing the near-term variation of the climate, until it doesn’t.
In the meantime, the world is pissing away a staggering amount of its wealth trying to prematurely adopt immature technologies that in ten or twenty years will come to pass anyway — not to save the world, but to save (and hence make!) money. None of this money — in the US — is going to develop the one technology that might actually work now to ameliorate the carbon problem — LFTR — because it is nuclear energy. China, OTOH, has mountains of thorium as radioactive waste produced when it mines the rare earth metals needed to build the solar cells, high-efficiency magnets for generators, batteries, and other electronics that they sell us. They are working hard on LFTR because they need energy, and they have enough thorium to provide it at first-world levels for several thousand years. As does India. As does the United States — in fact, the state of North Carolina alone has enough thorium to power the US for well over ten thousand years.
To be honest, the question of anthropogenic warming is an open one, but open means that:
a) There is a very good chance that there is at least some warming caused by increasing CO_2 — the physics of this warming is simple enough and very probably correct in general if not precise in specifics.
b) The feedbacks (if any) from this warming are basically unknown. As we accumulate actual reliable climate evidence, we are gradually constraining the boundaries of the feedback, and the total climate sensitivity (direct plus feedback-linked temperature change) is in the moral equivalent of freefall as the high sensitivity models are now failing by such a huge margin that even people who for whatever reason persist in their faith that the models are reliable are rejecting them (or suggesting that the feedback parameter needs to be adjusted in order for the models to describe the last 15 years of little to no warming).
c) The timescales and importance of the many natural oscillations in the climate cycle — which have timescales ranging from a few years to at least half a century — are only gradually being revealed in the modern era of semi-reliable satellite observation, plus a still horribly inadequate but much better coverage of the oceans that cover 70% of the surface of the Earth and that constitute an entire climate system of their own, directly coupled to the atmospheric climate system and indirectly coupled in countless ways to the surface climate system. At this point it is almost beyond question that ENSO is perhaps the most important single determinant of whether the Earth’s climate is warming, cooling, or remaining neutral, and we cannot predict ENSO because we do not fully understand ENSO. Then there is the PDO, the NAO, etc. All of them are important, because convection can completely dominate radiative trapping per se when it comes to whether the Earth heats or cools, and because radiation rates in the SB equation scale like T^4 (the scaling is spelled out just after this comment).
d) The timescales and importance of the ocean are only just beginning to be revealed. Witness the recent “missing heat” debacle. It is alleged that the missing heat (supposedly trapped by the increase in CO_2) has disappeared into the meso-ocean well below the surface layer, as it has not appeared in the SSTs that conceivably set the thermostat for the atmosphere and land. Even if AGW is occurring, the thermal buffering of the ocean makes it very possible, if not likely, that the warming we have seen since the start of the industrial era is mostly non-anthropogenic, and that any anthropogenic warming — if it ever exceeds the natural signal by enough to warrant attention — will be lagged by decades to centuries.
e) We cannot directly measure the variation of the radiative trapping caused by variation of CO_2 at the current baseline concentration. Even in the physics, our assertion of its value and functional form involves a complex process of band spreading in interaction with the entire convective system that maintains the DALR. We are not certain, certainly not empirically certain, that we even have the physics of CO_2-based warming quite right.
It might therefore be wise to validate many things in our understanding (or lack thereof) of the climate, especially when we barely understand and can only crudely predict the weather.
And oh, yeah — f) then there is the sun.
rgb
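The T^4 scaling invoked in point (c) of the comment above is just the Stefan-Boltzmann law and its logarithmic derivative:

```latex
P = \sigma T^4
\quad\Longrightarrow\quad
\frac{dP}{P} = 4\,\frac{dT}{T}
\quad\Longrightarrow\quad
\frac{dT}{T} = \frac{1}{4}\,\frac{dP}{P}
```

A fractional change in radiated power thus maps to a fractional temperature change one quarter as large, which is where the 1/4000 figure later in this thread comes from.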
It seems to me that the star of this particular show was Senator Sessions, ably aided by Roy Spencer (despite Whitehouse’s shameful creationist smear).
Pielke Jr. made some valid points, but they drowned in the accompanying pro-IPCC wibble.
In no way was this a win for climate realists. The message (in the scandalously short time devoted to it) was lost because the odds were too heavily stacked against the unemotional empirical data. It got buried beneath more than three hours of alarmist hand-wringing, guilt-tripping, think-of-the-cheeeeldren, give-all-our-money-to-developing-countries, the-heat-is-MIA-but-probably-skulking-in-the-ocean-depths-somewhere bollocks.
The most shameful moment came from chairman La Boxer herself. She was the only one to invoke the “D” word. Disgraceful behaviour.
rgbatduke said:
If that ain’t a quote of the year, I’ll eat my hat
“No valid scientist should be a creationist as Spencer is! So anything else he claims should be doubted!”
Interestingly I remember visiting the Kennedy Space Center and there is a long speech given about the moon landings project that refers to “God”. Obviously man never went to the moon, since it is clear that some of the engineers and scientists that were involved in the project were firm believers.
RE: rgbatduke July 22, 2013 at 4:37 am
Your long rant failed to mention the rapid loss of Arctic sea ice over the past 15 years or so, a period during which the UAH TLT has shown little warming over the Arctic. Perhaps that’s because the TLT data over the Arctic is flawed, because the MSU/AMSU instruments are impacted by the decline in sea-ice cover. The emissivity of sea ice is greater than that of open water or of the melt ponds which form on top of the sea ice during summer. The result is that the LTT data, with its larger surface influence, would show less warming than is actually occurring, as proven by the loss of sea ice.
In Spencer’s presentation, both written and oral, a spaghetti graph of GCM results is shown along with data from the MSU/AMSU Middle Troposphere set (Figure 2). Spencer claims that this shows there’s no high-altitude warming over the tropics (20N to 20S). Is his graph from a published paper? If so, where? Trouble is, as far back as 1992-93, Spencer and Christy have claimed that the MSU measurements include contamination from the stratosphere, which has exhibited a well-known cooling trend. Thus the TLT (now called the LTT) was created, with the claim that it would provide a means to remove that stratospheric cooling. Spencer apparently has “forgotten” about his earlier work and now presents the MT data as though it’s the gold standard for assessing climate change. Has Spencer recanted his previous claims that the MT is flawed, or has he now become more interested in the adulation of his devoted fans in the denialist camp (perhaps including the Koch Brothers)?
E. Swanson says:
July 22, 2013 at 9:46 am
Do you have any basis for accusing Dr. Spencer of being in the pay of the Koch Brothers? Or is this simply a libelous lie?
Explain please how air temperature, which hasn’t warmed statistically for going on 20 years, can cause Arctic sea ice melt now greater than in the 1990s or 2000s. If air is warmer over the Arctic, then it must be colder over the Antarctic, where sea ice has grown.
Doesn’t it make more sense that natural oceanic oscillations (& storms) account for Arctic sea ice loss, rather than air temperature? Consider that the Arctic also melted to a greater extent the last time the PDO & AMO were in their warm phases.
Ryan Stephenson says:
July 22, 2013 at 8:56 am
Belief in God isn’t the same as belief in creationism, as usually conceived.
Great scientists of the past, such as Newton, were not just creationists but young earth creationists. Since the discoveries of the age of the earth, extinction and evolution, among other advances over the past 250 years or more, young earth creationism is no longer scientifically defensible. Darwin’s geology mentor Sedgwick, an Anglican divine, believed in a series of creations; his own field work led him to abandon biblical “flood geology”.
However even today some renowned scientists remain old earth creationists, or at least are willing to entertain the possibility that our universe was created. Dr. Collins of the Human Genome Project is an evangelical:
http://en.wikipedia.org/wiki/Francis_Collins#Christianity
Dr. Spencer’s belief in Intelligent Design creationism is indeed outside the scientific mainstream, but IMO does not discredit his work in atmospheric science & on oceanic contribution to the climate system, nor should it, despite Sen. Whitehouse’s scurrilous behavior. I also don’t think Dr. Spencer is a young earth creationist.
Mr. Brown, the nuclear power industry in the United States has no interest in pursuing thorium reactors, for the simple reason that the thorium fuel cycle has no cost advantages over the current uranium fuel cycle.
In the United States, nuclear power is now in a fight for its life for three reasons: (1) competition from cheap natural gas for baseload power generation; (2) extremely high capital costs for building nuclear generation capacity, in comparison with new gas-fired capacity; and (3) the ongoing mismanagement of nuclear construction projects by middle-level and senior-level managers in the nuclear utility industry.
Over the next decade, it would not be surprising at all if we saw a one-third to one-half reduction in the share of electricity produced by nuclear utilities in the United States, with all of that decline being attributable to the nuclear industry’s inability to manage its costs well enough to remain competitive with natural gas.
Dr. Spencer’s belief in Intelligent Design creationism is indeed outside the scientific mainstream, but IMO does not discredit his work in atmospheric science & on oceanic contribution to the climate system, nor should it, despite Sen. Whitehouse’s scurrilous behavior. I also don’t think Dr. Spencer is a young earth creationist.
I would cheerfully debate Roy on any form of ID vs natural evolution because whatever his opinions there, they are either question-begging or wrong from the point of view of correctly applying inference and reason. To put it precisely, it is very improbable that ID is correct based on the current evidence. For a very very large value of very. At one point or another he indicated his willingness to debate the issue and as my kids would say, “Challenge accepted…”
But as noted, that doesn’t credit or discredit his work in climate science, except insofar as it justifies the observation that he could use a refresher course in probability theory and scientific inference, and perhaps complexity theory, as could most of the other workers in climate science.
Regarding the LTT — it is indeed the gold standard of temperature records, regardless of how it is computed, if only because it has been measured and computed moderately consistently for roughly 34 years; because it samples the globe as well as or better than any other measure (by far!); because it is not corrupted by UHI influences, which have been shown to be far greater than any systematic sampling error it is likely to have; and because it hasn’t been subjected to numerous “corrections” that always have the effect of cooling the past and/or warming the present. That is why it is a problem for the land record — a serious problem at that. If the LTT (however it is computed) remains nearly stable at its current rate of growth, the divergence between the LTT and the land record will for all practical purposes render the latter completely unreliable, instead of merely probably unreliable as it is today.
Like all such measures, it may or may not be precisely accurate or measure what it claims to measure, and like all such measures, it has error bars. But the curve that Roy presented at the Senate hearing is really little different from figure 1.4 (IIRC) in AR5, straight from the mouth of the IPCC:
http://wattsupwiththat.com/2012/12/14/the-real-ipcc-ar5-draft-bombshell-plus-a-poll/
It shows the actual mess of spaghetti where 1.4 does not, the error bars on the black squares are some kind of complex joke (being all the same size and placed arbitrarily around actual data points), and the grey area is AFAICT completely irrelevant. If you overlaid the LTT on top of this with a similar starting point, it would change nothing — neither one shows any statistically significant growth from the 1998 super-ENSO on, and both show little growth from 1990 to the present (order of 0.1 C/decade or less).
The spaghetti, however, is very instructive, especially when labelled with the individual GCMs associated with each strand. Why do you think the IPCC is backpedalling so hard on climate sensitivity? Because their own figure, not Spencer’s, already is pretty convincing evidence that the high sensitivity models are plain old wrong, the intermediate sensitivity models are still possible but looking less so every day the current neutral temperature trend continues, and the low sensitivity/neutral feedback models are actually fitting the observed temperature trend better than anything else.
This does not, of course, disprove AGW, CAGW, or GW without the A — the data simply speaks for itself. A reasonable person can look at figure 1.4 of AR5 and doubt that the models are correct. A reasonable climate scientist can look at it and doubt that the models are correct. Indeed, I don’t see how this figure could ever stand as evidence for the null hypothesis “The GCMs are correct, based on settled science, so you can bet a trillion dollars and a hundred million lives to be spent right now (not in 100 years) on their predictions and be sure of being a winner”.
rgb
Mr. Brown, the nuclear power industry in the United States has no interest in pursuing thorium reactors, for the simple reason that the thorium fuel cycle has no cost advantages over the current uranium fuel cycle.
I categorically disagree with this statement. High-pressure uranium reactors can melt down and require active cooling. They were a stupid design in the first place. LFTR designs cannot melt down — they can be designed to shut off passively, at the flick of a switch, without any active cooling. Finally, they use a liquid fuel. They would therefore require a small fraction of the money currently being spent on a high-pressure, actively cooled reactor fueled with solid fuel. Indeed, one can go down a rather long list of technical and engineering advantages LFTR has over pressurized water reactors. Add to that the comparative difficulty of nuclear proliferation via thorium, the fact that LFTR reactors can burn nuclear waste while operating, and the fact that thorium is abundant and mixed in with rare earths that we need a domestic supply of anyway (where currently thorium is considered a kind of radioactive contaminant of the rare earths, not a valuable metal in its own right outside of e.g. gas lantern mantles).
As for the other reasons that nuclear is fighting for its life, don’t forget (4): the same “green” people that oppose carbon oppose nuclear, as a knee-jerk measure. But sure, I agree with the others. They do not constitute any good reason not to invest a fair bit of money, quickly, in developing LFTRs on at least a competitive basis with e.g. solar photovoltaics. Or, of course, we can just wait five years, license the technology from China, and pay them forever for stuff we should have developed and owned ourselves. Or watch as they take over the world energy market. Even if LFTR should in the end prove unfeasible, it’s a cheap way to cover the bet, and frankly, it is a pretty good bet.
rgb
The Dover, PA trial produced humiliating, hilarious, incontrovertible documentary evidence that ID is just Old Time Creationism repackaged to sound more science-y, so that it can be slipped into public school science classes. Dr. Behe’s concept of “irreducible complexity” is not just unscientific, but anti-scientific. No surprise since by his definition, astrology is scientific.
As above, Dr. Spencer’s contributions to climatology are significant & I salute him for them, but I’m not surprised he hasn’t accepted your challenge on ID. His argument that observed microevolution in bacteria still leaves them bacteria, ie the same “kind” of organism, is equivalent to saying that amoebae, humans & redwood trees are all eukaryotes, so no macroevolution has occurred. As you know the three currently recognized domains of life are bacteria, archaea & eukaryota. Recent discoveries may require adding giant viruses to that list. Creationists have not & cannot define “kind”.
=====================================
The true measure of whether or not a technology for generating electricity is viable technically and economically is whether or not private investors will bet their money on that technology in the absence of government subsidies, direct or indirect.
In the absence of substantial government subsidies, direct or indirect, private investors in the United States will not invest their money in thorium reactors because they see no cost advantages emanating from your list of theoretical technical and operational advantages.
As for the anti-nuclear activists, they have a proper role to play in the nuclear regulatory scheme of things. The anti-nuclear activists have never been successful in challenging a nuclear plant on issues of basic reactor safety. Where they have been successful in closing a nuclear plant has been in challenging the quality of the work the licensee performed in building, maintaining, and/or operating the nuclear plant.
As things stand today, middle-level managers and senior level managers in the nuclear industry are their own worst enemies when it comes to keeping the anti-nuclear activists off their backs.
Unless the price of natural gas is raised substantially through government intervention in the marketplace, either through taxes or through the combined effects of regulatory actions, then gas-fired generation will continue to drive all other methods of electrical generation out of the marketplace, excepting those methods such as wind and solar that are dictated by public law or by the actions of regulatory agencies.
Beta Blocker says:
July 22, 2013 at 12:44 pm
How about building the LFTRs in Mexico? Carlos Slim’s cement companies would make sure that the government went along at minimal cost.
In the comments on the Senate hearing last week, I saw, for the second time, Robert Brown’s opinion that the average of the IPCC ensemble is just noise. That seems very logical to me, but I am no expert statistician.
The contrary opinion comes from a comment made by Gavin Schmidt at RC. The following quotes are from the thread of the RealClimate post Decadal predictions. At comment 49, dated 30 Sep 2009 at 6:18 AM, a blogger posed this question:
If a single simulation is not a good predictor of reality how can the average of many simulations, each of which is a poor predictor of reality, be a better predictor, or indeed claim to have any residual of reality?
Gavin Schmidt replied with a general discussion of models:
Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will be uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean).
To paraphrase Gavin Schmidt, we’re not interested in the random component (noise) inherent in the individual simulations; we’re interested in the forced component, which represents the modeler’s best guess of the effects of manmade greenhouse gases on the variable being simulated.
The quote by Gavin Schmidt is supported by a similar statement from the National Center for Atmospheric Research (NCAR), on an FAQ webpage that had been part of an introductory discussion about climate models. Sometime over the past few months, NCAR elected to remove that educational webpage from its website; luckily, the Wayback Machine has a copy. NCAR wrote:
Averaging over a multi-member ensemble of model climate runs gives a measure of the average model response to the forcings imposed on the model. Unless you are interested in a particular ensemble member where the initial conditions make a difference in your work, averaging of several ensemble members will give you the best representation of a scenario.
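The Schmidt/NCAR claim is easy to demonstrate in the idealized case they describe, where every run is the same forced signal plus independent noise; a minimal sketch with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 200)
forced = 0.02 * t  # a hypothetical forced trend

def realization():
    # forced signal plus independent internal variability ("noise")
    return forced + rng.normal(scale=0.1, size=t.size)

for n in (1, 10, 100):
    ensemble_mean = np.mean([realization() for _ in range(n)], axis=0)
    err = np.sqrt(np.mean((ensemble_mean - forced) ** 2))
    print(f"{n:>3} members: RMS distance from forced signal = {err:.3f}")
```

The residual noise shrinks roughly as 1/sqrt(n), which is the entire content of the averaging argument.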
By contrast, Dr. Brown wrote (as rgbatduke) in the comments on the hearing:
In particular, the average of the GCMs is meaningless. The standard deviation of the distribution of GCM results about this average is meaningless, except as evidence that the GCMs collectively suck. The “fit” of GCMs to the period pre-1990 or thereabouts is not evidence that they are predictive as they were initialized TO fit that period — they can no more hindcast the curve Spencer held up presenting global temperatures over the last 2000 years than they can predict the stock market ten years from today.
Finally, and most damning — the application of hypothesis testing methodology in a statistically permissible way to each GCM, one at a time, to the actual climate data under the null hypothesis “this is a perfect climate model whose results can be trusted” consists of looking at the range of the Monte Carlo results that do form an ensemble and noting what fraction of the individual trajectories match the actual climate. When this is done, the number is very, very small, for nearly every GCM. Indeed, we would be entirely justified in rejecting this null hypothesis for nearly all if not all of the GCMs.
This makes the AR4 summary for policy makers even worse than a mere abuse of statistics. It’s one thing to average over twenty models, each of which is individually in pretty good agreement with the data and hence passes a basic sanity check as a valid model, and then argue that the mean could (not “has to”, according to the theorems of statistics, but could) somehow average over irrelevant but small errors in the details of the implementation of the same basic physics, and hence yield a better average than any single model alone. It’s another to average over twenty models that individually fail a basic hypothesis test when compared to reality, and that, worse, all fail in the same way, consistently coming in far too hot, and then assert that the average is meaningful and that the standard deviation of that average is a valid measure of the probable bounds of the future climate.
I am not a credible judge of which opinion is correct. If anyone feels that they are, please educate me.
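One way to see where the two opinions part company: the averaging argument assumes the models’ errors are independent, while Dr. Brown’s objection is that they share a common bias, which no amount of averaging removes. A sketch with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(3)
truth = 0.10         # hypothetical true trend, K/decade
shared_bias = 0.15   # hypothetical warm bias common to every model

# Twenty models: common bias plus small independent differences.
models = truth + shared_bias + rng.normal(scale=0.05, size=20)

print(f"ensemble mean: {models.mean():.3f} K/decade (truth: {truth})")
print(f"ensemble std:  {models.std():.3f}  <- measures spread, not distance from truth")
```

Averaging recovers “the average model response to the forcings” either way, as NCAR says; whether that average is anywhere near reality depends entirely on whether the shared component is right.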
Leif Svalgaard says:
July 21, 2013 at 7:38 pm
This is an unfortunate thing and will impede progress, although it will not be a setback to research.
Thanks, doc. It prompted me to read two or three papers on the subject; I found this one interesting. Considering the link between the equatorial electrojet and equatorial precipitation, it is likely that the geo-solar interaction observed in the natural variability is ionosphere-born, not that you would agree.
Leif writes “UV makes up only a few percent of TSI. The solar cycle variation of UV is only a few percent of the UV. A few percent of a few percent is a few parts in ten thousand.”
A few percent of a few percent is much closer to 1 part in a thousand, Leif. And since the sun produces on average about 400 W/m² across the entire earth’s surface, that makes the effect about 0.4 W/m².
It’s not about attributing ALL the warming to one process; it’s about understanding all the components of warming. And 0.4 W/m² is potentially a significant part of that forcing if the CO2 forcing is around 1.1 W/m².
Furthermore, the ocean forcing is about 0.4 W/m², so you see, ignoring the sun’s direct influence because you don’t think it could possibly matter is crazy.
milodonharlani says….
>>>>>>>>>>>>>>>
You might be interested in this paper: The influence of cloudiness on UV global irradiance (295–385 nm) (it looks at actual data over two years, not models), and in this NASA article from Jan. 8, 2013: Solar Variability and Terrestrial Climate.
E. Swanson says: July 22, 2013 at 9:46 am
How about explaining why Arctic summer temperatures have been consistently below average this year? http://ocean.dmi.dk/arctic/meant80n.uk.php
Gail Combs says:
July 22, 2013 at 4:08 pm
I am indeed most interested, & have learned a lot from discussions here. This issue among others needs to be explored in greater detail before IMO presently infant climate science can begin to be put upon a sound experimental & observational footing.
It appears that the two-fold variation in UV which interested me as a possible forcing on oceans in fact could affect climate only via the upper atmosphere & knock-on effects, since that variance is primarily in the higher-energy, shorter-wavelength part of the spectrum. Still, I feel the possibility exists that minor differences in UV at the surface could affect the oceans as well, since IR & visible light penetrate so much less in so many types of seawater & ice.
Thanks for those links, one of which wouldn’t open directly, a problem I have with some .pdfs.
Beta Blocker says: July 22, 2013 at 12:44 pm
>>>>>>>>>>>>
Nuclear and natural gas have the same problem: the Luddites WANT us to descend back to the technology of the 1800s. They are very much anti-technology, and that is why nuclear costs so much.
I am sure Pascal Lamy, the World Trade Organization Director-General, would applaud this young man’s Luddite outlook.
However, I am sure all these painfully ardent young urban environmentalists would be the first to scream bloody murder when their electricity is rationed and their jobs evaporate. Since that will be happening soon, we shall see if they live up to their mouths.
The EPA and Department of Energy drastically underestimated the effects of the new EPA rulings. Many more plants are closing than anticipated, which means electricity prices will skyrocket and the electric grid could become unstable. See “New Regulations to Take 34 GW of Electricity Generation Offline” and “The Plant Closing Announcements Keep Coming…” According to the EPA, these regulations will only shutter 9.5 GW of electricity generation capacity.
I am already seeing several power blips a day in NC and it is not due to the weather.
Gail:
Insanity.
I recall over a decade ago when to my amazement I read a CACCAist claim that nuclear power also contributed to global warming. She gave no reason, but I assume from all the concrete, or maybe the steam from cooling towers. Or just because that’s what she wanted to believe.
TimTheToolMan says:
July 22, 2013 at 3:22 pm
A few percent of a few percent is much closer to 1 part in a thousand
The variation of all of TSI is 1 part in a thousand. You want to attribute that to being due only to a variation in UV? Most of the UV doesn’t even make it to the surface. The variability of the near-UV [which does make it to the surface] is lower than 1% of the fraction of TSI that is UV. Now, those details don’t really matter. A variation of 1/1000 translates to a variation of temperature of 1/4000, which out of 288 K is 0.07 deg C, so there should be such a solar cycle variation. Most researchers find about that magnitude of solar-cycle change in temperature, so nobody is ignoring the Sun’s ‘direct influence’. It is there, it checks out, and it is minor.
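The 1/1000-to-1/4000 step is the Stefan-Boltzmann T^4 scaling spelled out earlier in the thread; a quick numeric check, using the values quoted above:

```python
# dT/T = (1/4) * dS/S from the Stefan-Boltzmann law P = sigma * T^4
T = 288.0                 # K, the mean surface temperature used above
dS_over_S = 1.0 / 1000.0  # quoted solar-cycle variation of TSI
dT = T * dS_over_S / 4.0
print(f"dT = {dT:.3f} K")  # about 0.07 K, matching the comment
```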
TimTheToolMan says:
July 22, 2013 at 3:22 pm
And 0.4W is potentially a significant part of that forcing if the CO2 forcing is around 1.1W per sq meter.
The difference is that the 0.4 W/m² is cyclic, while the 1.1 W/m² is cumulative. In other words, the solar influence goes up by 0.4 W/m², then goes down again by the same amount, then up, then down, up, down… The CO2 forcing only goes up, never down. So if CO2 has any effect at all, it accumulates with time.
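That cyclic-versus-cumulative distinction can be made concrete by integrating both forcings over a few solar cycles; the magnitudes below reuse the numbers from this exchange, and a linear CO2 ramp is assumed purely for illustration:

```python
import numpy as np

years = np.linspace(0.0, 44.0, 4400)            # four 11-year solar cycles
solar = 0.4 * np.sin(2 * np.pi * years / 11.0)  # W/m^2, oscillating forcing
co2 = 1.1 * years / 44.0                        # W/m^2, assumed linear ramp

dt = years[1] - years[0]
# Rectangle-rule time integrals of each forcing, in W·yr/m^2
print(f"time-integrated solar forcing: {solar.sum() * dt:6.2f} W·yr/m^2")
print(f"time-integrated CO2 forcing:   {co2.sum() * dt:6.2f} W·yr/m^2")
```

The oscillating term integrates to roughly zero over whole cycles; the ramp does not.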