Tisdale schools the website "Skeptical Science" on CO2 obsession

I scream, you scream, we all scream about CO2

On the SkepticalScience Post “Pielke Sr. Misinforms High School Students”

By Bob Tisdale

OVERVIEW

I rarely visit the website SkepticalScience, which is run by proponents of anthropogenic global warming, because most of their posts simply parrot the findings of others—the IPCC’s AR4, the climate science paper du jour, etc.—as they relate to a global climate driven by anthropogenic forcings, primarily CO2. Unfortunately, while doing a Google blog search of my name, I found that SkepticalScience had referred to me and my website in one of their posts, so I had to check it out.

The SkepticalScience post dated November 22, 2011, Pielke Sr. Misinforms High School Students, refers to me by name, or to my posts, a number of times while attempting to discredit Roger Pielke Sr’s post Q&A For Climate For High School Students. In looking at their comments, SkepticalScience appears unable to grasp the topics of discussion and fails to disguise its use of the common debate tactic of misdirection. I’ll address those points first.

But the majority of this post deals with the SkepticalScience mantra that “CO2 has indeed been the dominant cause of the change in surface temperature over the past century.” For those who have read my post The IPCC Says, “The Observed Patterns of Warming…, And Their Changes Over Time, Are Only Simulated By Models That Include Anthropogenic Forcing” or who have watched my narrated video included in the post The IPCC Says… – The Video – Part 1 (A Discussion About Attribution), the second part of this post will be familiar. I have simply presented that portion of this post a little differently, so if you have studied either of those two posts, you may wish to scroll down through the illustrations and section headings. You will find new information.

 

SKEPTICALSCIENCE ON ROGER PIELKE SR.’s POST

In their post Pielke Sr. Misinforms High School Students, SkepticalScience includes a question to and a quote from Roger Pielke Sr, which included a link to one of my posts:

Question:

If not [CO2], what led to this change in temperature?

Pielke Answer:

In addition to these human climate forcings, natural climate forcings and feedbacks are also quite important. We need to consider these natural effects as clearly the climate is much more complex than is commonly reported by the media and even the IPCC. For example, the global average temperature anomalies are cooling! See

Highly Recommended Weblog Post By Bob Tisdale Titled “An Initial Look At The Hindcasts Of The NCAR CCSM4 Coupled Climate Model”

So you don’t have to go chasing links, my post that Roger Pielke Sr. referred to was An Initial Look At The Hindcasts Of The NCAR CCSM4 Coupled Climate Model, which was also cross posted at WattsUpWithThat here.

Then SkepticalScience adds their unsolicited opinion under the pretense of “Science”. (Note: I have not presented their Figure 1 in this post, but I provided a link to it.) They wrote:

What the Science Says:

As noted in our previous answer, CO2 has indeed been the dominant cause of the change in surface temperature over the past century.  Dr. Pielke has once again failed to make this crucial point in his answer, instead choosing to tell these high school students that the media and IPCC are disregarding the complexity of ‘natural effects‘ (without providing any evidence to support this assertion).  Dr. Pielke then repeats the cherrypicking-based myth that global temperatures are cooling (see Figure 1, and also here and here, where we have already disproven this myth repeatedly for Dr. Pielke), and links to Bob Tisdale’s “skeptic” blog.

Why Dr. Pielke links an obscure blog rather than referencing peer-reviewed literature is a mystery, and a climate scientist should be able to do much better.   Dr. Pielke appears to be becoming a Tisdale cheerleader despite the fundamental flaws in Tisdale’s weblog analyses. Additionally, Tisdale’s blog doesn’t even seem to support Pielke’s false claim of cooling temperatures.

Why Roger Pielke Sr. would link to “an obscure blog rather than referencing peer-reviewed literature” is NOT “a mystery.” Not a mystery at all. Roger Pielke Sr.’s thoughts and opinions about climate change are not dependent on the dogma of CO2-driven global warming—unlike those of the authors of SkepticalScience posts. Roger Pielke Sr. does acknowledge that CO2 is one of many factors that impact global surface temperatures. In fact, all one has to do is click on the Main Conclusions link at Roger Pielke Sr’s blog to determine that. And yes, my blog might seem obscure in the sense that I receive relatively few hits, but SkepticalScience forgets, or purposely overlooks, the fact that many of my posts are cross posted at WattsUpWithThat, where page views are about 6 times greater than those at SkepticalScience. So while my blog may seem obscure to SkepticalScience, many of my posts are not.

Now let’s look at the SkepticalScience claim, “Dr. Pielke appears to be becoming a Tisdale cheerleader despite the fundamental flaws in Tisdale’s weblog analyses”. This SkepticalScience comment illustrates their inability to grasp topics of discussion. My post that Roger Pielke Sr. linked was about the failings of the NCAR coupled climate model CCSM4, but SkepticalScience linked Tamino’s unsuccessful attempt to criticize my post 17-Year And 30-Year Trends In Sea Surface Temperature Anomalies: The Differences Between Observed And IPCC AR4 Climate Models. The post that Pielke Sr linked and the one Tamino attempted to critique are on different topics, yet SkepticalScience can’t fathom the difference. Also, I responded to Tamino’s critique with the post Tamino Misses The Point And Attempts To Distract His Readers, which explained the flaws in Tamino’s post. Miscomprehension and misdirection appear to be common faults of, and tactics with, anthropogenic global warming proponents.

SkepticalScience condemns Roger Pielke Sr for not referring to peer-reviewed papers. Yet in the preceding paragraph, SkepticalScience ironically attempts to refute a claim by Roger Pielke Sr with an illustration based on a blogger’s comment, not a peer-reviewed paper, and based on a dataset, BEST land surface temperature anomalies, that had not been peer reviewed at the time of their post. As of December 7, 2011, the papers associated with the Berkeley Earth Surface Temperature project are still listed as having been “Submitted For Peer Review (October 2011)”, but not peer reviewed.

And last, SkepticalScience closes this round of their criticisms with “Tisdale’s blog doesn’t even seem to support Pielke’s false claim of cooling temperatures.” SkepticalScience failed to recognize that my post supported the other portion of Roger Pielke Sr’s answer, the one SkepticalScience commented on with “Dr. Pielke has once again failed to make this crucial point in his answer, instead choosing to tell these high school students that the media and IPCC are disregarding the complexity of ‘natural effects’ (without providing any evidence to support this assertion).” [My bold] That is, my post was the evidence supporting Roger Pielke Sr’s comment that the IPCC fails to properly consider natural variables. Yet again, SkepticalScience has highlighted their inability to comprehend a topic of discussion, or has illustrated their need to mislead their readers, or both.

That last sentence pretty much sums up the small portion of the SkepticalScience post in which I have interest. Since it’s likely SkepticalScience employed the same tactics throughout the rest of the post, there is really no reason to look at any more of it. But I did want to note that SkepticalScience appears to have failed to consider something even more basic. Dr. Pielke Sr.’s opinions on climate change and global warming are well known. As noted earlier, all one needs to do is click on the Main Conclusions link on his blog to discover them. Or you can get a feel for his opinions just by reading his blog posts. Did SkepticalScience ever stop to consider that the high school students and/or teachers who invited Dr. Pielke Sr to respond to their questions were actually looking for replies that did not conform to IPCC doctrine? That possibly they were looking for opposing concepts—ideas from which they could explore other possibilities and learn?

Now let’s look at the major flaw in the SkepticalScience post.

SKEPTICALSCIENCE’S REPEATED CLAIM THAT CO2 DRIVES GLOBAL TEMPERATURES

-alternate subheading-

FORCED AND UNFORCED VARIATIONS IN GLOBAL SURFACE TEMPERATURE ANOMALIES OVER THE 20th CENTURY

SkepticalScience makes the claim multiple times in their Pielke Sr. Misinforms High School Students post that Carbon Dioxide is the dominant driver of Global Surface Temperatures over the past century. One instance was quoted above in the overview; another reads, “CO2 isn’t just ‘a warming influence,’ it’s the main warming driver.” [Their boldface and italics.] And that’s not unusual; many of the posts at SkepticalScience have an undercurrent of CO2 as the primary driver of surface temperatures. One would think SkepticalScience would have checked the global surface temperature record before groaning on and on with the same nonsensical mantra. Carbon dioxide is one of the many climate forcings that drive climate models, but climate models do such a poor job of replicating the global surface temperature record that they provide no support for the SkepticalScience assumptions about CO2.

If you were to search the SkepticalScience website for posts that refer to the IPCC, you’d get a couple hundred results. Search SkepticalScience for the website RealClimate and you’d find over fifty posts. So one could conclude that SkepticalScience uses the IPCC and RealClimate to support their claims. In the following discussion, I’ll use one quote from RealClimate and four quotes from the IPCC to support mine. And I’ll also support mine with graphs of the observational and model data used by the IPCC, data that are readily available to the public through the KNMI Climate Explorer. Anyone with a basic knowledge of spreadsheet software can, given a little time, replicate the graphs in this post.

The first quote is from a comment made by Gavin Schmidt (climatologist and climate modeler at the NASA Goddard Institute for Space Studies—GISS) on the thread of the RealClimate post Decadal predictions. At comment 49, dated 30 Sep 2009 at 6:18 AM, a blogger posed the question, “If a single simulation is not a good predictor of reality how can the average of many simulations, each of which is a poor predictor of reality, be a better predictor, or indeed claim to have any residual of reality?” Gavin Schmidt replied:

“Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean).”

That quote from Gavin Schmidt will serve as the basis for our use of the IPCC multi-model ensemble mean in the linear trend comparisons that follow the IPCC quotes. As I noted in my recent video The IPCC Says… Part 1 (A Discussion About Attribution), in the slide headed “What The Multi-Model Mean Represents”: basically, the Multi-Model (Ensemble) Mean is the IPCC’s best-guess estimate of the modeled response to the natural and anthropogenic forcings. In other words, as it pertains to this post, the IPCC model mean represents the (naturally and anthropogenically) forced component of the climate model hindcasts. (Hopefully, this preliminary discussion will suppress the comments by those who feel individual model runs need to be considered.)
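Gavin Schmidt’s point is easy to demonstrate numerically. The following Python sketch, using made-up numbers rather than any actual model output, builds forty synthetic “realizations” from a common forced trend plus independent noise and shows that averaging them recovers the forced component:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forced signal: a slow linear warming (deg C) over 100 "years".
# The 0.06 deg C/decade rate is illustrative only.
years = np.arange(100)
forced = 0.006 * years

# Each model realization = the same forced signal + its own random
# internal variability ("noise"), per Gavin Schmidt's description.
n_runs = 40
runs = forced + rng.normal(0.0, 0.15, size=(n_runs, years.size))

# Averaging the realizations cancels the uncorrelated noise, leaving an
# estimate of the forced component (the ensemble mean).
ensemble_mean = runs.mean(axis=0)

err_single = np.abs(runs[0] - forced).mean()   # typical error of one run
err_mean = np.abs(ensemble_mean - forced).mean()
print(err_mean < err_single)  # → True: the mean tracks the forced signal better
```

Any single run is a poor predictor, but the average of many runs isolates the shared forced signal, which is why the ensemble mean is used in the comparisons below.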

The first IPCC quote is from the IPCC AR4 Working Group 1 Summary for Policymakers. It’s from the fourth bullet-point paragraph under the heading of “Understanding And Attributing Climate Change” (page 10). The paragraph reads in full [my bold face]:

“It is likely that there has been significant anthropogenic warming over the past 50 years averaged over each continent except Antarctica (see Figure SPM.4). The observed patterns of warming, including greater warming over land than over the ocean, and their changes over time, are only simulated by models that include anthropogenic forcing. The ability of coupled climate models to simulate the observed temperature evolution on each of six continents provides stronger evidence of human influence on climate than was available in the TAR. {3.2, 9.4}”

The IPCC further clarified and reinforced that statement in Chapter 9 Understanding and Attributing Climate Change, under Heading of “9.4.1.2 Simulations of the 20th Century”, where they wrote:

“Figure 9.5 shows that simulations that incorporate anthropogenic forcings, including increasing greenhouse gas concentrations and the effects of aerosols, and that also incorporate natural external forcings provide a consistent explanation of the observed temperature record, whereas simulations that include only natural forcings do not simulate the warming observed over the last three decades.”

Cells A and B of the IPCC Figure 9.5 are presented here as Figures 1 and 2. For a detailed description of those graphs, see here. The IPCC obviously provided Figure 9.5 as evidence that observational and modeled data support their claims. Referring to Figure 1, they’ve shown instrument-based global land-plus-sea surface temperature anomalies from the HADCRUT3 dataset in black. They surrounded the variations in the observed data with the noise of the individual climate model ensemble members, the yellow spaghetti. And the IPCC has shown their best-guess estimate of the naturally and anthropogenically forced component of the rise in Global Surface Temperatures with the Multi-Model Ensemble Mean, the red curve, which mimics, at times, the long-term and short-term variations in Global Surface Temperature anomalies.

Figure 1


Figure 2

Those quotes and illustrations seem to support SkepticalScience’s CO2 mantra. We’ll see later whether they actually do. First, a few more quotes.

The third IPCC quote is from Chapter 3 Observations: Surface and Atmospheric Climate Change. Under the heading of “3.2.2.5 Consistency between Land and Ocean Surface Temperature Changes”, the IPCC states with respect to the surface temperature variations over the period of 1901 to 2005 (page 235):

“Clearly, the changes are not linear and can also be characterized as level prior to about 1915, a warming to about 1945, leveling out or even a slight decrease until the 1970s, and a fairly linear upward trend since then (Figure 3.6 and FAQ 3.1).”

The IPCC has loosely defined the warming and flat temperature periods. That fact should help minimize any claims that I’ve cherry-picked the start and end years I’ve used in the linear trend comparison graphs that I’ve presented later in the post. In Figure 3, I’ve marked up the IPCC Figure 9.5 to show the warming and flat temperature periods. Based on the HADCRUT3 data the IPCC used in the Figure 9.5, the years that mark the change from the flat temperature periods to warming and vice versa are 1917, 1944, and 1976.
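For readers who want to replicate the trend comparisons, a linear trend in deg C per decade is a straightforward least-squares fit. The sketch below uses an illustrative anomaly series shaped to mimic the IPCC’s description of the four periods; it is not actual HADCRUT3 data:

```python
import numpy as np

def decadal_trend(years, anomalies):
    """Least-squares linear trend in deg C per decade."""
    slope_per_year = np.polyfit(years, anomalies, 1)[0]
    return slope_per_year * 10.0

# Illustrative annual anomalies only, shaped to mimic the IPCC's description
# (flat, warming, flat, warming) -- NOT actual HADCRUT3 values.
years = np.arange(1901, 2001)
anoms = np.where(years <= 1917, -0.4,
        np.where(years <= 1944, -0.4 + 0.017 * (years - 1917),
        np.where(years <= 1976, 0.06,
                 0.06 + 0.017 * (years - 1976))))

# Trends over the flat and warming periods identified in the text.
for start, end in [(1901, 1917), (1917, 1944), (1944, 1976), (1976, 2000)]:
    mask = (years >= start) & (years <= end)
    print(start, end, round(decadal_trend(years[mask], anoms[mask]), 3))
```

The same function applied to the actual observations and to the model mean, over the same start and end years, reproduces the trend comparisons in the figures that follow.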

Figure 3

And last but not least is the fourth quote from the IPCC. In Chapter 9 Understanding and Attributing Climate Change, under the heading of “9.4.1.2 Simulations of the 20th Century”, the final paragraph (page 686) begins:

“Modelling studies are also in moderately good agreement with observations during the first half of the 20th century when both anthropogenic and natural forcings are considered…”

“Moderately good agreement”? The IPCC could not have written anything much vaguer than that while still putting the models in a positive light. But the obvious intent of the sentence is to reinforce how well the models agree with observations during the first half of the 20th Century. “Good” is the key word.

But scroll up to Figure 3. It sure does look like global surface temperatures warmed at a much faster rate than the model mean during the early warming period of 1917 to 1944. In other words, the trend of the forced component during that period was only about one-third of the actual trend of the rise in observed surface temperatures. To say that the models are in “moderately good agreement” with observations during that period is an overstatement, to put it nicely. And it also looks like the observations warm in the early and late warming periods at very similar rates. If anthropogenic forcings are the dominant cause of the rise in global temperatures, why didn’t temperatures in the late 20th Century warming period rise significantly faster than they did in the early warming period?

Before we document how poorly the models simulate the rise in temperature during the early warming period, an epoch acknowledged by the IPCC, we need to discuss…

THE MODEL DATA PRESENTED IN THIS POST

Climate modeling groups submitted the results of their modeling efforts to an archive for use by the IPCC for AR4. The archive is known as CMIP3, standing for Phase 3 of the Coupled Model Intercomparison Project, and the 20th Century hindcasts were known as 20C3M. For their comparison graph of observed global surface temperature and model outputs, Figure 9.5 cell a, the IPCC did not use all of the 20C3M models available in the CMIP3 archive. They used twelve: CCSM3, ECHO-G, GFDL-CM2.0, GFDL-CM2.1, GISS-EH, GISS-ER, INM-CM3.0, MIROC3.2(medres), MRI-CGCM2.3.2, PCM, UKMO-HadCM3, and UKMO-HadGEM1. Many of the listed climate models consisted of multiple ensemble members; for example, the GISS-ER hindcast of the 20th Century included 9 ensemble members. The same held true for the 12 models the IPCC excluded; some included multiple ensemble members. The CMIP3 climate models (20C3M) excluded from Figure 9.5 cell a are BCC CM1, BCCR BCM2.0, CGCM3.1 (T47), CGCM3.1 (T63), CNRM CM3, CSIRO Mk3.0, CSIRO Mk3.5, GISS AOM, FGOALS g1.0, IPSL CM4, MIROC3.2 (hires), and ECHAM5/MPI-OM. The IPCC provides further information about the models in their Supplementary Material for Chapter 9.

The IPCC notes why they excluded many of the models in Chapter 9 of AR4. Refer to the caption of Figure 9.5. There they write:

“Simulations are selected that do not exhibit excessive drift in their control simulations (no more than 0.2°C per century).”

However, there is another very obvious reason those models were excluded. It’s visible when the Multi-Model Ensemble Mean of the models that were included is compared to the mean of those that were excluded. Refer to Figure 4. The excluded models are missing the majority of the dips and rebounds associated with volcanic eruptions.

Figure 4

Assuming these models did not use volcanic aerosols as a forcing, the IPCC was wise to exclude them from the ensemble mean. Without the dip and rebound associated with the eruption of Agung in 1963, for example, the models that excluded the volcanic aerosols would not have produced a flat or slightly negative trend during the period of 1944 to 1976. See Figure 5. And that would have impacted how well the models matched the observations during that period.

Figure 5

Base Years: The IPCC used the base years of 1901 to 1950 for Figure 9.5. The same base years were used for the data presented in the following.
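For anyone replicating the graphs, re-baselining a series to those base years is a one-line operation; here is a minimal sketch with a hypothetical series:

```python
import numpy as np

# A minimal sketch of re-baselining: anomalies are expressed relative to the
# mean of the chosen base period (1901 to 1950, as the IPCC used for Figure 9.5).
# The input series here is hypothetical, not actual observational data.
years = np.arange(1901, 2001)
series = 0.005 * (years - 1901)          # anomalies on some other baseline

base = (years >= 1901) & (years <= 1950)
rebased = series - series[base].mean()   # subtract the base-period mean

# By construction, the rebased series averages to (near) zero over 1901-1950.
print(abs(rebased[base].mean()) < 1e-12)  # → True
```

Applying the same base period to both observations and model output keeps the comparisons on a common footing; it shifts each curve vertically without changing any of the linear trends discussed below.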

Last, the majority of the 20th Century hindcasts only provided data as far as 1999 or 2000, but the AR4 Figure 9.5 included data through 2005. The IPCC, as they explained, spliced on the corresponding projections from the Climate Models, where they were available. I have not spliced on the projection data in the following graphs to extend the model mean data through 2005, since our primary interest is the early warming period. I have ended the data in 2000. If that concerns you, refer to the graphs presented in The IPCC Says… – The Video – Part 1 (A Discussion About Attribution). My ending the data in 2000 in this post makes little difference to this discussion.

THE IPCC (CMIP3) DATA CONFIRMS THE SIGNIFICANT DIFFERENCE BETWEEN THE OBSERVED AND MODELED TEMPERATURE TRENDS DURING THE EARLY WARMING PERIOD

And the IPCC data confirms that the models can produce temperature trends similar to those observed during the mid-20th Century flat period and late warming period.

Figure 6 compares the linear trends of the observed global surface temperature anomalies and the forced component of the CMIP3 models used by the IPCC in Figure 9.5 of AR4 during the later warming period of 1976 to 2000. As discussed earlier, the multi-model ensemble mean represents the (naturally and anthropogenically) forced component of the outputs of the CMIP3 climate models the IPCC selected for use in their Chapter 9 hindcast comparisons. The IPCC described this period as having a “fairly linear upward trend,” and other than the volcano and ENSO wiggles, that’s a reasonable portrayal. The observations warmed at a slightly faster rate than the model mean during this warming period, but the linear trend of the forced component of the models is quite similar to the observed trend.

Figure 6

Now we’ll look at the period described by the IPCC as “leveling out or even a slight decrease until the 1970s.” See Figure 7. The forced component of the models and the observations both have flat to slightly negative linear trends for the period of 1944 to 1976.

Figure 7

Unfortunately for the IPCC, those latter two periods are the only two where the models seem to match the observations. The forced component of the climate models only rises at a rate that is about 32% of the observed trend in Global Surface Temperatures during the early warming period of 1917 to 1944. See Figure 8. The IPCC acknowledges that the early warming period exists, yet the forced component of their models (the model mean) does not produce warming at a rate that is anywhere near the observed rate. This illustrates that global surface temperatures are capable of warming at a rate that is three times higher than the forced component. Or, viewed another way, the unforced component of the rate of rise in global surface temperatures can be twice as high as the forced component, assuming the unforced component is equal to the difference between the linear trend of the model mean and the trend of the actual rise in surface temperature anomalies.
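That decomposition amounts to simple arithmetic; here it is as a short hedged sketch using the post’s approximate figures for the 1917-to-1944 early warming period:

```python
# Hedged arithmetic sketch of the decomposition just described, using the
# post's approximate figures for the 1917-to-1944 early warming period.
observed_trend = 0.174               # deg C/decade (observations, Figure 8)
forced_trend = 0.32 * observed_trend # model mean is ~32% of the observed trend

# If the unforced component is the residual between the observed trend and
# the forced component (the model mean trend), then:
unforced_trend = observed_trend - forced_trend
print(round(unforced_trend / forced_trend, 1))  # → 2.1, about twice the forced
```

So a forced component of one-third of the observed trend implies an unforced component roughly twice the size of the forced one, which is the "viewed another way" statement above.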

Figure 8

Figure 9 shows how the models fail to capture the cooling that took place in the decade-plus period of 1901 to 1917. The linear trend of the observations is negative during this period, but the trend of the forced component of the models is relatively high, second only to the trend of the late warming period.

Figure 9

ON THE IPCC’S CONSENSUS (OR LACK THEREOF) ABOUT WHAT CAUSED THE EARLY 20th CENTURY WARMING

When I presented the fourth quote from the IPCC above, it was incomplete. It ended with an ellipsis, indicating the sentence continued. That was done so that we could separately discuss the lack of consensus about what forced the rise in temperatures during the early warming period. The IPCC AR4 provides three competing sources for the rise in global surface temperatures during the first half of the 20th Century: solar, volcanic aerosols, and internal variability. The fourth IPCC quote reads in full [my boldface]:

“Modelling studies are also in moderately good agreement with observations during the first half of the 20th century when both anthropogenic and natural forcings are considered, although assessments of which forcings are important differ, with some studies finding that solar forcing is more important (Meehl et al., 2004) while other studies find that volcanic forcing (Broccoli et al., 2003) or internal variability (Delworth and Knutson, 2000) could be more important.”

Meehl et al (2004), “Combinations of natural and anthropogenic forcings in 20th century climate”, used the obsolete Hoyt and Schatten TSI reconstruction for their solar forcing. This disqualifies Meehl et al (2004) as a credible reference for what caused the warming during the first half of the 20th Century. Basically, the Hoyt and Schatten Total Solar Irradiance (TSI) reconstruction was created to explain the warming during that period, so it comes as no surprise that a climate study using the Hoyt and Schatten TSI reconstruction would find solar to be an important factor in the warming that took place then. Figure 10 is from the post IPCC 20th Century Simulations Get a Boost from Outdated Solar Forcings. (That post was also cross posted at WattsUpWithThat, where you can refer to the comments by Dr. Svalgaard, a solar physicist from Stanford University.) In Figure 10, the TSI reconstructions were scaled assuming the solar cycle amplitude for the last three complete cycles was approximately 1 watt/meter^2, and that those variations caused a 0.1 deg C variation in Global Surface Temperature. It is very obvious that the Hoyt and Schatten reconstruction also attempted to explain part of the flattening or decrease in temperature during the mid-20th Century. And further disqualifying solar forcing as the cause of the early 20th Century warming, current understandings of solar variability lean toward the possibility that there has been no change in solar minimum, as represented by the Svalgaard data; that is, there are indications that the upward trend in Total Solar Irradiance in the early part of the 20th Century does not exist. Additional confirmation of that follows in a few moments.

Figure 10

The IPCC represents that the findings of Broccoli et al (2003) were that volcanic forcings could be more important than solar in the modeling of global surface temperatures during the first half of the 20th Century. This is incorrect. Refer to the last sentence of the first paragraph of “7. Discussion” in Broccoli et al (2003) Twentieth-century temperature and precipitation trends in ensemble climate simulations including natural and anthropogenic forcing. It reads:

The addition of natural forcings bring the trends during these shorter periods into better agreement with the observed record, with solar forcing the key addition during the 1900–1940 period and volcanic forcing the more important contributor from 1940–1997.

Broccoli et al (2003) used Lean (2000) TSI data. However, Lean et al (2002), “The effect of increasing solar activity on the Sun’s total and open magnetic flux during multiple cycles: Implications for solar forcing of climate”, presented different findings. Unfortunately, I have not found a copy of Lean et al (2002) that isn’t hidden behind a paywall, so we’ll have to rely on the Wikipedia discussion of it on their Solar Variation webpage. [My boldface.]

In 2002, Lean et al.[60] stated that while “There is … growing empirical evidence for the Sun’s role in climate change on multiple time scales including the 11-year cycle”, “changes in terrestrial proxies of solar activity (such as the 14C and 10Be cosmogenic isotopes and the aa geomagnetic index) can occur in the absence of long-term (i.e., secular) solar irradiance changes … because the stochastic response increases with the cycle amplitude, not because there is an actual secular irradiance change.” They conclude that because of this, “long-term climate change may appear to track the amplitude of the solar activity cycles,” but that “Solar radiative forcing of climate is reduced by a factor of 5 when the background component is omitted from historical reconstructions of total solar irradiance. This suggests that general circulation model (GCM) simulations of twentieth century warming may overestimate the role of solar irradiance variability.”

A factor of 5.

That leaves us with natural variability. The IPCC cites Delworth and Knutson (2000), Simulation of Early 20th Century Global Warming. The final paragraph of Delworth and Knutson (2000) begins:

If the simulated variability and model response to radiative forcing are realistic, our results demonstrate that the combination of GHG forcing, sulfate aerosols, and internal variability could have produced the early 20th century warming, although to do so would take an unusually large realization of internal variability. A more likely scenario for interpretation of the observed warming of the early 20th century might be a smaller (and therefore more likely) realization of internal variability coupled with additional external radiative forcings.

In other words, Delworth and Knutson (2000) conclude that it is more likely the forcings could be incorrect, blaming the rise on some additional unknown forcings.

To sum up this section: the IPCC cited an inconclusive reference, Delworth and Knutson (2000); they cited a reference incorrectly, one that also used an obsolete Total Solar Irradiance dataset, Broccoli et al (2003); and they cited a reference that incorporated an obsolete Total Solar Irradiance reconstruction, Meehl et al (2004).

MAYBE IT’S THE INSTRUMENT-BASED OBSERVATIONS DATA

Another possibility is that the trend of the observations is too high during the early warming period of the 20th Century. This appears to have been one of the Hadley Centre’s considerations when they recently revised their Sea Surface Temperature data, because the early warming period trend of the HADSST3 data is lower than that of its predecessor, the HADSST2 data. The Hadley Centre’s SST dataset update was discussed in the post An Introduction To The Hadley Centre’s New HADSST3 Sea Surface Temperature Data. The HADSST3 data was presented in the two-part Kennedy et al (2011) paper Reassessing biases and other uncertainties in sea-surface temperature observations measured in situ since 1850, part 1: measurement and sampling uncertainties AND Reassessing biases and other uncertainties in sea-surface temperature observations measured in situ since 1850, part 2: biases and homogenisation.

Let’s see what impact the changes from HADSST2 to HADSST3 would have on the model-data comparisons. Keep in mind, the changes also significantly impacted the mid-century flat period. It would not be considered flat with the HADSST3 data, because the Hadley Centre corrected for the 1945 discontinuity found in the paper Thompson et al (2008), Identifying Signatures of Natural Climate Variability in Time Series of Global-Mean Surface Temperature: Methodology and Insights.

For the combined land plus sea surface observational data in the following four graphs I assumed the CRUTEM3 (land surface temperature anomaly) data represented 27% of the surface area of the globe, while the HADSST3 data made up the other 73%. (For the weighting refer to Figure 1 of Notes On The GISTEMP Ratio Of Land To Sea Surface Temperature Data.)
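That weighting can be sketched in a few lines of Python. The 27%/73% split is the assumption stated above, and the input values are hypothetical, not actual CRUTEM3 or HADSST3 anomalies:

```python
import numpy as np

# Sketch of the fixed-weight land/sea combination described above. The
# 27%/73% split is the assumption stated in the text; the input values are
# hypothetical, not actual CRUTEM3 or HADSST3 anomalies.
LAND_WEIGHT, SEA_WEIGHT = 0.27, 0.73

def combine(land_anom, sea_anom):
    """Weighted land-plus-sea surface temperature anomaly."""
    land_anom = np.asarray(land_anom, dtype=float)
    sea_anom = np.asarray(sea_anom, dtype=float)
    return LAND_WEIGHT * land_anom + SEA_WEIGHT * sea_anom

# Hypothetical single-year land and sea anomalies (deg C):
print(combine([0.50], [0.20]))  # → [0.281]
```

The same combination, applied year by year to CRUTEM3 and either HADSST2 or HADSST3, produces the two observational series compared in Figures 11 through 14.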

The HADSST3 data lowered the combined land-plus-sea surface temperature observations slightly during the late warming period, Figure 11, so that they are a little more in line with the forced component of the models during this period.

Figure 11

The correction of the 1945 discontinuity in the HADSST3 data, plus the additional changes during the period of 1944 to 1976, has produced a significant negative trend in the observations-based data. See Figure 12. The linear trend of the forced component of the models now exceeds the observed trend by a significant amount, approximately 0.046 deg C per decade. These HADSST3 corrections, assuming they themselves are correct, show that global surface temperatures are also capable of declining at a rate that is not projected by the forced component of the models.

Figure 12

And in Figure 13 we can see the impact of the HADSST3 corrections on the linear trends of the observational data for the early warming period of 1917 to 1944. Instead of the combined land-plus-sea surface temperature anomalies rising at a rate that was 3 times higher than projected by the models, the HADSST3 corrections have lowered the difference, so that the observations now rise at a rate that is still a significant 2.6 times higher. The difference between the observed and the model mean trends is still about 0.09 deg C per decade. That’s a major difference when one considers that the trend of the observations data (CRUTEM3+HADSST3) from 1901 to 2000 is only 0.057 deg C per decade.

Figure 13

And the HADSST3 corrections also decreased the difference between the observations and the forced component of the models during the period of 1901 to 1917. Refer to Figure 14. Instead of a 0.1 deg C per decade trend difference between the models and the HADSST2-based combined land-plus-sea surface temperature observations, the trend difference is 0.077 deg C per decade with HADSST3 corrections. But again, that difference is significant.

Figure 14

The HADSST3 Sea Surface Temperature anomaly data, when combined with the CRUTEM3 data, actually cause the forced component of the models to fall out of apparent agreement with the observations for another epoch. When HADSST2 serves as the Sea Surface Temperature data source in a combined land-plus-sea surface temperature dataset, the forced component of the models is in reasonable agreement with observations during the mid-20th Century flat temperature period and the late warming period. But with HADSST3 in its place, the forced component of the models only appears to agree with the observations during the late warming period. That might leave some with the impression that the climate models used by the IPCC in AR4 were tuned to match HADCRUT3 data during the last half of the 20th Century.

THE TRENDS DURING THE EARLY AND LATE WARMING PERIODS

The observed linear trends of the instrument-based global temperature anomaly data during the early and late warming periods are basically identical. Based on the HADCRUT data, the linear trend of the early warming period of 1917 to 1944 (Figure 8) was 0.174 deg C per decade, while the trend of the late warming period of 1976 to 2000 (Figure 6) was 0.176 deg C per decade. Yet the trends of the model mean, which represents the forced component of the IPCC models, are significantly different during the two warming periods. The trend of the forced component of the models during the latter warming period was about 2.9 times greater than the early warming period trend. In other words, the increase in anthropogenic Carbon Dioxide and other forcings during the late warming period caused the modeled trend to be 2.9 times higher in the late warming period than it was in the early warming period, but there was no change between early and late warming periods in the rate at which observed temperatures actually rose. Even more basically, the observed trends in global surface temperatures during the early and late warming periods, because they are the same, do not support the hypothesis of anthropogenic global warming.
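The per-decade trends quoted throughout these comparisons are ordinary least-squares slopes. A minimal sketch of that calculation, using a synthetic series rather than the actual HADCRUT data:

```python
# Ordinary least-squares slope of an annual anomaly series,
# expressed in deg C per decade as in the comparisons above.
def trend_per_decade(years, anoms):
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(anoms) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, anoms))
    den = sum((x - mean_x) ** 2 for x in years)
    return 10.0 * num / den  # deg C per year -> deg C per decade

# Synthetic series rising 0.017 deg C per year (0.17 deg C per decade),
# roughly the observed rate quoted for both warming periods.
years = list(range(1917, 1945))
anoms = [0.017 * (y - 1917) for y in years]
print(round(trend_per_decade(years, anoms), 3))  # -> 0.17
```

Running the same function over two different epochs of one dataset is all that is needed to reproduce the kind of trend-versus-trend comparison made in Figures 6 and 8.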

PART 2 OF THE FORCED AND UNFORCED VARIATIONS PORTION OF THIS POST

I would have liked to include additional comparisons, but this post is long enough as it is. In part 2, using the “ENSO fit” and “volcano fit” data from Thompson et al (2008), we’ll adjust the observations and the model mean data to determine their impact, if any, on the trend comparisons during the four epochs of the 20th Century. In another set of comparisons, we’ll replace the HADCRUT observations with the mean of HADCRUT3, GISS LOTI, and NCDC land-plus-ocean surface temperature anomaly datasets, just to assure readers the disparity between the models and the observations is not a function of the HADCRUT surface temperature observations dataset that was selected by the IPCC for use in their comparisons. And we’ll compare model projections and observations for global sea surface temperature anomalies, but we’ll extend both datasets back to 1880 to also see how well the forced component of the models matches the significant drop in global sea surface temperatures from 1880 to 1910. I’ll be using the average SST anomalies of HADISST, HADSST2, HADSST3, ERSST.v3b, and Kaplan. I would have preferred to go back to 1870 to capture the entire decrease, but the HADSST2 data is very sparse before 1880 and some of the models only extend back to 1880 as well.

For additional discussions that illustrate the failings of the climate models used by the IPCC refer to the posts here, here, here, here, here, here, here, here, and here.

CONCLUSIONS ABOUT THE FORCED VERSUS UNFORCED VARIATIONS IN GLOBAL SURFACE TEMPERATURES

As illustrated and discussed in numerous posts at Climate Observations, the climate models used by the IPCC in AR4 show little skill at recreating the variations in global surface temperature over the 20th Century. This post reinforces that fact.

The IPCC, in AR4, acknowledges that there were two epochs when global surface temperatures rose during the 20th Century and that they were separated by an epoch when global temperatures were flat, or declined slightly. Yet the forced component of the models the IPCC elected to use in their hindcast discussions rose at a rate that is only one-third the observed rate during the early warming period. This illustrates one of the many failings of the IPCC’s climate models, but it also indicates a number of other inconsistencies with the hypothesis that anthropogenic forcings are the dominant cause of the rise in global surface temperatures over the 20th Century. The failure of the models to hindcast the rise in global surface temperatures also illustrates that global surface temperatures are capable of varying without natural and anthropogenic forcings. Additionally, the observed trends of the early and late warming periods during the 20th Century are nearly identical, while the trend of the forced component of the models is nearly three times greater during the latter warming period than during the early warming period. The data therefore also indicate that the additional anthropogenic forcings that caused the additional trend in the models during the latter warming period had little to no impact on the rate at which observed temperatures rose during the two warming periods. In other words, the climate models do not support the hypothesis of anthropogenic forcing-driven global warming; they contradict it.

ABOUT: Bob Tisdale – Climate Observations

SOURCE

The HADCRUT3, CRUTEM3, and HADSST3 data and the individual ensemble members of the CMIP3 climate models used in the multi-model mean data in this post are available through the KNMI Climate Explorer. The HADCRUT3, CRUTEM3, and HADSST3 data are found at the Monthly observations webpage, and the model data is found at the Monthly CMIP3+ scenario runs webpage.


91 thoughts on “Tisdale schools the website "Skeptical Science" on CO2 obsession”

  1. Back in 2004, “The Scream” was stolen. Although the painting was recovered, the voices heralding the dangers of CO2 have been much softer.

  2. Yep, and yet another episode in the 19th Century with temperature rise at the same rate as the two rises in the 20th Century. Where’s the CO2 signal? Where’s the warming?
    =============

  3. My ambition is to one day get all the way through a Bob Tisdale post without getting confused, lost or distracted.
    I made it about 1/2 way through this one.

  4. A graph of this century’s warming (or lack thereof) would be interesting too. What is the observed temperature trend during the last decade?

  5. Brilliant job! But SkS would never understand all these investigations. They want neat pat one-liners.

  6. Gavin Schmidt replied:
    “Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will [be] uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean).”

    He might as well have said
    “So, one more time. The remastat kiloquad capacity is a function: square-root the intermix ratio times the sum of the plasma injector quotion.”
    I would have understood just as much 😐
    I do intend to read this post and understand everything… eventually.

  7. Fundamental facts:
    A model is not the thing. This is true no matter how good the model nor how noble the intentions of the modeler.
    Interestingly, neither are a set of measurements the thing. They are point samples of some selected attribute of the thing that are greatly dependent on many factors other than the thing itself.
    One hopes the model is good and that the measurements truly reflect an actuality of the thing but it is only a hope and not necessarily a fact.
    That we can’t think of anything else and that this is the best we can do is not a substitute for actually knowing and proving your case. Especially when one is advocating the takeover of the global economy, reforming it to the green agenda, and making every inhabitant of the earth subservient to your anxiety about the wild projections of your fevered imagination.
    These facts make your post as irrelevant and immaterial as the entire field of so called climate science. You might just as well argue that 2 + 2 does not equal n for each n not equal to 4. What a thing is, is vastly more important than what it is not. That is except knowing that climate science, as it is currently practiced, is not even in the same universe as actual science.

  8. Where is the evidence that forcing/feedback models as used by Climate Science have skill at prediction? What other time series have been successfully modeled by such an approach?
    For example, has the technique proved successful in predicting the tides, or for stock markets, or for animal populations, or any other time series?

  9. You have created another fabulous post, Mr. Tisdale.
    As regards the skepticalscience blog, they are neither; that is, they understand neither science nor skepticism. They are shameless propagandists for CAGW.
    When skepticalscience is able to get its head mounted on its shoulders, I hope they will celebrate the fact that their shameless promotion of CAGW brought forth this wonderful post by Mr. Tisdale. So, skepticalscience is not entirely worthless.

  10. Gavin appears to think that an ensemble is more accurate than a single model. The idea being that climate models are composed of signal and noise, and by averaging together the signals the noise will cancel out.
    Gavin has neglected one term. All models are composed of signal, plus noise, plus ERROR.
    Unlike noise, error is not random. Error results from a lack of knowledge. All climate scientists share in a common lack of knowledge, as there are aspects of the climate that are yet to be discovered. As a result all models have nearly identical signs for their error terms.
    Thus, when you average models together, the errors that result from a common lack of knowledge do not cancel out the way noise does. The error remains.

  11. To illustrate the difference between error and noise, consider that you lived in a world where fractions had not yet been discovered. The actual temperature is 14.4 C
    Because you don’t know about fractional amounts, most people in the world would record this as 14 C. The error would not average out; it would be consistently on the low side of the actual value. Due to noise, some people would record values other than 14 C, but that noise would tend to cancel out due to averaging and the law of large numbers. The systemic error that results from incomplete knowledge would remain.
    This is the situation with climate models. The assumption in the models is that error does not occur as a result of lack of knowledge, rather that it occurs in a random fashion, and thus can be cancelled out by averaging. Thus, the underlying assumption in climate model ensembles is that the science is settled, that there is no error, only noise.
    Using this approach we could then average together economic models and predict the stock market years in advance. In this fashion the governments of the world could take a small fraction of what is being spent on climate science, and invest it long term in the market and make a killing. This could then be used to pay off the debt, replace fossil fuel as an energy source, and to reduce taxes to zero.
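    [The rounding illustration in the comment above can be checked numerically. In this sketch (an editor's illustration with made-up noise parameters, not from the commenter), each observer reports the true value plus independent noise, truncated to a whole degree; averaging many reports removes the noise but not the shared truncation bias:]

```python
import random

random.seed(0)
TRUE_TEMP = 14.4  # actual temperature, deg C

# Each observer adds small independent noise, then truncates to a
# whole degree (the "world without fractions"). The truncation is a
# shared systematic error; the noise varies across observers.
reports = []
for _ in range(100_000):
    noisy = TRUE_TEMP + random.gauss(0.0, 0.2)
    reports.append(int(noisy))  # truncation, the shared error

mean_report = sum(reports) / len(reports)
print(round(mean_report, 2))  # well below 14.4: bias survives averaging
```

    [The average lands near 14, not 14.4: the independent noise cancels with more observers, but the systematic truncation error does not, which is the commenter's point about model ensembles.]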

  12. Mmm. Re Gavin’s comment. Creating an orchestra by assembling a large number of folks who can’t play a musical instrument doesn’t produce a symphony. It produces a cacophony.

  13. The 65-years-average of the sea surface temperature on wft shows no acceleration since the 1930s and a small cooling recently:
    http://www.woodfortrees.org/plot/hadsst2gl/mean:780/plot/hadsst2gl/mean:780/to:1935/trend/plot/hadsst2gl/mean:780/from:1935/trend
    The trend since 1935 is around 0.65ºC/century with no sign of acceleration. That’s 45 years of flat trend, half of what’s left in this century. Expect more of the same, 0.25ºC of warming for 2050 and 0.6ºC of warming for 2100.

  14. Look, this is the Skeptical Science website. John Cook and all that. That means it will be horrendously biased, deny any evidence which is not in keeping with its beliefs, cyber-bully anyone who tries to have a reasonable conversation, and will do more damage to the cause of climate science than all the skeptic sites put together. And this is from a self-confessed greeny left-wing warmist. I could weep at the antics of these idiots who can only function in catastrophe mode.

  15. Bob – don’t waste your time worrying too much about SS… they will be confined to the dustbin of history along with RC.
    They are all cut from the same cloth, Gavin-esque/Bob Ward/Joe Romm hand waving and shouting very loudly (or perhaps in Gavin’s case whispering twaddle until we all lose the will to live). As the Climategate emails show, they are prepared to say the sky is green and the grass is blue in public (I’m thinking of the fact that they still cannot admit Mann’s hockey stick is junk even after reading their own words saying as much in private).
    The war is over, these guys are just ghost dancers.

  16. As we have said for the umpteenth time: skeptics don’t need to worry, it ain’t warming due to AGW, so long term AGW is dying a slow death as people notice everywhere; latest in Britain, the surveys are showing drops in interest everywhere. All Foia has to do is release the real killer emails to start putting some of these people in jail for their crimes. LOL

  17. No, no, Cui Bono, you are describing me. Skeptical Science go for hundred liners, but they get lost and confused halfway through.
    ============

  18. ferd berple says:
    December 7, 2011 at 8:17 am
    ===========================================================================
    Ferd, that is one of the best posts I have read here.

  19. The best five liner I know is Pielke Pere, and the worst five liner I know is Pielke Fils, but he’s sound in wind and limb and coming on strong in the stretch.
    ===========

  20. Great video, Bob.
    As I see it, the simple, underlying issue Bob is highlighting is that climate models cannot explain why temperatures rose from 1916 – 1940ish. (Or fell from 1900 – 1916). If we don’t know that, we cannot be sure that the same factors were not in play 60 years later.

  21. Pielke Pere has the best noneliner today. Go look at his Tibetan Tree Rings. Absence of comment is epic. I get a kick out of wisdom from the lamas. Now, there’s a unifying theme.
    ============================

  22. Hello Bob Tisdale,
    As usual, you have provided a well-reasoned and documented defense of your work. Unfortunately, this is precisely why most of the world will not consider it to be “important”. We live in the age of WWF, RAW Smackdown events and “reality” TV. (Please excuse me if the “pro” wrestling terms are wrong, I really don’t care enough about them to research it.)
    Real reality is just too dry and boring to most people and they quickly lose interest. However, the manufactured “drama” and “augmented-reality” that the “mainstream” popular entertainment programming provides does not suffer the same problem. In order for your message to be palatable to the masses, you will need to “ramp-up” the excitement level.
    I believe that the following items are needed.
    (1) Whine about the unfairness of life.
    (2) Add action to your presentation. You need the figurative equivalent of a folding chair being smacked across someone’s back.
    (3) Drama. Mann does this by painting the skeptics as part of a global, well-funded conspiracy by Big Energy to confuse the “simple-minded public”.
    (4) Always use your “TV announcer” voice. How else will the public know when something is important enough to listen to?
    (5) Deflect your failings onto others. Again just follow Mann’s lead in the fine art of vilifying and demonizing others.
    (6) NEVER admit that you don’t have all the answers. After all, why would people listen to you if you *might* be wrong?
    (7) Present minimal graphs and data and explain them less. This way you still get to look scientific without the masses losing interest after the first 10 seconds.
    (8) REMEMBER that the end ALWAYS justifies the means. Ethics and moral issues be damned, after all you are trying to save the world from these lying basturds (the misspelling is intentional). All’s fair when the stakes are this high.
    Best Regards
    /WishingThisWasActuallyNotTrue

  23. So, a website/blog (SkepticalScience) is saying that one should not use a website/blog as a reliable reference.
    Anyone else here having a “LOL” moment over that?

  24. Bob,
    Excellent article. Love the graphs.
    Don’t let those prevaricators get to you. If you look down Anthony’s blogroll on the right sidebar you will see Skeptical Science listed as the only blog under the heading “Unreliable”. The reasons given:
    Due to (1) deletion, extension and amending of user comments, and (2) undated post-publication revisions of article contents after significant user commenting.
    John Cook deviously revises reader comments to mean something entirely different than what the writer intended, and he dishonestly alters the record after the fact without acknowledging that he made changes. He and his moderators do this continuously. They have zero ethics.
    John Cook is engaging in mendacious climate alarmist propaganda, not science. An accurate name for his blog would be “Pseudo-Skeptical Pseudo-Science”. IMHO Cook spreads lies for Fenton payola. But if it were not for their lies, the alarmist cult wouldn’t have much to say.

  25. Heh, I think I might have said:
    I scream,
    You scream,
    We all scream
    For Baby Ice.
    back in the day.
    ==========

  26. That the so badly misnamed ‘SkepticalScience’ is recommended as a go to site by the ‘Team’ tells you all you need to really know about its quality and how objective it is.

  27. ferd berple says
    “To illustrate the difference between error and noise, consider that you lived in a world where fractions had not yet been discovered. The actual temperature is 14.4 C…”
    Further imagine they are using tree rings to take the temperature.

  28. That was a somewhat amusing post (to me).
    Amusing because, as someone who’s been in scientific research for over 25 years (biophysics, not climate physics), it’s become obvious that scientists who believe in what they are doing and are passionate about what they are doing, when given a chance, will inundate you with more data and analysis and detail than anyone outside the field could hope to take in all at once when explaining things to you. To me, that’s the hallmark of a good scientist. They want to tell you too much, not too little. Because they really want you to understand.

  29. steveta_uk says:
    December 7, 2011 at 8:00 am
    My ambition is to one day get all the way through a Bob Tisdale post without getting confused, lost or distracted.
    “I made it about 1/2 way through this one.”
    I made it through the part above the fold then saw the remainder was over 30 pages and just scrolled through to the bottom. At least there weren’t any of those annoying flashing gifs.

  30. Fred, December 7, 2011 at 8:14 am
    I agree with your statement: “All models are composed of signal, plus noise, plus ERROR.” However, I think you may have misunderstood what Gavin wrote–or maybe I’m the one who has the misunderstanding. As I understand it, Gavin’s comment was a response to the following question:
    “If a single simulation is not a good predictor of reality how can the average of many simulations, each of which is a poor predictor of reality, be a better predictor, or indeed claim to have any residual of reality?”
    Gavin’s response was:
    Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will [be] uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean).”
    The potential misunderstanding involves the words simulation and realisation.
    The words simulation and realisation could refer to either (a) the software code that represents a model or (b) a single execution of a model with additive noise.
    If to the person who posed the question the word simulation meant a model and by the average of many simulations he/she meant the average of many different models, then Gavin’s insertion of “noise” into the discussion has no relevance. Call this interpretation “Interpretation A”.
    However, if to the person who posed the question the word simulation meant a single model and by the average of many simulations he/she meant the average of that single model corrupted by random samples of additive noise, then Gavin’s use of the word “noise” does have relevance. Call this interpretation “Interpretation B”.
    Since Gavin, not the person who asked the question, introduced the word “noise” into the discussion, I have to believe Gavin used “Interpretation B”. However, if true, then Gavin’s comment is meaningless. Specifically, to determine the “forced component” of the model in the absence of noise, you don’t have to execute the model multiple times (each time with a different noise sample) and average the results. All you have to do is execute the model without adding noise.
    Bottom line, I believe either (a) Gavin’s response had no relevance to the question asked, or (b) your comment to the effect “that the errors in different models do NOT have to cancel out” is correct. Either way, I believe Gavin made a hash of the question.

  31. 😉 Gotta’ agree Steveta_uk,
    Mr. Tisdale’s posts are hard to digest for us lesser mortals. Even though I managed to power through this one, something kept repeating in the back of my mind all the way to the end.
    How in heck did the RWP and MWP happen if, as SkS says, CO2 is the only thing forcing warming? I can only guess there were a whole lot of natural forces at work!

  32. Sorry, but the whole thing is a tempest in a teapot. I have yet to be convinced that there is any such thing as “average global temperature” or that it has any meaning. Aren’t we talking about a single degree or two C over decades? A change too small to be noticed without precise instruments over a short time frame, let alone “averaged” over decades.
    The WUWT blog graph I learned the most from changed the temperature scale on one of these “global average temperature” graphs to what a typical human would encounter over the course of a normal day. The line was flat.

  33. I always thought the best thing about John Cook in general (besides, as pointed out above, how he states as a blog that blogs should not be taken as reliable sources… lol) was how he talks about how a meteorologist (A. Watts) should not be taken as a reliable source on AGW when he is himself a cartoonist.
    I mean, I have heard some funny things in my life, but that probably takes the cake for the true hypocrisy of that man. It is like reality just completely went over his head and he never even noticed.

  34. David commenting on Gary: “What he’s saying basically amounts to ‘The noise cancels out.’”
    That noise cancels out is true for averaging truly random noise. Which is a shorthand way of saying that it is true for the averaging of a large number of unaccounted-for causes of small randomly varying strengths and directions whose sum trends toward zero. However, systematic errors will be mixed with actual signal and will thereby be indistinguishable. The technical term for that is “confounded”. Separating true signal from systematic error is not easy. In fact, in many cases it is well nigh impossible. As near as I can tell, systematic error IS the signal used by so-called climate science to justify their alarmist predictions. The systematic error is not accidental. It is inserted into the data stream with intentional purpose to “hide the decline”. The result is not close enough to reality to be false.

  35. Ferd: I agree, statistical averaging in no way cancels systemic errors from a dataset. This has long been one of the things that bothered me about the IPCC’s Model Mean method. IF there is a systemic error across ALL the models (oh, let’s say an assumption of all warming being attributable to CO2 with a specific sensitivity or range of sensitivities in general agreement), then the error will not average out, as it is evident across all models, varying only in amount not sign.
    I find it hard to believe that PhD Physicists would overlook this key element that is pounded home in any statistical mechanics course. In electronics we are always looking for systematic instrument errors in our data streams and frequently find them when we attempt different methods of making the same measurements. These climate science guys don’t seem to have an interest in locating the errors.

  36. I also gave up on Skeptical Science for the very same reasons as everybody else has listed here. Their dependence on constant parroting was matched by their contempt for anyone capable of original thinking, and if ever there was a subject that needs original thinkers in order to reach greater understanding of all it complexities, it must be climate itself. The Skeptical Science regulars form one of the greatest concentrations of the D-K effect around.

  37. Bob quotes
    “Dr. Pielke has once again failed to make this crucial point in his answer,”
    —————–
    I don’t think Bob understood this sentence. It says Pielke failed to include an explicit reference to CO2 in his answer. It does NOT say that Pielke considers CO2 to be unimportant.
    Yet Bob produces evidence that Pielke DOES consider CO2 to be important.
    ——————
    The context of Skeptical Science’s statement is their belief that Pielke pushes the logical fallacy that “since knowledge about something is uncertain in some way it means we know nothing about that something”.

  38. Reed Coray says:
    December 7, 2011 at 10:48 am
    “Specifically, to determine the “forced component” of the model in the absence of noise, you don’t have to execute the model multiple times (each time with a different noise sample) and average the results. All you have to do is execute the model without adding noise.”
    ===========================================================================
    This was exactly my thought when I read Gavin’s quote.
    So, Bob, my question is: Have Reed and I interpreted Gavin’s quote correctly?

  39. Bob quotes
    “Dr. Pielke has once again failed to make this crucial point in his answer, instead choosing to tell these high school students that the media and IPCC are disregarding the complexity of ‘natural effects’ (without providing any evidence to support this assertion).”
    —————-
    Bob seems not to have understood this statement. It is asserting that Pielke is asserting that the IPCC is disregarding effects other than CO2. Bob responds to this by claiming the IPCC does not give sufficient weight to non-CO2 influences on climate, in other words his own work.
    (Disregarding) and (not giving sufficient weight to in my opinion) are NOT the same thing. So Bob seems not to have noticed the difference in meaning.
    It seems to me Skeptical Science might have easily disproven Pielke’s claim by pointing to a significant amount of discussion in the report about influences other than CO2. Assuming such discussion exists.

  40. Bob Tisdale:
    You make a good defence of your true position, but I fail to understand why you have given publicity to the SkepticalScience blog, which is merely ‘climate porn’. All such web sites deserve to be starved of attention so they suffer a natural death (as has happened to RealClimate, which once was often cited but is now only quoted by AGW-extremists).
    And the much-discussed comment of the notorious Gavin Schmidt merely proves he lacks the intelligence to understand that average wrong is wrong.
    Richard

  41. I find it odd that SkS is criticising Prof Pielke Snr for not citing peer reviewed literature in that post. Anyone who reads Prof Pielke Snr’s blog will see he usually cites several such papers in each blog post. The post immediately before and the one immediately after both address peer reviewed climate papers.
    Well, if Mr Cook has suddenly decided to out-link Prof Pielke to peer reviewed literature I can only welcome his new policy of openness….
    Be warned Mr Cook, you may be dismayed what you find in the peer reviewed literature. I recall this year alone at least 5 such papers came out from 5 different research groups showing solar variance accounts for most of the temperature changes during the last century. CO2 is looking less and less like a significant driver.

  42. Bob quotes
    And last, SkepticalScience closes this round of their criticisms with “Tisdale’s blog doesn’t even seem to support Pielke’s false claim of cooling temperatures”
    ——————-
    I find it curious that the central criticism of Pielke’s statement is bypassed by Bob. It can’t be too hard to state explicitly that Tisdale and Pielke do not agree about the temperature trend, assuming that is a conclusion supported by the evidence of Tisdale and Pielke’s statements.
    Instead Tisdale gets distracted into some really long discussion about his own work.

  43. “The observed patterns of warming, including greater warming over land than over the ocean, …”
    The fact that the Team will admit that sea surface temperatures do not display as much warming as the land surface temperatures do suggests that maybe there is something wrong with the land surface temperature methods. Maybe UHI or something? Maybe if they removed the AGW forcing from the models, they would match the true warming that is closer to the sea surface temperature record.

  44. I have always wondered why the IPCC temperature spaghetti graphs have grey bars for 4 volcanic eruptions listed for the twentieth century but completely ignore the century’s largest eruption in 1912 – Katmai – Novarupta. 17 cubic km of stuff inserted into the atmosphere should not be that easy to ignore. You can kind of see a double dip in the global temps claimed around 1912 before they start increasing again a few years later. Cheers –

  45. Attention, Bob Tisdale:
    Unfortunately, I have not found a copy of Lean et al (2002) that isn’t hidden behind a paywall.
    This looks like it:
    http://academic.evergreen.edu/z/zita/research/07dynamo/articles/Lean2002.pdf

    GEOPHYSICAL RESEARCH LETTERS, VOL. 29, NO. 24, 2224, doi:10.1029/2002GL015880, 2002
    The effect of increasing solar activity on the Sun’s total and open magnetic flux during multiple cycles: Implications for solar forcing of climate
    J. L. Lean, Y.-M. Wang, and N. R. Sheeley Jr.

  46. Since Gavin, not the person who asked the question, introduced the word “noise” into the discussion, I have to believe Gavin used “Interpretation B”. However, if true, then Gavin’s comment is meaningless. Specifically, to determine the “forced component” of the model in the absence of noise, you don’t have to execute the model multiple times (each time with a different noise sample) and average the results. All you have to do is execute the model without adding noise.
    Correct. I made the same point when Bob posted the original piece at his blog.
    To argue that pseudo-randomness in a computer program simulates natural climate variability is nonsense. Random ‘noise’ is only ever random ‘noise’.
    For me this is perhaps the most bizarre aspect of the whole climate circus. Hardly anyone questions the nonsensical exercise by the IPCC of averaging out random noise.

  47. Oh, Sahib Gavin, your “by definition” enlightenment, it burns my unworthy eyes no matter whether they look in sample or out:
    “If a single simulation is not a good predictor of reality how can the average of many simulations, each of which is a poor predictor of reality, be a better predictor, or indeed claim to have any residual of reality?” Gavin Schmidt replied:
    “Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will [be] uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean).”

    Sahib! Do you say the sacred truth is such that if you define “internal variability” just so, as “noise” and “random”, then your equally anointed “forced component” will therefore have to be the driver of warming and climate, and so that reality will bend to your every will?
    Gavin Schmidt replied: “Such is so, Gunga. With our Warming Models, everything you don’t want to be real cancels out, including the Elephants in your own living room! And, Gunga, just for you, 0% APR error included, plus S&H! After all, our Models have also shown ‘experimentally’ in their holiest of clinical studies that it could work!”

  48. Gary Mount says:
    December 7, 2011 at 8:04 am
    Gavin Schmidt replied:
    “Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will [be] uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean).”
    =========
    I think that what Gavin Schmidt is saying is that if you average the results from all models, the errors in the models will cancel. He’s not all wrong. I think that a more correct formulation is “if you average the results from all models some of the errors in the models will probably cancel. The averaged result will include all of the stuff that is common to the models. The influence of stuff that is not common will be reduced.”
    i.e. If the models are not utterly worthless, the average of a number of models is likely to be a better predictor than any randomly selected model. However, keep in mind that it is perfectly possible for one or more of the models to be a better predictor than the average of the models.
    Schmidt is a mathematician and occasionally forgets to translate mathspeak into English — which he actually speaks pretty well most of the time.
    I’m neither a mathematician nor a temperature guy. Personally, I think averaging model results can be a useful tool, but I suspect that in this case it is not going to convert dross into silk. Garbage (probably) In, Garbage Out.
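
    The cancellation argument in this comment can be sketched with synthetic numbers (a toy illustration with invented values, not output from any actual GCM): give each “model” its own independent error around a common truth, and the ensemble mean usually beats the typical model, though nothing guarantees it beats the best one on a given draw.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.linspace(0.0, 1.0, 100)                  # synthetic "observed" series

# Ten toy "models": the truth plus independent noise for each model
models = truth + rng.normal(0.0, 0.3, size=(10, 100))
ensemble_mean = models.mean(axis=0)

# Root-mean-square error of each model, and of the ensemble mean
model_rmse = np.sqrt(((models - truth) ** 2).mean(axis=1))
mean_rmse = np.sqrt(((ensemble_mean - truth) ** 2).mean())

# Averaging cancels much of the *independent* noise, so the ensemble mean
# typically beats the average model -- but nothing guarantees it beats the
# best individual model on a particular draw.
print(mean_rmse < model_rmse.mean())
print(mean_rmse, model_rmse.min())
```

    Note the hedge built into the sketch: it only demonstrates cancellation of errors that are independent across models, which is exactly the assumption being questioned elsewhere in this thread.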

  49. LazyTeenager, regarding your December 7, 2011 at 12:59 pm comment, I have to ask you a very basic question, and please understand that I’m asking it in all seriousness. Is English a second language for you? Many of the bloggers here are multilingual, and one can never tell who’s who. So if you answer that question, I’ll know how to reply.
    In your December 7, 2011 at 1:13 pm comment, you noted that I bypassed the SkepticalScience complaint about Roger Pielke Sr’s statement that the global average temperature anomalies are cooling. I avoided it because it wasn’t worth the trouble. SkepticalScience did not attempt to determine what time span or dataset Dr Pielke Sr was discussing. They just went off with their boilerplate arguments. It was an easy choice for me to make. The rest of their arguments were so filled with flaws it was much more worthwhile for me to spend my time addressing them.
    Have a nice day.

  50. TomB says:
    December 7, 2011 at 11:19 am
    …. I have yet to be convinced that there is any such thing as “average global temperature” or that it has any meaning.
    ======
    AFAICS, average global temperature is just a metric that allows one to say that 1814 (the year of the last frost fair on the frozen Thames at London) was colder than 1934, and to give a rough indication of how much colder. One has to be very careful taking the number any further than that. I think that you should only compare global temperature differences to other global temperature differences. And even then a 0.2 degree difference between 1900 and 1902 may not be exactly the same as a 0.2 degree difference between 2000 and 2002.

  51. I wrote SkS off as a worthwhile site when Anthony showed how they operate with regard to editing other peoples comments after the fact.
    And even if they didn’t, dana1981 would be enough to poison that site. I’m embarrassed by the fact he resides up here in the NW. I was tempted to ask just what his experience in educating students was. Having been a tutor, grad school TA, Head Start volunteer and currently a science education mentor and Junior Achievement instructor, I’m guessing he doesn’t have a clue.

  52. I think that what Gavin Schmidt is saying is that if you average the results from all models, the errors in the models will cancel.
    But what the IPCC are doing is averaging different runs of the same model (as well as runs of different models).
    Climate models are an imperfect representation of the earth’s climate system and climate modelers employ a technique called ensembling to capture the range of possible climate states. A climate model run ensemble consists of two or more climate model runs made with the exact same climate model, using the exact same boundary forcings, where the only difference between the runs is the initial conditions. An individual simulation within a climate model run ensemble is referred to as an ensemble member. The different initial conditions result in different simulations for each of the ensemble members due to the nonlinearity of the climate model system. Essentially, the earth’s climate can be considered to be a special ensemble that consists of only one member. Averaging over a multi-member ensemble of model climate runs gives a measure of the average model response to the forcings imposed on the model.
    http://www.gisclimatechange.org/runSetsHelp.html
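
    The quoted description – identical model, identical forcings, only the initial conditions differ, yet the simulations diverge because of nonlinearity – can be illustrated with a toy nonlinear system (the logistic map; an invented stand-in for this sketch only, vastly simpler than any real GCM):

```python
# Two "ensemble members" of the same nonlinear system, differing only in
# their initial conditions, diverge completely -- loosely analogous to the
# ensemble behaviour described in the quoted passage.
def logistic_run(x0, r=3.9, steps=200):
    """Iterate the chaotic logistic map x -> r*x*(1-x) from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_run(0.400000000)   # member 1
b = logistic_run(0.400000001)   # member 2: initial condition nudged in the 9th decimal

early_gap = abs(a[5] - b[5])    # after a few steps the runs still agree closely
late_gap = max(abs(x - y) for x, y in zip(a[100:], b[100:]))  # later, no resemblance

print(early_gap, late_gap)
```

    The tiny initial difference is amplified step by step until the two runs are fully decorrelated, which is why each ensemble member traces out its own realization of the internal variability.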

  53. old engineer, you provided a quote from Reed Coray’s comment…
    “Specifically, to determine the “forced component” of the model in the absence of noise, you don’t have to execute the model multiple times (each time with a different noise sample) and average the results. All you have to do is execute the model without adding noise.”
    …then you asked, “This was exactly my thought when I read Gavin’s quote. So, Bob, my question is: Have Reed and I interpreted Gavin’s quote correctly?”
    First, Philip Bradley provided a quote from a webpage that is worthwhile to the discussion. See his December 7, 2011 at 2:57 pm comment:
    http://wattsupwiththat.com/2011/12/07/tisdale-schools-the-website-skeptical-science-on-co2-obsession/#comment-822298
    In the context of what Philip Bradley quoted, my understanding of what Gavin wrote was that he used the word “noise” instead of “variations of the individual ensemble members ‘due to the nonlinearity of the climate model system’”. Noise is much quicker to write. Gavin also used “realisation” instead of “ensemble member”. And referring to the last sentence of Reed Coray’s comment, the noise is not something that’s added. It’s a function of how the model operates.

  54. So from this can I assume :-
    When the models were found to be wildly inaccurate at “predicting” the documented past, the programmers bolted on some “patches” that attempted to reconcile output with observations but had “only they know what effect” on future trends, only to find that any success they had was really only an approximation with nearly as many flaws as before the patching, and this allows them to announce “it’s worse than we thought”?
    Or did I miss something ?
    Is the output from models the only evidence they have? Because the way I see it the only proof I have ever seen on SkepticalScience was quoted above – “If not [CO2], what led to this change in temperature?”
    I am so ashamed of my Nationality.

  55. Philip Bradley says:
    December 7, 2011 at 2:57 pm

    But what the IPCC are doing is averaging different runs of the same model (as well as runs of different models).
    =========
    It’s past my nap time and I had a small glass of wine with dinner, but averaging different runs of the same model seems a bit bizarre. I can’t see what that accomplishes other than to find out what the model says with no randomization (why not just turn off the randomization?) and get some probably useless statistics on the run to run variations. Maybe it’ll make more sense after I’ve had some sleep.

  56. ferd berple says:
    December 7, 2011 at 8:17 am
    Gavin appears to think that an ensemble is more accurate than a single model. The idea being that climate models are composed of signal and noise, and by averaging together the signals the noise will cancel out.
    Gavin has neglected one term. All models are composed of signal, plus noise, plus ERROR.
    Unlike noise, error is not random. Error results from a lack of knowledge. All climate scientists share in a common lack of knowledge, as there are aspects of the climate that are yet to be discovered. As a result all models have nearly identical signs for their error terms.
    Thus, when you average models together, the errors that result from a common lack of knowledge do not cancel out the way noise does. The error remains.
    …………………………………………………………………………………………………………………..
    Bingo. Well said ferd.
    Excellent post once again Bob Tisdale….. If the warmer catastrophists put in even half the time that you do comparing real observations with models, they would have come to the conclusion long ago that CO2 was not a significant driver of global temperature and that any anthropogenic effect on climate due to the production of anthropogenic CO2 is a massive uncertainty or insignificant at best.
    ….. but of course, there would have been no money or much in the way of accolades in saying that. Like sex, catastrophe sells.
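
    Ferd’s signal-plus-noise-plus-error distinction is easy to demonstrate with synthetic numbers (a toy sketch; the “shared error” value is invented purely for illustration): averaging shrinks the independent noise, but a bias common to every model passes straight through the ensemble mean.

```python
import numpy as np

rng = np.random.default_rng(1)
shared_error = 0.5            # bias common to every toy model ("shared lack of knowledge")
n_models, n_steps = 20, 10000

# Each toy model = truth (zero here) + the common error + its own independent noise
runs = shared_error + rng.normal(0.0, 1.0, size=(n_models, n_steps))
ensemble_mean = runs.mean(axis=0)

# The independent noise shrinks by roughly 1/sqrt(n_models)...
noise_left = ensemble_mean.std()
# ...but the shared error survives the averaging untouched.
bias_left = ensemble_mean.mean()

print(noise_left, bias_left)
```

    With 20 members the noise drops from 1.0 to roughly 0.22, while the 0.5 common error remains in the mean – which is the point of the comment: only errors that are uncorrelated across models cancel.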

  57. It helps to differentiate noise and randomness.
    Noise is unwanted signal + randomness (although true randomness doesn’t exist, hence pseudo-randomness)
    So when Gavin talks about noise, he means noise in the anthropogenic climate signal. By that definition natural variation is noise.
    The climate is a nonlinear (chaotic) system. Digital computer programs are deterministic. Therefore computer models simulate the nonlinear climate by introducing pseudo-randomness.
    So when Gavin says, ‘and a random realisation of the internal variability (‘noise’)’ what he means is that in the climate models, the introduced pseudo-randomness is approximately equal to natural climate variability, or what he believes is the level of natural climate variability.
    See this link for a good explanation of how the climate models work.
    http://www.climate4you.com/ClimateModels.htm

  58. Don K:
    At December 7, 2011 at 4:03 pm you say:
    “… but averaging different runs of the same model seems a bit bizarre. I can’t see what that accomplishes other than to find out what the model says with no randomization (why not just turn off the randomization?) and get some probably useless statistics on the run to run variations. Maybe it’ll make more sense after I’ve had some sleep.”
    Sorry to disappoint, but sleep will not help because it does not make any sense that the averaging will remove “noise”.
    It is known that all except at most one of the models is wrong because they each use a different value of ‘climate sensitivity’ and compensate for this by each using a different value of assumed ‘aerosol cooling’. In other words, each model emulates a different climate system. (On WUWT I have repeatedly explained this in detail with references to my own and Kiehl’s papers on the subject.)
    But the Earth only has one climate system, so at most only one of the models can be emulating the Earth’s climate system. The modelers know this but do not like to discuss it.
    So, all except at most one of the models – and probably all of the models – are wrong; i.e. they do not emulate the Earth’s climate system.
    As I said above, average wrong is wrong. But the averaging provides discussion which obscures the dirty secret that the models are known to be plain wrong.
    Richard

  59. Long post Bob, but informative. What I got is…
    GCM make CO2 the primary driver of climate.
    The ensembled mean of GCMs only works for one period, about 1975 to 2000. They fail the previous similar warming ending about 1945, they fail the cooling before 1917, and they are also not showing the flat line in the first 11 years of this century.
    They also fail to match SST which do not fail to follow the time periods mentioned.
    SSTs are driven by natural cycles, so it is likely natural cycles are a greater driver of temperatures than CO2.
    Am I close?

  60. “Dr. Pielke has once again failed to make this crucial point in his answer”, yes of course, he never delivers. That’s why man-made global warming is in such disrepute, its critics never deliver.
    Dr. Pielke has been delivering for a long time now, convincing more and more people that the man-made part is very small and nature dominates the process of either warming or cooling.
    And we have seen both in past century.

  61. SkS does the best it can to promote CO2 as the only cause of temperature fluctuation. The main goal of the site was to get a carbon bill passed in Australia. That has happened, and I wish Australia the best as they will need it.
    They are very biased in their views and do not accept any data that does not support their view.
    They are what they are.
    Thank you Mr. Tisdale for a well thought out, very plain to read posting. You have addressed several issues that are extremely important, documenting our lack of understanding of climate influences to date.
    Hopefully there are a few scientists left with an honest interest in this area of science who are looking for the unknowns. The purest example is our early 20th century warming. No one knows why we warmed with any degree of certainty. Let’s keep looking so that we find out.

  62. Gavin Schmidt replied: “Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will [be] uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean).”
    Translation: “Any one-shot pseudosimulation can be thought of as two components – a good signal we imagine we know how to calculate, and a meaningless number (‘noise’) that jumps up and down, just like real temperature data–woo-hoo! By definition, the random component will be different in every pseudosimulation, and when you average together many examples you get the forced component (i.e. the ensemble mean).”
    Trouble is, the assumed forcing equations are just opinion, and the random components may not be truly random. As Confucius say: “A thousand ladles of garbage do not make one bowl of soup.”

  63. Camburn says: “…The purest example is our early 20th century warming. No one knows why we warmed with any degree of certainty. Let’s keep looking so that we find out.”
    Kevin Trenberth is looking to see if this warming could be teleconnected to early 21st Century heat. He’s also looking for the latter behind the tapestry, or something like that.

  64. Bob-
    Thanks for your reply. I’m just beginning to learn about how the models work and what they are supposed to show. Your posts are always helpful.

  65. What this boils down to and Bob shows above is the Forcings Model of Climate Change can’t explain both the 1975 to 2000 period and the other periods.
    This failure invalidates (disproves) the Forcings Model, and it invalidates any predictions made using the Forcings Model, and hence invalidates the predictions from the climate models that are all based on this model.
    That is how science is supposed to work. But I am not holding my breath for climate science to work this way.

  66. David says: “Am I close?”
    Close. GCMs make natural and anthropogenic forcings the primary drivers of climate, but nature does not. CO2, according to SkepticalScience, is the primary anthropogenic forcing. I did not address the first 11 years of this century with this post, or Sea Surface Temperature processes, but I’ve covered them in others, so yes, the observations are once again diverging from the models, and yes, ocean processes, which the models don’t model very well, if at all, are what drives climate.

  67. I posted this to the SkepticalScience thread yesterday replying to Bert’s post about his bodily functions but the moderator deleted my post. They have zero credibility now. I used to be able to have limited debate but skeptics have been totally shut out. It’s 100% propaganda, disinformation, straw men and misdirection. They don’t even obey their own rules, never really did. They have become the name that they try to hang on all who dare to disagree with them.
    —-
    # 35 (Bert from Eltham) What does the disposal of your personal rubbish have to do with the topic of this thread? Where is the moderator and why is this post not subject to the rules in “Comments Policy”?
    I think you will find this link Tisdale’s take to be more on topic here. Even the BEST results, which are inherently flawed, showed no trend in the surface record for the last decade. Why does the surface temperature record follow IPCC scenario C more closely than scenario A or B? Scenario C assumed no CO2 emissions since the year 2000. Why the discrepancy between predictions and observations? How long before you will acknowledge that the supposed CO2 induced global warming (in the surface temperature record) has taken a pause?
    —-

  68. Bob claims
    And the IPCC has shown their best-guess estimate of the natural and anthropogenic-forced component of the rise in Global Surface Temperatures with the Multi-Model Ensemble Mean, the red curve, which mimics, at times, the long-term and short-term variations in Global Surface Temperature anomalies.
    —————
    I suspect that the models may be better at getting the long term trend right than they are at reproducing a semblance of the short term fluctuations. However even if they were good at this it will NOT be the case that the ensemble average will reproduce the short term fluctuations. The averaging process will suppress short term fluctuations. Just looking at the graph shows this.
    I don’t know where Bob got this from. Is it his own belief, the belief of the IPCC, some misinterpretation of the IPCC by Bob?
    In fact, rereading Bob’s sentence, I came to the conclusion that it is so contorted I am not sure a reliable interpretation of what Bob said is possible.

  69. LazyTeenager: This is in reply to your December 8, 2011 at 4:28 am comment. Since you don’t reply to responses and questions to you, the rest of us will have to assume that you are incapable of doing so. I’ll ask the question again: is English a second language for you? Because your comprehension skills are lacking.

  70. Frank says: “I posted this to the SkepticalScience thread yesterday replying to Bert’s post about his bodily functions but the moderator deleted my post.”
    I noticed your comment at SkepticalScience yesterday, but forgot to make a screen cap of it. When I went back this morning, it was gone. Thanks for confirming your comment was deleted.

  71. Don K says:
    December 7, 2011 at 2:38 pm
    “I think that what Gavin Schmidt is saying is that if you average the results from all models, the errors in the models will cancel. He’s not all wrong.”
    He is completely, utterly, moronically wrong if that’s what he wants to say. The definition of a chaotic system is that the error (the difference between modeled state and real state) grows exponentially over time; beyond all bounds.
    Averaging x instances (or “realisations”) gives you a noise reduction proportional to the square root of x, concerning the amplitude of the noise. You cannot ever hope to “cancel out the noise” when simulating a chaotic system over sufficiently large timesteps.
    Not even the IPCC climate scientists can be illiterate enough to not know this (it is modeling 101), yet they never quantify it. I presume deliberate deception.
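
    The square-root-of-x claim in this comment is standard statistics and can be checked directly with synthetic noise (a toy sketch with invented amplitudes, unrelated to any actual model output):

```python
import numpy as np

rng = np.random.default_rng(42)
sigma, n_runs, length = 1.0, 100, 5000

# Average n_runs independent noise realisations of amplitude sigma
noise = rng.normal(0.0, sigma, size=(n_runs, length))
averaged = noise.mean(axis=0)

# Residual amplitude is about sigma / sqrt(n_runs): reduced, never cancelled
residual = averaged.std()
expected = sigma / np.sqrt(n_runs)

print(residual, expected)
```

    Averaging 100 realisations leaves roughly a tenth of the original noise amplitude, consistent with the 1/sqrt(x) reduction the comment describes; note this says nothing about the exponential error growth of a chaotic system, which is a separate issue.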

  72. Richard S Courtney says:
    December 7, 2011 at 4:41 pm
    “As I said above, average wrong is wrong. But the averaging provides discussion which obscures the dirty secret that the models are known to be plain wrong.”
    Exactly. They don’t produce model ensemble runs to “cancel out noise” or anything like that but to hinder analysis; it is an exactly ANTISCIENTIFIC tactic in a war that is not about science but about power.

  73. Skeptical Science: “CO2 has indeed been the dominant cause of the change in surface temperature over the past century”.
    As I said to somebody today about this, I guess if you’re SkepticalScience, you throw out the models, until you need them.
    Here’s what GISS Model E’s AGW forcings look like.
    The purple line is net AGW forcings for the model. Which certainly does not support the statement… not even close… that “CO2 has indeed been the dominant cause of the change in surface temperature over the past century”.
    Now one can decide that this model is unreliable, but good luck with your arguments on “dangerous” AGW in this case. And where did “CO2 has indeed been the dominant cause of the change in surface temperature over the past century” get pulled from in that case?

  74. “I suspect that the models may be better at getting the long term trend right than they at reproducing a semblance of the short term fluctuations.”
    How much experience do you have building predictive models? The reality is the exact opposite of what you “suspect” — short term trends are easier to get right than long-term trends. This is partly because there’s more data per unit of time, and partly because there are fewer iterations that allow error to accumulate.

  75. I have never seen a more incomprehensible and self contradictory article.
    To describe refereed articles in reputable journals as the ‘paper de jour’ is quite pathetic.
    Here is a copy of what I posted on SKS.
    I wonder if Pielke would think that it would be quite acceptable for me to urinate and defecate in my or any other street. Or maybe his street or front yard. Could I just throw my rubbish anywhere I liked? Say out in the same street. Surely as I am only some minuscule proportion of any large group this would not matter? There is no linkage proved between my waste and any disease or annoyance. So if we all did this and put billions of tonnes of this waste into the environment it has absolutely no harmful effect!
    My body waste and rubbish are very nutritious foods for all sorts of living things such as plants and bacteria and will neatly add to productive food for all!
    In the middle ages people used to throw their rubbish into the street and waste ran down the gutters. It did them no harm in fact they flourished along with rats and other benign native animals. They even grew grapes in Scotland!
    We just do not need sanitation or clean water as it costs far too much. Anyway nature or providence will take care of it.
    There is no evidence that all this expenditure will make an iota of difference!
    All these people worried about ‘pollution’ are just anally retentive. They have a secret agenda for world domination by regulating our lack of anal retention.
    Bert

  76. Bob says
    In other words, the trend of the forced component during that period was only about one-third the actual trend of the rise in observed surface temperatures. To say that the models are in “moderately good agreement” with observations during that period is an overstatement, to put it nicely.
    —————
    Basing an assessment on a comparison of trends is a very dubious process. Trends exaggerate differences. Which seems to be what Bob wants to do.
    Given the level of random variation in both the temperature record and the ensemble mean the error bars on the trend for such short time periods will be large.
    A much more robust and fairer comparison would use an integrating mechanism, e.g. the sum of least squares of the differences between the actual curves.
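
    The claim that short-period trends carry large error bars is easy to verify with synthetic data (a toy sketch; the slope and noise values are invented for illustration): fit trends over short windows of a noisy series and watch the estimates scatter far more than the underlying slope.

```python
import numpy as np

rng = np.random.default_rng(7)
true_slope = 0.01                         # per "year", deliberately buried in noise
years = np.arange(100.0)
series = true_slope * years + rng.normal(0.0, 0.2, size=years.size)

# Trends fitted over short 10-"year" windows scatter widely...
short_trends = [np.polyfit(years[i:i + 10], series[i:i + 10], 1)[0]
                for i in range(0, 90, 10)]
spread = float(np.std(short_trends))

# ...while the full-period fit pins the slope down tightly
full_trend = np.polyfit(years, series, 1)[0]

print(spread, full_trend)
```

    The scatter of the short-window trends is several times the true slope, which is the sense in which trend comparisons over short periods exaggerate differences.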

  77. Albert Van Donkelaar says:
    “I have never seen a more incomprehensible and self contradictory article.”
    Since you admittedly are unable to comprehend this interesting article, it’s no surprise that it appears to you to be self contradictory. It’s not. And the lowlife scatological comments you made to Pseudo-Skeptical Pseudo-Science are appropriate for both you and that unreliable propaganda blog.

  78. Albert Van Donkelaar says: I have never seen a more incomprehensible and self contradictory article.
    “To describe refereed articles in reputable journals as the ‘paper de jour’ is quite pathetic.
    “Here is a copy of what I posted on SKS.”
    You imply that what you wrote at SkepticalScience was directed toward my post. You don’t state it, but you imply it. The problem with that is your comment at SkepticalScience was dated 11:38 AM on 24 November, 2011, more than a week before I published my post at my website, which was the same day that Anthony cross posted it here.
    Also, there’s nothing incomprehensible or self contradictory about my post. You’re just expressing your inability to grasp the topics being discussed. They’re rather simple actually.
    Have a nice day.

  79. It would sure be nice if, when reading a long post and comments piece, one could highlight sentences in order to find them again quickly. Could just copy and paste the whole thing to the word processor if one was doing serious work. But for everyday reading some app to highlight would be really handy.
    Would that be possible?

  80. Albert Van Donkelaar says:
    December 8, 2011 at 2:20 pm
    I wonder if Pielke would think that it would be quite acceptable for me to urinate and defecate in my or any other street.
    Nah, by the looks of your own “work”, the Occupy Wall Streeters are obviously your genetic kin, Baron. But try as you might, your poo is not going to stick around here. So it’s back to Pleasure Island for you and the other Donks….or in other words, Heee Hawwww!

  81. “Skeptical Science is maintained by John Cook, the Climate Communication Fellow for the Global Change Institute (GCI) at the University of Queensland. He studied physics at the University of Queensland, Australia… He is not a climate scientist.”
    “The GCI will contribute to evidence-based, progressive solutions to the problems of a rapidly changing world within the existing and projected frameworks of those problems: political, environmental, social, economic, technical.”
    Source: http://gci.uq.edu.au/AboutUs.aspx
    “GCI board member Ian Buchanan is Sydney based Senior Executive Adviser to Booz Allen Hamilton Australia… Prior to joining Booz Allen, Ian was Regional VP of SRI International (formerly Stanford Research Institute)… Ian is a frequent speaker at international conferences on issues of strategy, leadership, change management and value creation.”
    Source: http://gci.uq.edu.au/AboutUs/OurBoard.aspx
    http://projects.washingtonpost.com/top-secret-america/companies/booz-allen-hamilton/
    http://projects.washingtonpost.com/top-secret-america/companies/sri-international-inc/

  82. Like many climatological institutions, Climate “Science” is not dedicated to the scientific method of inquiry. Under this method, the claims that are made by a theory (aka model) are refutable (aka falsifiable) by comparison of predicted outcomes to observed outcomes of statistical events. Perennially, Climate “Science” fails to make this comparison.
