Tisdale on Climate Models Confirming Or Contradicting AGW

Part 2 – Do Observations and Climate Models Confirm Or Contradict The Hypothesis of Anthropogenic Global Warming?

Guest Post by Bob Tisdale

OVERVIEW

This is the second part of a two-part series. There are, however, two versions of part 1. The first part was originally published as On the SkepticalScience Post “Pielke Sr. Misinforms High School Students”, which was, obviously, a response to the SkepticalScience post Pielke Sr. Misinforms High School Students. That version was also cross posted at WattsUpWithThat as Tisdale schools the website “Skeptical Science” on CO2 obsession, where there is at least one comment from a blogger who regularly comments at SkepticalScience. The second version of the post (Do Observations And Climate Models Confirm Or Contradict The Hypothesis of Anthropogenic Global Warming? – Part 1) was freed of all references to the SkepticalScience post, leaving the discussions and comparisons of observed global surface temperatures over the 20th Century and of those hindcast by the climate models used by the Intergovernmental Panel on Climate Change (IPCC) in their 4th Assessment Report (AR4).

 

INTRODUCTION

The closing comments of the first part of this series read:

The IPCC, in AR4, acknowledges that there were two epochs when global surface temperatures rose during the 20th Century and that they were separated by an epoch when global temperatures were flat, or declined slightly. Yet the forced component of the models the IPCC elected to use in their hindcast discussions rose at a rate that is only one-third the observed rate during the early warming period. This illustrates one of the many failings of the IPCC’s climate models, but it also indicates a number of other inconsistencies with the hypothesis that anthropogenic forcings are the dominant cause of the rise in global surface temperatures over the 20th Century. The failure of the models to hindcast the early rise in global surface temperatures also illustrates that global surface temperatures are capable of varying without natural and anthropogenic forcings. Additionally, since the observed trends of the early and late warming periods during the 20th Century are nearly identical, and since the trend of the forced component of the models is nearly three times greater during the latter warming period than during the early warming period, the data also indicate that the additional anthropogenic forcings that caused the additional trend in the models during the latter warming period had little to no impact on the rate at which observed temperatures rose during the two warming periods. In other words, the climate models do not support the hypothesis of anthropogenic forcing-driven global warming; they contradict it.

In this post, using the “ENSO fit” and “volcano fit” data from Thompson et al (2009), the observations and the model mean data are adjusted to determine if there was any impact of volcanic aerosols and El Niño and La Niña events on the trend comparisons during the four epochs (two warming, two cooling) of the 20th Century. In another set of comparisons, the HADCRUT observations are replaced with the mean of HADCRUT3, GISS LOTI, and NCDC land-plus-ocean surface temperature anomaly datasets, just to assure readers the disparities between the models and the observations are not a function of the HADCRUT surface temperature observations dataset that was selected by the IPCC. And model projections and observations for global sea surface temperature (SST) anomalies will be compared, but the comparisons are extended back to 1880 to also see if the forced component of the models matches the significant drop in global sea surface temperatures from 1880 to 1910. For these comparisons, the average SST anomalies of five datasets (HADISST, HADSST2, HADSST3, ERSST.v3b, and Kaplan) are used.

But there are two other topics to be discussed before addressing those.

CLARIFICATION ON THE USE OF THE MODEL MEAN

Part 1 provided the following discussion on the use of the mean of the climate model ensemble members.


The first quote is from a comment made by Gavin Schmidt (climatologist and climate modeler at the NASA Goddard Institute for Space Studies—GISS) on the thread of the RealClimate post Decadal predictions. At comment 49, dated 30 Sep 2009 at 6:18 AM, a blogger posed the question, “If a single simulation is not a good predictor of reality how can the average of many simulations, each of which is a poor predictor of reality, be a better predictor, or indeed claim to have any residual of reality?” Gavin Schmidt replied:

“Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will [be] uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean).”

That quote from Gavin Schmidt will serve as the basis for our use of the IPCC multi-model ensemble mean in the linear trend comparisons that follow the IPCC quotes. As I noted in my recent video The IPCC Says… Part 1 (A Discussion About Attribution), in the slide headed “What The Multi-Model Mean Represents”, the Multi-Model (Ensemble) Mean is basically the IPCC’s best-guess estimate of the modeled response to the natural and anthropogenic forcings. In other words, as it pertains to this post, the IPCC model mean represents the (naturally and anthropogenically) forced component of the climate model hindcasts. (Hopefully, this preliminary discussion will suppress the comments by those who feel individual model runs need to be considered.)


Gavin Schmidt’s use of the word noise resulted in a number of discussions on the thread of the cross post at WattsUpWithThat. There, blogger Philip Bradley provided a quote from the National Center for Atmospheric Research (NCAR) Geographic Information Systems (GIS) Climate Change Scenarios webpage. The quote also appears on the NCAR GIS Climate Change Scenarios FAQ webpage:

“Climate models are an imperfect representation of the earth’s climate system and climate modelers employ a technique called ensembling to capture the range of possible climate states. A climate model run ensemble consists of two or more climate model runs made with the exact same climate model, using the exact same boundary forcings, where the only difference between the runs is the initial conditions. An individual simulation within a climate model run ensemble is referred to as an ensemble member. The different initial conditions result in different simulations for each of the ensemble members due to the nonlinearity of the climate model system. Essentially, the earth’s climate can be considered to be a special ensemble that consists of only one member. Averaging over a multi-member ensemble of model climate runs gives a measure of the average model response to the forcings imposed on the model. Unless you are interested in a particular ensemble member where the initial conditions make a difference in your work, averaging of several ensemble members will give you best representation of a scenario.”

So, Gavin Schmidt basically used “noise” in place of “variations of the individual ensemble members ‘due to the nonlinearity of the climate model system’”. Noise is much quicker to write. Gavin also used “realisation” instead of “ensemble member”.

In summary, by averaging all of the ensemble members of the numerous climate models available to them, the IPCC presented what they believe to be the “best representation of a scenario,” as created by the natural and anthropogenic forcings that served as input to the climate models. And again, as it relates to this post, the multi-model ensemble mean represents the (naturally and anthropogenically) forced component of the climate model hindcasts of the 20th Century.
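Gavin Schmidt’s point about averaging can be illustrated in a few lines of code. The following is a minimal sketch, not anything from the IPCC or GISS: the forced signal, noise level, and member count are all made up for illustration. It shows that averaging several realisations of “forced signal plus uncorrelated noise” recovers the forced component far more closely than any single realisation does.

```python
import numpy as np

rng = np.random.default_rng(0)

years = np.arange(1900, 2001)
# Made-up "forced component": a smooth 0.5 deg C warming over the century.
forced = 0.005 * (years - 1900)

# Each realisation = forced signal + independent internal variability ("noise").
n_members = 9
noise = rng.normal(scale=0.15, size=(n_members, years.size))
members = forced + noise

# Averaging across members suppresses the uncorrelated noise, leaving an
# estimate of the forced component (the ensemble mean).
ensemble_mean = members.mean(axis=0)

# The ensemble mean sits much closer to the forced signal than any single run.
rms_single = np.sqrt(np.mean((members[0] - forced) ** 2))
rms_mean = np.sqrt(np.mean((ensemble_mean - forced) ** 2))
```

With 9 members, the noise in the mean shrinks by roughly a factor of three (the square root of 9), which is the whole reason modelers average ensembles.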

NOTE ABOUT BASE YEARS

Anomalies are still referenced to the base years of 1901 to 1950. Those were the base years selected by the IPCC for their Figure 9.5 in AR4.
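For anyone reproducing these graphs, rebasing a series to the 1901-1950 base years is simply a matter of subtracting that period’s mean from the whole series. A minimal sketch with made-up anomaly values (not real data):

```python
import numpy as np

years = np.arange(1880, 2001)
# Illustrative anomaly values referenced to some other base period.
anoms = 0.004 * (years - 1880) - 0.3

# Rebase to 1901-1950: subtract that period's mean from the whole series.
base = (years >= 1901) & (years <= 1950)
rebased = anoms - anoms[base].mean()
```

By construction, the rebased series averages to zero over 1901-1950, which puts any two datasets rebased this way on a common footing.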

A MORE BASIC DESCRIPTION OF WHY THE INSTRUMENT TEMPERATURE RECORD AND CLIMATE MODELS CONTRADICT THE HYPOTHESIS OF ANTHROPOGENIC GLOBAL WARMING

In part 1, we established that the IPCC accepts that Global Surface Temperatures rose during two periods in the 20th Century, from 1917 to 1944 and from 1976 to 2000. The two warming periods were separated by a period when global surface temperatures remained relatively flat or dropped slightly, from 1944 to 1976. The IPCC in AR4 used the Hadley Centre’s HADCRUT3 global surface temperature data in their comparisons with the model hindcasts. During the two warming periods, the instrument-based observations of global surface temperatures rose at the same rate, approximately 0.175 deg C per decade, as shown in Figure 1.

Figure 1
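The trends quoted throughout this post are ordinary least-squares linear trends, converted to deg C per decade. As a sketch, here is that calculation applied to a synthetic series built to warm at exactly 0.175 deg C per decade; the values are illustrative, not the HADCRUT3 observations.

```python
import numpy as np

# Synthetic annual series warming at 0.175 deg C per decade over 1917-1944
# (illustrative values only, not the HADCRUT3 data).
years = np.arange(1917, 1945)
temps = 0.0175 * (years - 1917)  # 0.0175 deg C per year

# Least-squares slope in deg C/year, converted to deg C/decade.
slope_per_year = np.polyfit(years, temps, 1)[0]
trend_per_decade = 10.0 * slope_per_year
```

The same two lines, run over 1976-2000 instead of 1917-1944, give the late-period trend; the comparisons in this post are ratios of slopes computed this way.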

Climate Models, on the other hand, do not recreate the rate at which global surface temperatures rose during the early warming period. They do well during the late 20th Century warming period, but not the early one. Why? Because Climate Models use what are called forcings as inputs in order to recreate (hindcast) the global surface temperatures of the 20th Century. The climate models attempt to simulate many climate-related processes, as they are programmed, in response to those forcings, and one of the outputs is global surface temperature. Figure 2, as an example, shows the effective radiative forcings employed by the Goddard Institute for Space Studies (GISS) for its climate model simulations. Refer to the Forcing in GISS Climate Model webpage.

Figure 2

GISS also provides the data that represent the Global Mean Net Forcing of all of those individual forcings. Shown again as an example in Figure 3, there is a significant difference between the trends of the forcings during the early and late warming periods. (Note: GISS has updated the forcing data recently, so the data may have been slightly different when the simulations were performed for CMIP3 and the IPCC’s AR4.)

Figure 3

The GISS Model-ER is one of the many climate models submitted to the archive called CMIP3, from which the IPCC drew its climate simulations for AR4. Figure 4 shows the individual ensemble members and the ensemble mean for the GISS Model-ER global surface temperature hindcasts of the 20th Century. Basically, GISS ran their climate model 9 times with the climate forcings shown above, and those model runs generated the 9 global surface temperature anomaly curves illustrated by the ensemble members. Also shown are the trends of the GISS Model-ER ensemble mean during the early and late warming periods. The difference between the trends of the model ensemble mean during the early and late warming periods is not as great as it was for the forcings, but the trend of the ensemble mean (the forced component of the GISS Model-ER) during the late warming period is about twice the trend for the early warming period. According to the observations in Figure 1, however, they should be the same.

Figure 4

For their global surface temperature comparisons in Chapter 9 of AR4, the IPCC included the ensemble members from 11 more climate models in its model mean. And as illustrated in Figure 5, there is a significant disparity between the trends of the model mean during the early warming period and the late warming period. The ensemble mean during the late warming period warmed at a rate that is about 2.9 times faster than the trend of the early warming period, but the two trends should be the same.

Figure 5

So in summary, for our examples, the net forcings of the GISS climate models rose at a rate that was approximately 3.8 times higher during the late warming period than during the early warming period, as shown in Figure 3. Let’s assume, still for the sake of example, that the model forcings for the other models were similar to those used by GISS. Then the increased trend in the forcings during the late warming period caused the model mean, Figure 5, to warm almost 2.9 times faster in the late warming period than during the early warming period. But in the observed, instrument-based data, Figure 1, global surface temperatures during the early and late warming periods warmed at the same rate. This clearly indicates that, while the trends of the models during the early and late warming periods are dictated by the natural and anthropogenic forcings that serve as inputs to them, the rates at which observed temperatures rose are not dictated by the forcings. And as discussed in part 1, under the heading of ON THE IPCC’S CONSENSUS (OR LACK THEREOF) ABOUT WHAT CAUSED THE EARLY 20th CENTURY WARMING, the IPCC failed to provide a suitable explanation for why the models failed to rise at the proper rate during the early warming period. The bottom line: the differences between the modeled and the observed rises in global surface temperatures during the two warming periods acknowledged by the IPCC actually contradict the hypothesis of anthropogenic global warming.

ENSO- AND VOLCANO-ADJUSTED OBSERVATIONS AND MODEL MEAN GLOBAL SURFACE TEMPERATURE DATA

I’ve provided this discussion in case there are any anthropogenic global warming proponents who are thinking the additional wiggles in the instrument data caused by El Niño and La Niña events are causing the disparity between the models and observations during the early warming period. I’m not sure why anyone would think that would be the case, but let’s take a look anyway. We’ll also adjust both datasets for the effects of the volcanic aerosols, and we’ll be adjusting the model and observation-based datasets for the volcanoes by the same amount. To make the El Niño-Southern Oscillation (ENSO) and volcanic aerosol adjustments, we’ll use the “ENSO fit” and “Volcano fit” datasets from the Thompson et al (2009) paper “Identifying signatures of natural climate variability in time series of global-mean surface temperature: Methodology and Insights.” Thompson et al (2009) used HADCRUT3 global surface temperature anomalies, just like the IPCC in AR4, so that’s not a concern. Thompson et al (2009) described their methods as:

“The impacts of ENSO and volcanic eruptions on global-mean temperature are estimated using a simple thermodynamic model of the global atmospheric-oceanic mixed layer response to anomalous heating. In the case of ENSO, the heating is assumed to be proportional to the sea surface temperature anomalies over the eastern Pacific; in the case of volcanic eruptions, the heating is assumed to be proportional to the stratospheric aerosol loading.”

The Thompson et al method assumes global temperatures respond proportionally to ENSO, but even though we understand this to be wrong, we’ll use the data they supplied. (More on why this is wrong later in this post.) Thompson et al (2009) were kind enough to provide data along with their paper. The instructions for use and links to the data are here.
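Mechanically, the adjustments that follow are simple subtractions: the “ENSO fit” and “Volcano fit” series are removed from the observations, while only the “Volcano fit” is removed from the model mean (ENSO-like wiggles largely cancel in the multi-model mean). A sketch with placeholder arrays standing in for the real HADCRUT3 and Thompson et al (2009) series; the array lengths and values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_months = 120  # placeholder length; the real series are monthly

# Placeholder arrays standing in for HADCRUT3 observations, the multi-model
# mean, and the Thompson et al (2009) "ENSO fit" and "Volcano fit" series.
observed = rng.normal(scale=0.1, size=n_months)
model_mean = rng.normal(scale=0.1, size=n_months)
enso_fit = rng.normal(scale=0.05, size=n_months)
volcano_fit = rng.normal(scale=0.05, size=n_months)

# Observations are adjusted for both ENSO and volcanic aerosols; the model
# mean is adjusted for volcanic aerosols only, by the same amount.
adjusted_obs = observed - enso_fit - volcano_fit
adjusted_model = model_mean - volcano_fit
```

The trend comparisons in Figures 6 through 9 are then made on `adjusted_obs` and `adjusted_model` rather than on the raw series.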

During the late warming period, Figure 6, and the mid-century “flat temperature” period, Figure 7, the trends of the volcano-adjusted Multi-Model Ensemble Mean (the forced component of the models) are reasonably close to the trends of the ENSO- and volcano-adjusted observed global surface temperature anomaly data. During the late warming period, Figure 6, the models slightly underestimate the warming, and during the mid-century “flat temperature” period, Figure 7, the models slightly overestimate the warming. However, as with the other datasets presented in Part 1, the most significant differences show up in the early warming period and the early “flat temperature” period. The trend of the ENSO- and volcano-adjusted global surface temperature anomalies during the early warming period, Figure 8, is about 3.3 times higher than the trend of the volcano-adjusted model data. And during the early “flat temperature” period, Figure 9, the trend of the observation-based data is slightly negative, while the model mean shows a significant positive trend.

Figure 6


Figure 7


Figure 8


Figure 9


Adjusting the data for ENSO events and volcanic eruptions does not help to cure the ills of the climate models.

USING THE AVERAGE OF GISS, HADLEY CENTRE, AND NCDC GLOBAL SURFACE TEMPERATURE ANOMALY DATA

The IPCC chose to use HADCRUT3 Global Surface Temperature anomaly data for their comparison graph of observational data and model outputs in Chapter 9 of AR4. If we were to replace the HADCRUT3 data with the average of HADCRUT3, GISS Land-Ocean Temperature Index (LOTI) and NCDC Land+Ocean Temperature anomalies, would the model mean better agree with the observations? The trends of the late warming and mid-century “flat temperature” epochs still agree well, and trends of the early warming and early “flat temperature” periods still disagree, as illustrated in Figures 10 through 13.
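Combining the three observational datasets is nothing more exotic than an unweighted average of series that share the same base years. A sketch with synthetic stand-ins for the three datasets (the values are invented; the real series would come from the Hadley Centre, GISS, and NCDC):

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1900, 2001)
signal = 0.006 * (years - 1950)

# Synthetic stand-ins for HADCRUT3, GISS LOTI, and NCDC anomalies, all
# assumed to already be on the same 1901-1950 base years.
hadcrut3 = signal + rng.normal(scale=0.05, size=years.size)
giss_loti = signal + rng.normal(scale=0.05, size=years.size)
ncdc = signal + rng.normal(scale=0.05, size=years.size)

# The combined "observations": a simple unweighted mean of the three.
obs_mean = np.mean([hadcrut3, giss_loti, ncdc], axis=0)
```

The trend comparisons are then repeated against `obs_mean` in place of HADCRUT3 alone.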

Figure 10


Figure 11


Figure 12


Figure 13


So the failure of the models is not dependent on the HADCRUT data.

SEA SURFACE TEMPERATURES – THE EARLY DIP AND REBOUND

When I first started to present Sea Surface Temperature anomaly data at my blog, I used the now-obsolete ERSST.v2 data, which was available at that time through the NOAA NOMADS website. What I always found interesting was the significant dip from the 1870s to about 1910, Figure 14, and then the rebound from about 1910 to the early 1940s. Global Sea Surface Temperature anomalies in the late 1800s were comparable to those during the mid-20th Century “flat temperature” period.

Figure 14

NOTE: I wrote a post about that dip and rebound back in November 2008. The only reason I refer to it now is to call your attention to the first blogger to leave a comment on that thread. That’s John Cook of SkepticalScience. His explanations about the dip and rebound didn’t work then, and they don’t work now. But back to this post…

That dip and rebound exists to some extent in all current Sea Surface Temperature anomaly datasets, more so in the ERSST.v3b and HADSST2 datasets, and less so in the HADSST3, HADISST, and Kaplan datasets. Refer to Figure 15.

Figure 15

So how well does the model mean of the forcing-driven climate models compare with the long-term variations in Global Sea Surface Temperature anomalies? We’ll use the average of the long-term Sea Surface Temperature datasets that are available through the KNMI Climate Explorer, excluding the obsolete ERSST.v2. The datasets included are ERSST.v3b, HADISST, HADSST2, HADSST3, and Kaplan. And you will note in the graphs that the number of models has decreased from 12 to 11. TOS (Sea Surface Temperature) data for the MRI CGCM 2.3.2 was not available through the KNMI Climate Explorer. This reduces the ensemble members by 5, or about 10%, which should have little impact on these results, as you shall see. And you’ll also note that the years of the changeover from cooling to warming epochs and vice versa are different with the sea surface temperature data. The changeover years are 1910 (instead of 1917), 1944, and 1975 (instead of 1976).

As one would expect, the forced component of the models (the model mean) does a reasonable job of hindcasting the trend in sea surface temperatures during the late warming period, Figure 16, and also during the mid-century “flat temperature” period, Figure 17. The trend of the model mean during the early warming period, Figure 18, however, is only about 33% of the observed trend in the mean of the global sea surface temperature anomaly datasets. That failing is similar to the one seen in the land-plus-sea surface temperature data. And then there’s the early cooling period, the dip of the dip and rebound, Figure 19. The model mean shows a slight warming during that period, while the observed Sea Surface Temperature anomaly mean has a significant negative trend. Yet another failing of the models.

Figure 16


Figure 17


Figure 18


Figure 19


THE IMPACT OF THE 1945 DISCONTINUITY CORRECTION

If you scroll up to the Sea Surface Temperature dataset comparison in Figure 15, you’ll note that the HADSST3 data is the only Sea Surface Temperature anomaly dataset that has been corrected for the 1945 discontinuity, which was presented in the previously linked paper Thompson et al (2009). Raising the Sea Surface Temperature anomalies during the initial years of the mid-century flat temperature period has a significant impact on the observed linear trend for that epoch. And as one would expect, the trend of the model mean no longer comes close to agreeing with the HADSST3 data during the mid-century “flat temperature” period, because the observed temperature anomalies are no longer flat, as illustrated in Figure 20.

Figure 20

ENSO INDICES DO NOT REPRESENT THE PROCESS OF ENSO

Earlier in the post I noted that Thompson et al (2009) had assumed global temperatures respond proportionally to ENSO, and that that assumption was wrong. I have been illustrating that fact in numerous ways in dozens of posts over the past (almost) three years. The most recent discussions appeared in the following two-part series that I wrote at an introductory level:

ENSO Indices Do Not Represent The Process Of ENSO Or Its Impact On Global Temperature

AND:

Supplement To “ENSO Indices Do Not Represent The Process Of ENSO Or Its Impact On Global Temperature”

DO OBSERVATIONS AND CLIMATE MODELS CONFIRM OR CONTRADICT THE HYPOTHESIS OF ANTHROPOGENIC GLOBAL WARMING?

Just in case you missed the obvious answer to the title question of this two-part post, the answer is that they contradict the hypothesis of anthropogenic global warming. The climate models presented by the IPCC in AR4 show how global surface temperatures should have risen during the 20th Century if surface temperatures were driven by natural and anthropogenic forcings. As illustrated in Figure 5, the climate models show that surface temperatures during the late 20th Century warming period, from 1976 to 2000, should have risen at a rate approximately 2.9 times higher than the rate at which they warmed during the early warming period of 1917 to 1944. But, as shown in Figure 1, the observed rates at which global temperatures rose during the two warming periods of the 20th Century were the same, at approximately 0.175 deg C/decade.

CLOSING

In this post we illustrated that…

1. regardless of whether we adjust global surface temperature data for ENSO and volcanic aerosols,

2. regardless of whether we use the global surface temperature dataset presented by the IPCC in AR4 (HADCRUT3) or use the average of the GISS, Hadley Centre, and NCDC datasets, and

3. regardless of whether we examine global land-plus-sea surface temperature data or only global sea surface temperature data

…the model mean (the forced component) of the coupled ocean-atmosphere climate models selected by the IPCC for presentation in their 4th Assessment Report CANNOT reproduce:

1. the rate at which global surface temperatures fell during the early 20th Century “flat temperature” period, or

2. the rate at which global surface temperatures warmed during the early 20th Century warming period.

The model mean (the forced component) of those same climate models CANNOT reproduce the rate at which global surface temperatures fell during the mid-20th Century “flat temperature” period if the Sea Surface Temperature data during that period have been corrected for the “1945 discontinuity” discussed in the paper Thompson et al (2009).

As illustrated and discussed in parts 1 and 2 of this post, global surface temperatures can obviously warm and cool over multidecadal time periods at rates that are far different from those of the forced component of the climate models used by the IPCC. This indicates that those variations in global surface temperature, which can last for two or three decades, or longer, are not dependent on the forcings that were prepared solely to make the climate models operate. What, then, is the purpose of using those same models, based on assumed future forcings, to project climate decades and centuries into the future? The forcings-driven climate models have shown no skill whatsoever at replicating the past, so why is it assumed they would be useful when projecting the future?

ABOUT: Bob Tisdale – Climate Observations

SOURCES

NOTE: The Royal Netherlands Meteorological Institute (KNMI) recently revised the security settings of their Climate Explorer website. You will likely have to log in or register to use it. For basic information on the use of this valuable tool, refer to the post Very Basic Introduction To The KNMI Climate Explorer.

The sea surface temperature and combined land+sea surface temperature datasets are found at the Monthly observations webpage of the KNMI Climate Explorer, and the model data is found at their Monthly CMIP3+ scenario runs webpage.

For the Global HADSST2 data, I used the data available through the UK Met Office website, specifically the annual global time-series data that is found at this webpage, then changed the base years for the anomalies to 1901-1950.


74 thoughts on “Tisdale on Climate Models Confirming Or Contradicting AGW”

  1. One possible take is that the list of “forcing” candidates is incomplete. A “failure of imagination”??

    Another possible (or complementary) take is that the mechanisms in the models do not replicate real-world physics adequately. Scaling and grid size come to mind as possible mismatch causes.

  2. Bob Tisdale, single-handedly, makes more sense to me than all of the combined efforts of the thousands of scientists involved in the AGW project. He doesn’t prove conclusively that the theory is wrong – only time can do that – but he certainly exposes the inadequacies of the logic and the tools they use.

  3. Let’s sum it up: the whole orthodox climate science is just wanking on the 1975-2005 warm AMO trend, parametrizing their cutting-edge-state-of-art-coupled-playstation-models on it (“look, it fits well”) and camouflaging previous climatic history with a straight hockey stick.

  4. I have a problem with the models that you don’t discuss. They all deal in the global temperature anomaly, which is a statistic, not a physical property of the earth’s atmosphere. A realistic physical model should predict actual physical quantities, not a made-up statistic that has no physical meaning.

    How do all these models stack up when they are asked to reproduce physical temperature maps of the climate? I’ve seen a few maps pulled out of the IPCC reports and they look terrible, some of them off by many degrees centigrade.

  5. Bob Tisdale:

    So glad to see you summarizing your important conclusions at the end of your reports. It really helps tie it all together. Thanks! GK

  6. There is no heat transfer physics in these climate models that relates the ‘radiative forcing constants’ to the ‘surface temperature’. The whole approach is based on the empirical assumption that an increase in the ‘radiative forcing’ must cause an increase in ‘surface temperature’. The ‘calibration constant’ is still derived from the ‘hockey stick’. It is all just empirical pseudoscience hidden under a lot of fancy graphics and fluid dynamics. There is no such thing as a ‘climate equilibrium state’. Once the real dynamics of the surface energy transfer are included, the whole CO2-induced global warming issue disappears into the short-term (hourly) changes in the surface flux. It is time to shut down these climate astrology models and get back to some real climate physics. What part of fraud don’t they understand?

  7. Roy Clark says:
    December 12, 2011 at 10:43 am

    ‘Ere, ‘ere! That’s a real “Hear! Hear!”-worthy posting. The obscene joke that is Mauna Loa comes to mind …

  8. Interesting that the charts stop at 2000. I’d love to see a continuation of the comparison out to current times, where temperatures have flattened.

  9. Well leave it to Bob to give us at least a good weekend’s worth of reading and digestion, to try and take in some of the mountain of information (data) here; not to mention the explanations. Thanks Bob, I really needed some work.

    I’m glad you gave us the answer early, but that didn’t satisfy my curiosity to proceed, and try and learn something from the data you present.

    It has always seemed to me that if you have a possible answer as to why something happened; then having two possible answers just reduces your odds of knowing what happened and why.

    When I read (some places) that there might be as many as 13 or so “climate models” aka GCMs, that leads me to suggest that in reality, there is NO model of earth’s climate.

    That doesn’t surprise me, since as near as I can tell, there is no valid set of global climate data samples that complies with the most basic rules for sampled data systems; ergo we don’t even know what earth’s climate really is; just local anecdotal reports of what might be happening locally.

    It would seem to me that the fundamental task of modelling, is to represent some phenomenon by a compact analog that contains fewer arbitrary constants or parameters to be determined, than the number of observed data points.

    It’s not surprising to me, that we can’t model something that we can’t even observe adequately.

    It has been over 20 years since James Hansen invented man made global warming, and told the US Congress about it; so I would expect that some part of his 100 year forecast would already be observed climate history. As near as I can discern from folks who watch this stuff, no such thing has occurred; or seemed to have occurred.

    Notwithstanding any of that, I’m as curious to plow into Bob’s massive amount of information here, to try and learn some of the types of data, that people are trying to gather.

    A Nobel Prize winning Physicist (one of the real ones) told me a couple of months ago, when I queried him for his thoughts on “string theory”; that the more untestable one’s “theories” might be, the more outrageous the claims could be; because nobody is going to check up on you.

    Wasn’t it Einstein himself, who said that a single contrary experimental result (fact checked) was sufficient to scrap a theory that had survived countless agreeable tests.

    So what is the point of a theory that is known, a priori, to be untestable; “no matter what”, as Dr William Shockley would have put it. At best, that is nonscience; and more likely nonsense.

    So if you are into string theory, and/or parallel universes, or even intelligent life in THE universe, what are you going to tell your grandchildren on your death bed, that you did for the good of mankind ??

  10. “”I have removed this guest post [by Shub Niggurath] because it has been brought to my attention that it is unfair and has caused inflamed reactions [especially in comments] that were unintended. It was my mistake for posting it without seeing this, and my decision to remove it. – Anthony Watts””

    Hiding mistakes is not a good look Mr Watts.

    REPLY: There’s no “hiding”, the post is still available here at the two places that preceded WUWT carrying it.

    http://thegwpf.org/best-of-blogs/4536-pielke-jr-the-climate-debate-is-over.html

    and here

    http://nigguraths.wordpress.com/2011/12/11/pielke-climate-debate-over/

    I took it down because it was clear in retrospect that it was unfair, and made a clear note about it. – Anthony

  11. Paul Linsay says “I have a problem with the models that you don’t discuss. They all deal in the global temperture anomaly, which is a statistic, not a physical property of the earth’s atmosphere. ”

    I’ve looked at this on my web site at:

    http://www.climatedata.info/Temperature/Temperature/simulations.html

    This shows that the average global temperature from different models varies from 12.6 to 14.2 C.
    I’ve also done the same with precipitation at:

    http://www.climatedata.info/Precipitation/Precipitation/global.html

    In the case of precipitation the difference between different models is around 100 mm/year.

  12. George E. Smith; says:
    December 12, 2011 at 11:20 am

    It has always seemed to me that if you have a possible answer as to why something happened, then having two possible answers just reduces your odds of knowing what happened and why.

    A man with one watch always knows the time; a man with two watches is never sure.

  13. markus says:
    December 12, 2011 at 11:23 am
    ‘Hiding mistakes is not a good look Mr Watts.’

    You guys should know, as we’ve discovered in spades. Cheap ad hom, the usual trollish tosh.

  14. This statement from the summary in part one:

    “The failure of the models to hindcast the early rise in global surface temperatures also illustrates that global surface temperatures are capable of varying without natural and anthropogenic forcings. ”
    ___

    No, this is incorrect. What it indicates is that the models failed to fully capture the dynamics of both the natural and anthropogenic forcings. If you take away natural and anthropogenic forcings, what else is there that can alter the climate? Climate is not a random walk, nor does it exist in a state of quantum uncertainty.

    Even what is commonly called “noise” is noise only relative to the signal you’re looking for. ENSO events, for example, can be considered short-term noise if you’re looking at the longer-term Milankovitch signal, but ENSO is hardly random; it has real physical causes. Thus, every climate change has a real cause or causes (i.e., a real forcing) behind it.

    The failure of models is their failure to fully capture the dynamics of the forcings, but those forcings still exist. Hence the edict (that even Trenberth has agreed to many times over): Models are never true (i.e., they never fully capture reality in every detail), so they should not be judged by this metric, but in accordance with how useful they are by capturing enough of the dynamics to have some predictive ability.

  15. True, but a man with two or more watches can make two or more arguments (depending on what is needed) concerning what time it is.

  16. Roy Clark says:
    December 12, 2011 at 10:43 am
    “There is no heat transfer physics in these climate models that relates the ‘radiative forcing constants’ to the ‘surface temperature’. The whole approach is based on the empirical assumption that an increase in the ‘radiative forcing ‘ must cause an increase in ‘surface temperature’.”

    Try the following:
    “The whole approach is based not on empirical fact or confirmed hypothesis but on the unjustified assumption that an increase in the ‘radiative forcing’ must cause an increase in ‘surface temperature’.”

  17. I am somewhat amused by the use of climate models for predicting the future. Regardless of how “good” models may be, it is well recognized that models have little, if any, predictive value. I would like to quote the abstract and conclusion from a paper by Carter et al. (Carter et al., “Our Calibrated Model Has No Predictive Value”, Sensitivity Analysis of Model Output, Los Alamos National Laboratory, 2005):
    Abstract: It is often assumed that once a model has been calibrated to measurements then it will have some level of predictive capability, although this may be limited. If the model does not have predictive capability then the assumption is that the model needs to be improved in some way.
    Using an example from the petroleum industry, we show that cases can exist where calibrated models have no predictive capability. This occurs even when there is no modelling error present. It is also shown that the introduction of a small modelling error can make it impossible to obtain any models with useful predictive capability.
    We have been unable to find ways of identifying which calibrated models will have some predictive capacity and those which will not.
    In summary: in the absence of model errors, and with very low measurement errors, it is possible to obtain calibrated models that do not have any predictive capability; such models may be significantly easier to identify than the correct model; we are unable to differentiate between calibrated models with or without predictive capabilities; the introduction of even small model errors may make it impossible to obtain a calibrated model with predictive value.
    In this analysis there is nothing that seems to be unique to this model. In particular there is the issue of data availability, adding more measurements does not appear to offer a guaranty of avoiding this dilemma. If the observations made with this model are not unique to the model, and we have no reason to believe that the model is unique, then this presents a potentially serious obstacle to the use of models of this type for prediction.
    Our concern is that if we cannot successfully calibrate and make predictions with a model as simple as this, where does this leave us when our models are more complex, have substantive modelling errors, and we have poor quality measurement data.
    Models are trained, or calibrated, over a range of values for each input (i.e., each independent variable) and are thus valid only over the range of those values. Whenever the value of one or more variables exceeds that range, the output of the model is not valid, because we have no idea how the model will behave outside the limits of the training data set. An illustration of this property is to take a periodic function, such as a sine wave, fit roughly one cycle of it to a polynomial, and then evaluate the polynomial outside the fitted range. Inside the selected range the “model” behaves reasonably well, but outside it the polynomial eventually shoots off to infinity.
    We only have climate data with CO2 levels below about 400 ppm, which means a climate model can only be calibrated up to that point. To then run the model with values in excess of 400 ppm is nonsense. We must recognize the limitations of modeling.
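
    The sine-wave illustration above is easy to reproduce; here is a minimal sketch (a toy polynomial fit, obviously not an actual climate model):

```python
import numpy as np

# "Calibrate" a cubic polynomial against one cycle of a sine wave.
x_train = np.linspace(0.0, 2.0 * np.pi, 50)
coeffs = np.polyfit(x_train, np.sin(x_train), deg=3)

# Inside the training range the fit behaves reasonably well...
inside = np.polyval(coeffs, np.pi)         # true value: sin(pi) = 0
# ...but evaluated well outside that range it runs off toward infinity.
outside = np.polyval(coeffs, 6.0 * np.pi)  # true value: sin(6*pi) = 0
```

    Here `inside` stays near the true value of zero, while `outside` is off by hundreds, even though the polynomial looked well calibrated over its training data.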

  18. Ged says: “Interesting the charts stop at 2000. I’d love to see a continuation of the comparison out to current times, where temperatures have flattened.”

    Sorry that I didn’t discuss why the graphs stopped in 2000. I had noted in Part 1 of the post that most of the 20th Century hindcasts prepared for CMIP3 ended in 1999 or 2000, and that for their graphs in AR4, the IPCC spliced on 5 or 6 years of the corresponding projections in order to get their graphs to extend to 2005. I didn’t feel that all of that data handling would have added to the post, since the major problems were in the early part of the century. I have, however, discussed the more recent years in other posts like:

    http://bobtisdale.wordpress.com/2011/11/22/satellite-era-sst-anomalies-models-vs-observations-using-time-series-graphs-and-17-year-trends/

  19. Models aren’t the problem; a good scientist can dismiss the whole concoction of mixed-in junk.
    Reality is the problem; those who try to dismiss it will find that it comes back to haunt them like a bad nightmare.

  20. R. Gates says:
    December 12, 2011 at 11:55 am

    “Even what is commonly called ‘noise’ is noise only relative to the signal you’re looking for.”

    Patently false, R. This whole thread is talking about modeling climate, regardless of the source of information. You are trying to tear it apart by claiming something that is absurd. If any group is guilty of ignoring climate signal, it is those “climate scientist” modelers that can’t or won’t include significant components into their models.

    Then you say:

    Models are never true (i.e. they never fully capture reality in every detail), so they should not be judged by this metric, but in accordance with how useful they are by capturing enough of the dynamics to have some predictive ability.

    And this is patently true, R. Problem is, the models have no ability to hindcast (unless they just regurgitate force-fed data–now there’s your “forcing”), so there is little reason to believe they have any ability to forecast (and hindcasting should always be easier than forecasting). So based on your own admission (“…to have some predictive ability”), they are of no consequence.

    Asserting you have a climate model that is sufficiently predictive that it supports some future catastrophic tipping point to justify the immense destructive action on the West embodied by COP 17 is downright criminal.

  22. “The bottom line: the differences between the modeled and the observed rises in global surface temperatures during the two warming periods acknowledged by the IPCC actually contradict the hypothesis of anthropogenic global warming.”

    Which is why every time one of these discredited climatologists appears someplace, they need to be put on the spot and asked publicly why the early 20th century warming is supposedly not anthropogenic while the late 20th century warming is, and why the two are nearly identical in rate and duration yet “the models” can explain only one of them.

    This issue needs to be hammered over and over again. People need to be shown that there is nothing unprecedented or even unusual about the recently concluded late 20th century warming.
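
    For anyone wanting to check the “nearly identical in rate and duration” claim themselves, the period trends compared throughout this series are ordinary least-squares slopes. A minimal sketch, using made-up anomaly values rather than the actual HADCRUT data:

```python
import numpy as np

# Hypothetical annual global temperature anomalies (deg C) for a ten-year span.
years = np.arange(1910, 1920)
anoms = np.array([-0.40, -0.35, -0.38, -0.30, -0.28,
                  -0.25, -0.27, -0.20, -0.18, -0.15])

# Least-squares slope in deg C per year; times 10 gives deg C per decade.
slope_per_year = np.polyfit(years, anoms, deg=1)[0]
trend_per_decade = 10.0 * slope_per_year
```

    Running the same calculation over the observations and over the model ensemble mean for each warming period yields the trends being compared.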

  23. Thank you very much Bob for your excellent article. For years I have looked in amazement at the attempts of the climate modelers. It seems to me that some climate modelers, whether they know it or not, have been attempting to create a computerized climate Oracle, one whose pronouncements cannot be questioned. I think that computer models can be useful in educating us about some of the physical processes that lead to phenomena we observe in weather, but to attempt that now, with the data that we have and the computer power available, is along the lines of a fool’s task. We should be working diligently on the many physics input modules to the weather models, but instead much time is spent arguing about a half-wit Oracle that has few input facts and is sometimes even fed lies. It seems to me that at this time all we can do is try to make some semi-quantitative sense of what we have observed on a short time scale slightly longer than weather.

  24. R. Gates says: In response to my statement in the post, “The failure of the models to hindcast the early rise in global surface temperatures also illustrates that global surface temperatures are capable of varying without natural and anthropogenic forcings. ”

    You wrote, “No, this is incorrect…”

    Actually, it’s quite correct. And anyone who has studied the processes that contributed to the variations in the instrument temperature record understands why it’s correct.

    You concluded with, “Hence the edict (that even Trenberth has agreed to many times over): Models are never true (i.e. they never fully capture reality in every detail), so they should not be judged by this metric, but in accordance with how useful they are by capturing enough of the dynamics to have some predictive ability.”

    Your argument does not hold up very well. If the models cannot be judged by the metric of truth because “they never fully capture reality in every detail,” there is no way to determine “how useful they are [at] capturing enough of the dynamics to have some predictive ability.” In other words, if the models have shown no skill at reproducing the past because they have not captured “enough of the dynamics”, then certainly it cannot be assumed that they have captured “enough of the dynamics” to be useful at projecting the future.

  25. Concerning the radiative forcings graphic. Note the 2 large negative forcings, tropospheric aerosols and aerosol indirect effect (which is aerosol seeded clouds).

    Their effect in the models is to reduce the GHG forcing by about 2/3rds and make the model output in line with actual temperatures, at least in the 1970-2000 period.

    There are multiple issues with these 2 negative forcings, with numerous papers asserting the values of the forcings are likely wrong and the actual effects of aerosols on climate highly uncertain.

    In the models they are used as ‘fiddle factors’ to get more or less the right answer for the 1970-2000 period.

    What Bob shows above is that the Forcings model (even with the dubious aerosol fiddle factors) is incapable of predicting 20th century temperatures. There are only 2 explanations for this. One is that the Forcings model is wrong. The other is that the temperature record contains large errors. Of course, both could be true.

    When I say the Forcings model is wrong, I mean the theory itself is wrong, irrespective of the accuracy or otherwise of the climate models.

    Bob, you might want to replace the word ‘epoch’. Like many english words it has multiple meanings, but is usually used to mean ‘an instant of time’.

  26. Bob, you cannot simply compare model means to observations without first accounting for the differences in coverage.

    That is, when you look at GISS or HADCRUT or any other index for the early years, you should realize that the spatial sample of those series is overweighted to the northern latitudes. Since northern-latitude trends are higher than mid- and lower-latitude trends, the figures we see for the 1900-1940 period are likely to be an overestimate of the actual global warming. The model means, by contrast, are derived from a full global average; all lats and lons are sampled. So you cannot simply compare the observations (which overweight the northern latitudes) to the model means, which use the entire globe.

    To compare model means with observation means, especially over early periods, you have to adopt the following methodology: take the model mean over the same grids that go into the observational mean. Then you have a valid comparison. The procedure is laid out in chapter 9 of AR4 (maybe in the SI).
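
    The masking step described above can be sketched in a few lines (toy grids with hypothetical values; a real comparison would use the actual model fields and the HADCRUT coverage mask, and would also weight cells by area):

```python
import numpy as np

# Toy 4x8 lat-lon grids of temperature anomalies (hypothetical values).
model = np.arange(32, dtype=float).reshape(4, 8) * 0.01
obs = model + 0.1        # pretend observations, offset for illustration
obs[2:, :] = np.nan      # the two southern rows are unobserved

# Naive comparison: full-globe model mean vs. partial-coverage obs mean.
naive_model_mean = model.mean()
obs_mean = np.nanmean(obs)

# Valid comparison: average the model over only the observed grid cells.
mask = ~np.isnan(obs)
masked_model_mean = model[mask].mean()
```

    Whenever coverage is uneven, `masked_model_mean` differs from `naive_model_mean`, and only the masked version is directly comparable to the observational mean.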

  27. Brian H.: I actually composed the post in Word, but when I pasted it something happened in addition to losing all the formatting. Somehow the summary was inserted twice. My apologies.

  28. R. Gates says:
    “If you take away natural and anthropogenic forcings, what else is there that can alter the climate?”

    Nice misdirect. Any particular temperature reading is not “climate” but weather. Even a year’s worth of averages is not “climate”; I would consider a decade of records as indicating climatic norms (although I hear we’re stretching that out to 17 years due to climate cooperation issues these days). So, to rephrase your question: if you take away all forcings (other than supernatural ones), what else is there that can alter the global average temperature? Well, to name just one, thermal inertia variations within the various mediums of the system as a whole would still be manifested in global average temperature variations. In other words, if we were able to hold all forcings completely constant there would still be variations in global average temperature due to the enormous complexity and variability of the way heat moves through the various sub-systems of the super-system that we almost laughably call Earth’s Climate, as if it were as simple as an air conditioning system.

    “ENSO is hardly random, but has real physical causes.”

    Please list, starting with why the trade winds weaken.

  29. “The failure of the models to hindcast the early rise in global surface temperatures also illustrates that global surface temperatures are capable of varying without natural and anthropogenic forcings. ”

    Bob is correct in this statement, which is the crux of the issue.

    It doesn’t matter how well the climate models predict the 1970-2000 warming, because Bob shows there are periods of comparable length where the models have no predictive value.

    Thus there is no evidence that the models have any predictive value over any time period.

  30. I didn’t know there was a discipline on forecasting methods which had established principles, until I read this:

    GLOBAL WARMING: FORECASTS BY SCIENTISTS
    VERSUS SCIENTIFIC FORECASTS
    by
    Kesten C. Green and J. Scott Armstrong

    http://www.forecastingprinciples.com/files/WarmAudit31.pdf

    “ABSTRACT
    In 2007, the Intergovernmental Panel on Climate Change’s Working Group One, a
    panel of experts established by the World Meteorological Organization and the
    United Nations Environment Programme, issued its Fourth Assessment Report.
    The Report included predictions of dramatic increases in average world
    temperatures over the next 92 years and serious harm resulting from the predicted
    temperature increases. Using forecasting principles as our guide we asked: Are
    these forecasts a good basis for developing public policy? Our answer is “no”.
    To provide forecasts of climate change that are useful for policy-making, one
    would need to forecast (1) global temperature, (2) the effects of any temperature
    changes, and (3) the effects of feasible alternative policies. Proper forecasts of all
    three are necessary for rational policy making.
    The IPCC WG1 Report was regarded as providing the most credible long-term
    forecasts of global average temperatures by 31 of the 51 scientists and others involved
    in forecasting climate change who responded to our survey. We found no references
    in the 1056-page Report to the primary sources of information on forecasting methods
    despite the fact these are conveniently available in books, articles, and websites. We
    audited the forecasting processes described in Chapter 8 of the IPCC’s WG1 Report
    to assess the extent to which they complied with forecasting principles. We found
    enough information to make judgments on 89 out of a total of 140 forecasting
    principles. The forecasting procedures that were described violated 72 principles.
    Many of the violations were, by themselves, critical.
    The forecasts in the Report were not the outcome of scientific procedures. In
    effect, they were the opinions of scientists transformed by mathematics and
    obscured by complex writing. Research on forecasting has shown that experts’
    predictions are not useful in situations involving uncertainty and complexity. We
    have been unable to identify any scientific forecasts of global warming. Claims that
    the Earth will get warmer have no more credence than saying that it will get colder.”

    How come the IPCC left this relevant body of scientific knowledge out of their mix?

  31. RichieP says:
    December 12, 2011 at 11:48 am

    markus says:
    December 12, 2011 at 11:23 am
    ‘Hiding mistakes is not a good look Mr Watts.’

    “”You guys should know, as we’ve discovered in spades. Cheap ad hom, the usual trollish tosh””.

    Cheap ad hom if I ever saw one. I’m not one of the guys, Richie.

    In fact, I was surprised Mr Watts posted my note to him in this thread. Would you claim that climate models are unbalanced by modelling from partial records? Your statement does not take into account my opinion posted in the thread that was discontinued.

    Balance mate, without it, this Blog would be just like Real Climate.

  32. “If you take away natural and anthropogenic forcings, what else is there that can alter the climate?”

    There are 3 answers to this.

    1. Natural, internal to the climate, cycles

    2. The climate modellers effectively define a forcing to be anything they think is a forcing. There may well be additional forcings not currently accounted for, such as galactic cosmic rays, which even Real Climate seems to concede affects climate.

    http://www.realclimate.org/index.php/archives/2009/04/aerosol-effects-and-climate-part-ii-the-role-of-nucleation-and-cosmic-rays/

    3. Anything that affects feedbacks; in particular, anything that affects the phase changes of water. Although, after point 2, the definition of a forcing has apparently been expanded to include changes to feedbacks: the indirect aerosol effect is, in part, a change in cloud feedback, and black carbon affects the snow/ice albedo feedback.

  33. Mosher:
    I agree; if you apply the principles behind GIGO, you can get any answer you want. Just like they did for the model mean.

  34. Brian H says: “One possible take is that the list of “forcing” candidates is incomplete. A “failure of imagination”??”

    Or far too much imagination.

    G. Karst says: “So glad to see you summarizing your important conclusions at the end of your reports. It really helps tie it all together. Thanks!”

    Amen². Like the teacher whose students scored highly on state exams said: “First I tell ‘em what I’m gonna tell ‘em, an’ then I tell ‘em, an’ then I tell ‘em what I told ‘em.”

    George E. Smith; says: “…So if you are into string theory, and/or parallel universes, or even intelligent life in THE universe, what are you going to tell your grandchildren on your death bed, that you did for the good of mankind ??”

    How about: “I didn’t publish with Michael Mann?”

    John West says: “…Please list, starting with why the trade winds weaken.”

    May I guess? Is it the will of Aeolus, God of the Winds? Or upwelling cold water raising the density and viscosity of eastern Pacific ocean and atmosphere?

  35. Bob Tisdale:

    I believe you have fallen into a commonly held fallacy that makes the IPCC’s conjecture appear to be testable when it is not. The test of a conjecture features a comparison of the predicted outcomes of independent statistical events to those observed.

    The IPCC’s conjecture fails to be testable in two ways. These are:
    1) While the IPCC’s models make projections, they do not make predictions and,
    2) The independent observed statistical events are not identified by the IPCC.

    The IPCC’s conjecture does not rise to the level of a “hypothesis” because it is not testable. I’d be pleased to amplify my remarks if this would be of interest to anyone.

  36. R. Gates wrote:

    Climate is not a random walk, nor does it exist in a state of quantum uncertainty.

    This is true only if you define Climate as the ensemble of all random-walks, including weather and longer term variability. If that is your definition, then whether or not we actually know Earth’s current climate is at best in dispute.

    My own personal take is that while Tisdale has done good work, it should not have been necessary. I offer that if the GCMs have an ensemble-average instantaneous value (such as cloud cover) that does not approximate the observed average value, they do not actually represent the current climate. Forecasting, and even hindcasting, are unnecessary if the present can’t be modeled correctly.

  37. steven mosher says: “Bob you cannot simply compare model means to observations with first accounting for the differences in coverage…”

    Actually, I can and did and I’ve explained why. My discussions in this post and in Part 1 are in agreement with the IPCC’s depictions and discussions of the model replication of GLOBAL Surface Temperatures in Chapter 9 of AR4, specifically about their Figure 9.5. I initially used Global HADCRUT because the IPCC used it. And I used the global ensemble member mean of the CCSM3, ECHO-G, GFDL-CM2.0, GFDL-CM2.1, GISS-EH, GISS-ER, INM-CM3.0, MIROC3.2(medres), MRI-CGCM2.3.2, PCM, UKMO-HadCM3, and UKMO-HadGEM1 models because the IPCC used them. I divided the surface temperature record into two warming and two “flat temperature” periods because the IPCC described those periods in chapter 3 of AR4. If I had selected other models, other time periods, and another observations dataset, I would have had complaints about those.

    With respect to your closing note about the use of model data for only the grids in which observational data appears: as you should be aware, there is little difference during the 20th Century between observation-based surface temperature datasets that use only the grids in which data appears, like HADCRUT, and those that infill using 1200km smoothing, like GISS LOTI, or using EOF analyses, like the NCDC’s Land+Ocean data. So while that additional part of the IPCC analysis is nice because it adds detail, it has little bearing on the results of this post.

    But if that difference still concerns you, I’ve already plotted and compared the data. The multi-model ensemble mean of the IPCC Figure 9.5 is based solely on only those grids, and that data provided similar results to those illustrated in this post. I presented them in the post “The IPCC Says… – The Video – Part 1 (A Discussion About Attribution)”:

    http://bobtisdale.wordpress.com/2011/11/29/the-ipcc-says-the-video-part-1/

    Here’s the late warming period:

    And the mid-20th century “flat temperature” period:

    And the early warming period:

    And the early “flat temperature” period:

  38. Thanx Bob. After this series of posts I think I have a better understanding of “ensemble” model runs.
    Let me see if I can put myself in the shoes of an IPCC climate modeller.

    * We start with what we think we know i.e. GHG forcings. We run the model. Oh oh, it’s way way out. So now we adjust/tweak/fudge the forcings and feedbacks that we know not much at all about.
    * Now that’s more like it, we got the 2nd half of the 20thC not bad at all. But the 1st half of the 20thC is way way out.
    * But that’s OK, because we stated that we know not much about all the other forcings/feedbacks, we have ourselves a free ticket to adjust them as much as we like.
    * That’s a little better, even though the late 20thC is not as good as the first run, the early 20thC is quite a bit better (though still not much chop) but that’s OK, people will focus on the recent years rather than the early years.
    * Yep all good. We’ve run out of time and money anyhow. This will have to do for the AR4. We’ll get away with it, it’s not like some amateur blogger is going to be capable of deciphering all this anyway.
    * “Hey Pachy!! we got the proof you wanted”

  39. Bob et al.,

    You still seem to have missed the bigger point: the failure of models to fully capture various dynamics (i.e., interactions of real forcings) does not mean that those forcings are not there. The only thing that alters the climate is a forcing, natural or anthropogenic. There is no random walk in the climate; chaotic systems are still deterministic. Not to pick on anyone, but this quote illustrates the absurdity of trying to dodge the issue of a forcing. The quote is from John West, who tries to find an alternative to a forcing, and says:

    ” Well, to name just one, thermal inertia variations within the various mediums of the system as a whole would still be manifested in global average temperature variations. In other words, if we were able to hold all forcings completely constant there would still be variations in global average temperature due to the enormous complexity and variability of the way heat moves through…”

    What he is talking about here is chaos, and despite some people’s confusion about it, chaotic systems are still deterministic, not random walks. A combination of forcings in a dynamical chaotic system is still a set of forcings, and the only kinds are natural or anthropogenic, which are of course one and the same, as humans are a natural part of the universe.

  40. I believe you will find that a graph of NASCAR attendance neatly fits the rise in temperature from 1975 to 2005. Do the models include increases in spectator sport attendance as a forcing parameter? I believe the 1910 to 1940 rise can be attributed to baseball.

  41. R. Gates says: “You still seem to have missed the bigger point– the failure of models to fully capture various dynamics (i.e. Interactions of real forcings) is not because those forcings are not there…”

    And you still miss the obvious. If the models fail to “fully capture various dynamics (i.e. Interactions of real forcings)”, and because of that failure, they are unable to replicate the past variations in global surface temperature, the failure of the models to “fully capture various dynamics (i.e. Interactions of real forcings)” would also indicate that they serve no purpose as a tool to project future global surface temperatures.

  42. R. Gates says:
    “tries to find an alternative to a forcing”

    No, I’m trying to 1) explain the difference between an attribute of climate (global average temperature) and climate itself; and 2) show how variations in global average temperature, although the result of forcings and feedbacks, are not homogeneously manifested throughout the climate.
    Again, if we were able to hold all forcings constant, how long would it take for the average global temperature to stabilize to the point where it would be exactly the same year after year? That’s what you’re saying would happen.

    “What he is talking about here is chaos”

    Again, no; chaos describes a system in which very slight changes in initial conditions make huge differences in results. I’ve seen no evidence that the climate is chaotic, even though it is very complicated.
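
    For what it’s worth, the “slight changes in initial conditions” definition is easy to demonstrate with a standard toy chaotic system, the logistic map (which, to be clear, is not a climate model):

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> r*x*(1-x), run at r = 4 (its chaotic regime).
r = 4.0
x, y = 0.2, 0.2 + 1e-10   # two almost-identical starting points

max_diff = 0.0
for _ in range(60):
    x = r * x * (1.0 - x)
    y = r * y * (1.0 - y)
    max_diff = max(max_diff, abs(x - y))

# The 1e-10 initial difference grows to order one within a few dozen
# steps, even though every step is fully deterministic.
```

    Determinism and unpredictability-in-practice are thus not in conflict, which is the point at issue between the two commenters.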

  43. Bob,

    You’ve not addressed at all the key issue I had with your summary from part 1, where you said:

    “The failure of the models to hindcast the early rise in global surface temperatures also illustrates that global surface temperatures are capable of varying without natural and anthropogenic forcings. ”
    ___

    So, please give an example of global surface temperatures varying without some natural or anthropogenic forcing. The failure of models to accurately hindcast has nothing to do with the actual causes behind global temperature variations, but everything to do with how thoroughly the models have captured those forcings. The point is, there will always be causes, as those variations are not a random walk. Otherwise, the entire study of climate is no better than studying a roulette wheel (and sadly, some skeptics probably believe that).

  44. Gates, let an expert explain it for you:

    For small changes in climate associated with tenths of a degree, there is no need for any external cause. The earth is never exactly in equilibrium. The motions of the massive oceans where heat is moved between deep layers and the surface provides variability on time scales from years to centuries. Recent work suggests that this variability is enough to account for all climate change since the 19th Century.
    ~Prof Richard Lindzen

    Skeptics understand that climate is not like a roulette wheel. It’s the UN/IPCC that peddles that sort of nonsense.

  45. R. Gates wrote:

    What he’s talking about here is Chaos, and despite some people’s confusion about it, chaotic systems are still deterministic, and not random walks.

    Suggesting that statement applies to weather and climate is a very serious claim. If you have a sufficient grasp of physics to prove that, I believe Carl XVI Gustaf has a medal for you.

  46. R. Gates says:
    December 12, 2011 at 7:17 pm

    …Otherwise, the entire study of climate is no better than studying a roulette wheel (and sadly, some skeptics probably believe that)…

    That is a distinct possibility under the current state of “Climate Science”. Bob and everyone else are doing their best to change that state. Won’t you become part of the solution instead of part of the problem? GK

  47. So, please give an example of global surface temperatures varying without some natural or anthropogenic forcing.

    That obviously can’t be done. You’re asking for proof of the absence of an undefined list of forcings.

    The failure of models to accurately hindcast has nothing to do with the actual causes behind global temperature variations, but everything to do with how thorough the models have captured those forcings.

    You appear to be positing some future incarnation of the forcings model that axiomatically will accurately hindcast. Bob and others are obviously referring to the current incarnation of the forcings model.

    And I’ll note you haven’t addressed my point that climate can be affected by factors that affect feedbacks. Unless you say all factors affecting feedbacks are automatically forcings. But then it starts to look tautological.

    You appear to me to be saying that in a deterministic system there are always causes, which of course I agree with, and any cause that affects temperature is automatically a forcing. Not a very helpful definition IMO.

  48. I’m confused by this statement also:

    “…global surface temperatures are capable of varying without natural and anthropogenic forcings.”

    If a forcing is not natural or anthropogenic, what other category could it fall under? Supernatural? What other possibilities are there?

  49. If by definition there are forcings, they may be
    a) individually weak
    b) not included in the models
    c) unknown.

    Pick one, two or three of the above.

  50. Smokey says:
    December 12, 2011 at 7:31 pm
    Gates, let an expert explain it for you:
    For small changes in climate associated with tenths of a degree, there is no need for any external cause. The earth is never exactly in equilibrium. The motions of the massive oceans where heat is moved between deep layers and the surface provides variability on time scales from years to centuries. Recent work suggests that this variability is enough to account for all climate change since the 19th Century.
    ~Prof Richard Lindzen
    Skeptics understand that climate is not like a roulette wheel. It’s the UN/IPCC that peddles that sort of nonsense.
    ———-
    Smokey,

    “The motions of massive oceans where heat is moved between deep layers…” is a completely deterministic process. And certainly it does provide variability, as Lindzen describes, and that variability creates a forcing on the climate. But it is a critical error to suggest that such variability does not amount to a natural forcing, as it certainly does. From Milankovitch cycles to solar variations lasting decades or centuries, to greenhouse gas concentrations and volcanic activity, all these provide natural variability that is both deterministic and chaotic.

    It is not just the honest skeptics who understand that the climate is not like a roulette wheel, but all climate scientists, who, with each new discovery, fill in a bit more of the full deterministic interactions of this amazingly complex and beautiful climate system.

  51. Philip Bradley says:

    “And I’ll note you haven’t addressed my point that climate can be affected by factors that affect feedbacks. Unless you say all factors affecting feedbacks are automatically forcings. But then it starts to look tautological.”

    ———–
    Please give some examples of what you consider to be “factors that affect feedbacks”. In my mind, the very laws of physics right down to the quantum level could be considered a factor affecting feedbacks. For example, the absorption spectrum of both water vapor and CO2 is determined by the specific electron configuration of the molecules, and this is a factor that affects feedback. The distinction between a factor affecting a feedback and a forcing is one of form versus actual function. The factors are forms that determine function, i.e. the laws of physics are the form, and the actual variations in solar energy striking the earth or LW being absorbed by greenhouse gases are the functions.

  52. Brian H says:
    December 12, 2011 at 10:01 pm
    If by definition there are forcings, they may be
    a) individually weak
    b) not included in the models
    c) unknown.

    Pick one, two or three of the above.

    ———-
    Forcings may:
    1) be weak or strong
    2) be short-term, long-term, and everything in between
    3) be included or not in the models, and to greater or lesser degrees of accuracy
    4) be known or unknown
    5) be natural or anthropogenic
    6) work in concert or in opposition to each other
    7) produce their own unique set of feedbacks or unique combinations of feedbacks when multiple forcings are at work

  53. Louis says:
    “If a forcing is not natural or anthropogenic, what other category could it fall under? Supernatural? What other possibilities are there?”

    He’s merely stating that global average temperature is subject to “drift” or variation without an external forcing, i.e. internal variability. This part is not controversial (see Gavin quote above). The post demonstrates quite well that the climate modelers’ claim (that “ensembles” of climate models cancel out similar internal variation, revealing the projected global average temperature due to forcings, and thus mimic the climate) is not supported by historical observations.

    Either the models are missing significant forcings; or ensembles of models don’t cancel out internal variation analogously to the climate; or the models don’t accurately approximate forcing vectors. The ensembles of models don’t accurately reflect the effect of forcings, feedbacks, and internal variability on global average temperature, as evidenced by their inability to accurately hindcast the 1917-1944 0.175 deg C/decade warming, the 1944-1976 warming hiatus, AND the 1976-2000 0.175 deg C/decade warming. The fact that the ensemble matches one of these three periods is insufficient verification that it can be trusted to accurately project future climate trends as claimed.

  54. Dishman says:
    December 12, 2011 at 5:06 pm
    R. Gates wrote:

    Climate is not a random walk, nor does it exist in a state of quantum uncertainty.

    This is true only if you define Climate as the ensemble of all random walks, including weather and longer-term variability. If that is your definition, then whether or not we actually know Earth’s current climate is at best in dispute.

    ——–
    Of course I would not define climate as the ensemble of random walks, but rather as the physical manifestation of the sum product of all actual forcings working through specific laws of physics to create, control, and otherwise manipulate the flow of energy to, from, and within Earth’s atmosphere, hydrosphere, and biosphere.

  55. R.G.;
    My list of 3 was not an exhaustive list of types/characteristics of forcings, merely a statement of what may render any claim to have such an exhaustive and well-characterized list dubious or non-functional. Particularly one composed of/headed by candidates selected by exclusion or argument from ignorance. Such as CO2.

  56. Without getting into a long discussion about what is a feedback, I’ll define it as any net climate warming/cooling process that is not directly a radiative forcing.

    An example of a feedback is heat transport upwards in the atmosphere by water vapor and then heat release thru condensation/precipitation. This process occurs primarily as a consequence of radiative heating of the surface.

    Anything that affects the speed of this process will affect the speed of heat loss to space. Make it faster and the climate cools. Recent studies show a large effect by aerosols on water vapor condensation/precipitation. While the studies didn’t measure the time from surface evaporation to condensation/precipitation, it’s a reasonable inference that they accelerate the process, thus acting to cool the climate by affecting a feedback.

  57. R. Gates says: “You’ve not addressed at all the key issue I had with your summary from part 1…”

    Thanks for the reminder. You began that comment with, “So, please give an example of global surface temperatures varying without some natural or anthropogenic forcing.”

    When you read the term “internal variability” in a paper, don’t you interpret that to mean unforced?

    I’m sure you’ve heard of the Atlantic Multidecadal Oscillation, which is principally expressed as natural multidecadal variations in the Sea Surface Temperature anomalies of the North Atlantic. It is considered to be one of the forms of internal, unforced climate (surface temperature and pressure) variability. One of the key papers to investigate the process was Knight et al (2005):

    http://holocene.meteo.psu.edu/shared/articles/KnightetalGRL05.pdf

    They isolated the internal variability by maintaining constant levels of external forcing. So it is believed the surface temperature of the North Atlantic can vary on a multidecadal basis without “some natural or anthropogenic forcing” causing those variations.

  58. Ray Berger says: “Bob,if you have time to do so, could you please comment on this new approach here …”

    Huber and Knutti (2011) is a climate model study. Enough said.

  59. It’s quite informative to look at the shape of the adjustments made in hadSST3.

    Here is a plot of the difference between hadSST3 and the ICOADS data it is based on:

    Note the vertical temperature scale here. The adjustments are almost as big as the whole of the 20th century warming we’re all supposed to sacrifice our futures for. So do they make sense? Here are some points worth noting:

    The post-war cooling discontinuity gets partially “corrected” when it occurred (which was from one month to the next); the rest gets faded in in a way that reduced the trough before 1960, as Bob noted. Even after adjustment, the difference between 1939 and 1946 is still a huge 0.15 C cooling, which over such a short period would be remarkable. In fact they only correct half the discontinuity.

    Then there’s the huge warming of the pre-war period. What’s this about?

    Well, from 1885 to 1920 there is a 0.3 C warming “correction”; from then on to 1940, a 0.1 C cooling. If we recall what Bob says about the inconvenient dip and rebound, you can see they’ve fixed the model by changing the data.

    Finally, there’s a little uptick after 2000 to “hide the decline”.

    In fact, looking at the general form of this adjustment, it has what looks like a cyclic trend of about 140 years that hit its trough around 1990. Did someone say “natural” cycles?

    Well there used to be one, but this adjustment just happens to be the other way up.

    Now I’m not suggesting that Thompson et al and the rest of the Met Office are a bunch of crooks trying to deceive the world by removing any natural trends from the data and frigging the data to fit their supercomputer models. But if they were, they might well be tempted to produce something very similar to the adjustments contained in hadSST3.

    If anyone wanted to look for a natural cycle, I think the M.O. have done a great job of identifying one for us.

  60. Bob T. et. al.,

    I appreciate that some hold the perception that the fluctuation, or as some call it the “internal variability”, of the AMO does not stem from some forcing, but this perception is by no means universal. I would direct your attention to just a few examples:

    http://meetingorganizer.copernicus.org/EGU2009/EGU2009-4926.pdf

    http://www.nature.com/ngeo/journal/v3/n10/full/ngeo955.html

    http://www.sciencedirect.com/science/article/pii/S0273117707005418

    When I hear the term “natural variability” or “internal variability” I immediately get the notion that “we really don’t know what is causing these fluctuations, so we’ll just call it natural or internal variability”.

  61. R. Gates: Your opinion is noted. That doesn’t mean I agree with it. I’ve read it. We’ve both shown that we can provide links to climate studies that support forced or unforced variability of North Atlantic Sea Surface Temperatures. That’s a no-win discussion. So let’s change tacks.

    We’ll change roles. Now I’ll ask you a question. Let’s make it two. They’re easier to phrase as two sentences. There’s a long introduction, though.

    In all of the early warming period (1917-1944) graphs above, surface temperatures are shown to be warming at rates that are around 3 times faster than the model mean data. And the rates at which surface temperatures rose in the early period are always comparable to the surface temperature trends in the late warming period (1976-2000). And based on the quotes from Gavin Schmidt and NCAR, we have interpreted the model mean to be the forced component of the climate models. Let’s assume for these questions that the observational data are correct and the forced components (model mean) are correct; the climate scientists have had 20-plus years to get the forcings right, and the climate modelers have also had 20-plus years to tune the climate models so that they respond properly to the forcings.
    You claim the difference between the trends of the model mean and the trends of the observed rise in temperature during the early warming period cannot be caused by an unforced component, because unforced components don’t exist. How then do you explain (in plain English, for those without technical backgrounds who are reading this) the additional rate at which surface temperatures rose during that early warming period? And why are the rates at which surface temperatures rose during the early warming period (1917-1944) and the late warming period (1976-2000) comparable, while the forcings have risen by a factor of about three?
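    The three-to-one trend comparison underlying these questions is just a least-squares slope calculation. As an illustration only, the sketch below uses invented, perfectly linear anomaly series (not real HADCRUT or CMIP3 data) chosen to mimic the trends under discussion:

```python
# Ordinary least-squares trend of an annual series, in deg C per decade.
# The two series below are invented, perfectly linear anomalies; they
# are NOT real observational or model data, just placeholders showing
# the arithmetic behind the roughly 3:1 trend ratio discussed above.

def trend_per_decade(anomalies):
    n = len(anomalies)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(anomalies) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, anomalies))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope * 10.0  # per year -> per decade

obs = [0.0175 * year for year in range(28)]         # 28 years, like 1917-1944
model_mean = [0.006 * year for year in range(28)]   # about a third of the rate

print(round(trend_per_decade(obs), 3))         # 0.175
print(round(trend_per_decade(model_mean), 3))  # 0.06
```

    Substituting the real observed anomalies and the model-mean series for the placeholders reproduces the comparison made throughout the post.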

  62. “The failure of the models to hindcast the early rise in global surface temperatures also illustrates that global surface temperatures are capable of varying without natural and anthropogenic forcings. ”

    I vote for super-natural forcings.

    In this situation (all forcings removed – except the super-natural ones) the Earth’s temperature *will* vary. The variations will be on a steadily decreasing trendline; it will grow cold. All of its internal heat will fade away and radiate into space.

    This has nothing to do with the failure of models; it’s gobbledy-gook prettied up with graphs.

  63. Bob T. asked:

    Question 1: “How then do you explain (in plain English, for those without technical backgrounds who are reading this) the additional rate at which surface temperatures rose during that early warming period?”

    Question 2: “And why are the rates at which surface temperatures rose during the early warming period (1917-1944) and the late warming period (1976-2000) comparable, while the forcings have risen by a factor of about three?”

    _____
    As background to both these questions, let me take a brief detour back to my fundamental position, which is that all climate change is related to a forcing or, more commonly, a combination of forcings. So-called “natural variability” or “internal variability” is simply a chaotic system of forcings that haven’t been (or can’t be, given our current mathematics) put into a model. The very fact that there is a periodicity to the AMO, for example, is a wonderful example of a chaotic oscillating system. This is a system that is deterministic, not random, and thus some forcing or combination thereof drives it; we simply haven’t the tools to fully model it.

    Now, to answer your questions in plain English. Not all models do as poor a job at hindcasting the rapid rise in temperatures during the period of 1917-1944, nor of course, the end of the 20th century during the period of 1976 – 2000. I would highly advise you and others read this full research article:

    http://onlinelibrary.wiley.com/doi/10.1002/wcc.18/full#fig2

    In simple terms, some (but not all) models failed in their ability to model the early 20th century warming, because the totality of solar forcing influences, including EUV effects on ozone and stratospheric circulation (which have only recently been quantified and added to some models), was underestimated; when these are included, the models match up well with the early 20th century temperature rise. See:

    http://scostep.apps01.yorku.ca/wp-content/uploads/2010/07/Gray_etal_2009RG000282.pdf

    This answers question 1, which is: the models which take into account the full range of solar forcing do a good job at displaying the early 20th century temperature increases.

    For question 2, your assumption seems to be that the same apparent effect (i.e. a temperature rise) has the same combination of forcings as its cause. Though the temperature rise in the early 20th century might appear to be the same as the later one, the models would actually disagree with you, and even very nicely break down the different types of forcings (natural and anthropogenic) that caused each period of temperature increase. So to say that the “forcings have risen by a factor of 3″ is an incorrect statement. Some forcings decreased while others increased. What matters to the climate is of course the net result of all forcings combined.
    Furthermore, the early 20th century temperature rise (as measured by troposphere temps or ocean temps) was not identical to the late 20th century temperature rise, in that stratospheric temperatures were rising in the early 20th century but falling in the later 20th century. Thus, the early 20th century temperature rise has a more classic solar influence signature and, of course, the later 20th century would be more indicative of greenhouse gas forcing.
    A perfect example of a combination of forcings leading to a net result that is not as simple as “a tripling of forcings” is the flattening of temperatures during the last decade, during which the decreased activity of the sun, increased volcanic activity, and human aerosol creation have pretty much masked the visible greenhouse gas forcing for the time being. I would love to see a model run with the current low solar output combined with the additional aerosols and the extended La Nina period, but with CO2, N2O, and methane kept at pre-industrial levels. It will be interesting to see which prevails in the short term, though in the long term the continually rising greenhouse gases certainly will.

  64. Thus, the early 20th century temperature rise has a more classic solar influence signature and of course, the later 20th century, would be more indicative of greenhouse gas forcing.

    There is evidence to support this position, for example the paper below, which shows solar effects have a better correlation with 20th century temperatures up until 1990:

    http://adsabs.harvard.edu/abs/2008AGUSMGC43A..06M

    But then we run into the CO2 lag problem: since 1970, CO2 rises lag temperature rises. Causation can never work backwards.

    http://www.nature.com/nature/journal/v343/n6260/abs/343709a0.html

    It’s a pet issue of mine that determining what causes (say, annual) changes in the climate is a fairly easy statistical exercise, assuming you have decent measurements. Climate science avoids these analyses like the plague, because they show CO2 changes have close to zero correlation with annual temperature changes.

    http://www.scirp.org/Journal/PaperInformation.aspx?paperID=3447&utm_source=newsletter&utm_medium=ijg13&utm_campaign=01
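    The “easy statistical exercise” described above is essentially a correlation of year-on-year changes. The sketch below shows the mechanics with invented placeholder numbers, not real CO2 or temperature records:

```python
# Pearson correlation between year-on-year changes of two series.
# All numbers are invented placeholders; substitute real annual CO2
# and temperature records to perform the exercise described above.

def diffs(series):
    return [b - a for a, b in zip(series, series[1:])]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

co2 = [310, 311, 313, 314, 316, 317, 319, 321]             # invented ppm values
temp = [0.02, -0.05, 0.10, 0.01, -0.03, 0.12, 0.04, 0.08]  # invented anomalies

r = pearson(diffs(co2), diffs(temp))
print(round(r, 2))  # correlation of the annual changes, between -1 and 1
```

    Differencing first is the important step: it tests whether year-to-year changes in one series track year-to-year changes in the other, rather than whether both simply trend upward.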

    What they resort to (as R Gates does) is a priori arguments. Which is OK as long as your underlying theory is sound. If it isn’t, a priori arguments are worthless.

    So we come down to our point of difference, is the Forcing model a valid theory of climate change?

    I think the data indicates it doesn’t have sufficient explanatory/predictive power to be a valid theory of climate change, and Bob’s analysis adds to this evidence.

    No amount of a priori argument will persuade me. Only data will.

    Finally, it is a perfectly valid scientific answer to say ‘we don’t know what causes climate change’.

  65. R. Gates says: “Not all models do as poor a job at hindcasting the rapid rise in temperatures during the period of 1917-1944, nor of course, the end of the 20th century during the period of 1976 – 2000. I would highly advise you and others read this full research article: http://onlinelibrary.wiley.com/doi/10.1002/wcc.18/full#fig2”

    The period in question is not the late warming period of 1976-2000. As illustrated in this post, the models have problems with the early warming period of 1917-1944. If you were to look at Figure 6 of Lean 2010 you’d note Judith Lean’s empirical model actually does a very poor job of recreating the rise in surface temperature from 1917 to 1944. It’s comparable to the poor job of the model mean of the IPCC’s models from Figure 9.5 of AR4. And I believe our discussion pertains to the general circulation models used in AR4 for their hindcast comparison. Please advise the readers who are following this thread what coupled ocean-atmosphere model was presented by the Lean 2010 paper that you linked. I’ll save you some time. There wasn’t one. The empirical model (based on linear regression analysis) was presented in the Lean and Rind (2009) paper “How will Earth’s surface temperature change in future decades?”

    http://www.unity.edu/facultypages/womersley/2009_Lean_Rind-5.pdf

    R.Gates says: “In simple terms, some (but not all) models failed in their ability to model the early 20th century warming…”

    In reality, the model mean for all the CMIP3 models chosen by the IPCC for their Figure 9.5 model-observations comparison (and reproduced as Figure 27 in your linked Gray et al 2009) “failed in their ability to model the early 20th century warming”:

    I produced Table 1 for part 1 of this post, but I decided it would detract from the post, so I didn’t include it.

    R.Gates says: “This answers question 1. Which is– the models which take into account the full range of solar forcing do a good job at displaying the early 20th century temperature increases.”

    The model data (Table 1) disagrees with your assumption, because no models “do a good job at displaying the early 20th century temperature increases”.

    R. Gates says: “For question 2, your assumption seems to be that the same apparent effect (i.e. a temperature rise) has the same combination of forcings as its cause………..”

    I’ll re-ask the second question. I accidentally wrote “forcings” instead of “forced component of the models.” My mistake. The forcings had risen by a factor of four. It was the forced component (the model mean) that rose by a factor of three. Sorry. Here’s the question again:

    And why are the rates at which surface temperatures rose during the early warming period (1917-1944) and the late warming period (1976-2000) comparable, while the forced component (the model mean) has risen by a factor of about three?

  66. Posted on December 12, 2011 by Anthony Watts
    Do Observations and Climate Models Confirm Or Contradict The Hypothesis of Anthropogenic Global Warming?
    Guest Post by Bob Tisdale

    “Climate Models, on the other hand, do not recreate the rate at which global surface temperatures rose during the early warming period. They do well during the late 20th Century warming period, but not the early one. Why? Because Climate Models use what are called forcings as inputs in order to recreate (hindcast) the global surface temperatures during the 20th Century. The climate models attempt to simulate many climate-related processes, as they are programmed, in response to those forcings, and one of the outputs is global surface temperature. …. The forcings-driven climate models have shown no skill whatsoever at replicating the past, so why is it assumed they would be useful when projecting the future?”

    I think the available data have to be explained by physical processes and logic; a model that includes neither the physics nor the well-known relations and laws of nature cannot put out more than what is already known or unknown.

    It would be great to simulate the whole dynamic solar system as a perpetual motion using the physics of thermodynamics, heat currents, and heat sources; and if this does not fit within the capacity of a computer, we have to do it in Excel or by hand.

    An example may be the nature of sea level profiles over time. The University of Colorado has published a graph of the global mean sea level with a linear fit showing a rise of about 3.2 mm per year since 1993.

    Besides some anomalies visible after a 60-day smooth, a drop in sea level of 6 mm in the year 2010 was discussed in the science community. But a detailed view of the oscillations before the 60-day smooth shows that the frequency of the mean oscillation superimposed on the linearly increasing sea level is about 117 maxima in 18.655 years, or 6.271 periods per year.

    This is remarkable, because the synodic frequency of the planet couple Mercury/Earth of 3.1519 periods per year [sf_me_er = 4.15194 – 0.99996 = 3.1519 y^-1] is very nearly half the sea level oscillation frequency from the Jason-2 data of 6.271 periods per year.
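    The arithmetic quoted here can be checked directly. Using only the frequencies given in the comment, the sea level frequency comes out within about half a percent of twice the synodic frequency (close, though not exact):

```python
# Check the frequencies quoted above. Orbital frequencies are in
# periods per year, exactly as given in the comment.
f_mercury = 4.15194
f_earth = 0.99996

synodic = f_mercury - f_earth   # Mercury/Earth synodic frequency
sea_level = 117 / 18.655        # 117 maxima in 18.655 years

print(round(synodic, 4))              # 3.152 periods per year
print(round(sea_level, 4))            # about 6.2718 periods per year
print(round(sea_level / synodic, 3))  # about 1.99, i.e. within ~0.5% of 2
```

    Whether a half-percent agreement between two frequencies constitutes a physical connection is, of course, a separate question from the arithmetic.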

    It seems that there is no visible cause for a long-term linear sea level rise of 3.2 mm per year, but I think it makes sense to analyse the mysterious superimposed frequency of twice the synodic Mercury/Earth frequency, for several reasons.

    The tidal effects on the Sun from the different bodies are, if Earth = 1.0: Jupiter = 2.26, Venus = 2.15, Mercury at perihelion = 1.9, and Mercury at aphelion = 0.54. This means that the tidal effects of the Mercury/Earth couple on the Sun vary with Mercury’s position along its eccentric orbit.

    The question comes up why the measured global sea level rises and falls in sync with the tide system of Sun, Mercury, and Earth. Moreover, the oscillations of the global temperature measured by UAH fit this tide profile:

    In the strong sea level height variation of 6.271 periods per year there are several phase jumps and amplitude variations, which suggest one or more further frequencies similar to the synodic tide frequency of Mercury/Earth. Adding three more synodic patterns, from the couples Venus/Earth, Mercury/Jupiter, and Earth/Jupiter (because of their expected strong tidal effects), the blue line results as a sum of the solar tide effects, corresponding in geometry with the sea level oscillations.

    This shows that there is a connection between the solar tide effects and the global temperatures on Earth, along with a time-coherent global sea level swing of some millimetres.

    A rough calculation shows that a temperature change of 0.1 C in a 1000 m deep ocean layer results in a height change of ~23 mm, because of the thermal expansion of water. Whatever the cause of the sea level dynamic that remains after subtracting the 3.2 mm per year trend, solar tide effects of some planets play a role in these ups and downs of both sea level and SST. There are hints that other solar-tide-like couples beyond Jupiter could drive a long-term increase in the global temperature, which could explain the seemingly ‘linear’ ocean height rise of 3.2 mm per year over the last 18 years.
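    The ~23 mm figure follows from simple thermal expansion, dh = alpha · dT · h. The expansion coefficient below is an assumed, typical mid-range value for seawater (it actually varies with temperature and pressure):

```python
# Sanity check of the ~23 mm figure via thermal expansion of a column:
# dh = alpha * dT * h. The coefficient is an assumed mid-range value
# for seawater, not a measured constant for any particular ocean layer.
alpha = 2.3e-4   # volumetric thermal expansion coefficient, per K (assumed)
dT = 0.1         # temperature change, K
h = 1000.0       # depth of the warmed layer, metres

dh_mm = alpha * dT * h * 1000.0   # metres -> millimetres
print(round(dh_mm, 1))  # 23.0 mm, matching the rough figure quoted above
```

    The same arithmetic run in reverse converts an observed sea level swing of a few millimetres into a temperature change of a few hundredths of a degree over such a layer.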

    However, it will take good ideas from physics to explain the connection shown here between solar tide geometries, sea level change, and terrestrial climate frequencies as a heat current from the Sun.

    I think this method is superior to simple models fed with functions that have no basis in real nature, especially because it has the capability to simulate the climate of the past and of the future with high fidelity.

    Science does not have to show what is not. Science has to show what IS.

    V.

  67. Is what’s good for the goose not good for the gander?

    On RC, Rasmus Benestad discredits the predictive value of a statistical model presented in a recent paper by hindcasting and demonstrating that the hindcast does not match the proxy-derived temperature trends.

    “It is well known that one can fit a series of observations to arbitrary accuracy without having any predictability at all. One technique to demonstrate credibility is by assessing how well the statistical model does on data that was not used in the calibration.”
    http://www.realclimate.org/index.php/archives/2011/12/curve-fitting-and-natural-cycles-the-best-part/#bib_1
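    The quoted point, that one can fit observations to arbitrary accuracy without any predictability, is easy to demonstrate: a polynomial with enough free parameters passes exactly through any calibration data yet extrapolates wildly. A toy sketch with invented numbers, using Lagrange interpolation for the exact in-sample fit:

```python
# A degree n-1 polynomial (Lagrange form) passes exactly through any
# n calibration points: a perfect in-sample "hindcast" with no skill.
def lagrange_fit(xs, ys):
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

xs = [0, 1, 2, 3, 4, 5]              # calibration "years"
ys = [0.1, 0.3, 0.2, 0.4, 0.3, 0.5]  # invented "observations"

p = lagrange_fit(xs, ys)
print(abs(p(3) - ys[3]))  # 0.0: the calibration fit is exact
print(p(8))               # wildly outside the 0.1-0.5 calibration range
```

    This is why Benestad’s out-of-sample test is the right standard; the dispute in this thread is whether the AR4 models are held to it as well.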

    Bob Tisdale has discredited the predictive value of the AR4 models as well as the ensemble with the same technique, yet this is somehow inappropriate here (WUWT) on “physical” models but is appropriate at RC on “statistical” models. [see above]

    Are we to believe that because of their authority on the subject that even though their models don’t hindcast well that they still have predictive value even though they dispute the predictive value of another’s model because it doesn’t hindcast well?

    Why shouldn’t physical models be able to demonstrate predictive value in the same way a statistical model is expected to?

    Why should “experts” get a pass on the test that they use to criticize “amateurs”?

    We’ve recently seen evidence that even models that hindcast well may not have predictive value: internal variation is inherently unpredictable, and its magnitude overwhelms the conjectured anthropogenic forcing.

    http://wattsupwiththat.com/2011/12/13/csus-klotzbach-and-gray-suspend-december-hurricane-forecast/

    Why should models that don’t hindcast well be considered predictively skillful?

    As it has been said many times in many ways by many people on WUWT: How many years and how many ways must the models diverge from observations before they are rejected as useless video game novelties?

  68. John West says: “Bob Tisdale has discredited the predictive value of the AR4 models as well as the ensemble with the same technique, yet this is somehow inappropriate here (WUWT) on ‘physical’ models but is appropriate at RC on ‘statistical’ models.”

    Who said my analysis was inappropriate?

Comments are closed.