An Initial Look At The Hindcasts Of The NCAR CCSM4 Coupled Climate Model

Guest post by Bob Tisdale

OVERVIEW

This post compares the instrument observations of three temperature anomaly datasets (NINO3, Global, and North Atlantic “Plus”) to the hindcasts of the NCAR coupled climate model CCSM4, which was used in two recent peer-reviewed climate studies. Those studies rely solely on the models and do not present time-series graphs comparing observational data to the 20th Century hindcasts, which would allow readers to determine whether the NCAR CCSM4 model has any basis in reality. So far, I have not seen this done in any of the blog posts that have discussed those studies.

(And for those wondering, NCAR is the National Center for Atmospheric Research, and CCSM4 stands for Community Climate System Model Version 4. CCSM4 is a coupled climate model.)

This post would have been much easier to prepare if the Sea Surface Temperature outputs of the NCAR CCSM4 were available through the Royal Netherlands Meteorological Institute (KNMI) Climate Explorer website. Then I would not have felt obligated to provide as many introductory explanations and graphs, and supplemental comparisons. (The post would have been easier to write, and easier to read.) But since the modeled Sea Surface Temperature data are not available yet, I’ll present, for the time being, the NCAR CCSM4 hindcast surface air temperature anomalies. As noted a number of times throughout this post, if and when the Sea Surface Temperature hindcasts are available through the KNMI Climate Explorer, I will be more than happy to update this post.

Note 1: The period used in this post runs from January 1900 to December 2005 because the NINO3 Sea Surface Temperature anomalies, used in one of the studies, have little to no source data before 1900, and the CCSM4 model hindcast ends in 2005.

Note 2: The source of data for this post, as noted above, is the KNMI Climate Explorer. Surface Air Temperature is available for the NCAR CCSM4 on their Monthly CMIP5 scenario runs webpage, but Sea Surface Temperature (identified as TOS) is not. Therefore, during the discussion of ENSO, this post compares the Surface Air Temperature anomalies of the model outputs (which over the oceans would be comparable to Marine Air Temperature anomalies) to Sea Surface Temperature anomalies. And as you will see, this should not present any problems for this discussion. For the model-to-data comparisons of global and of North Atlantic “Plus” surface temperature anomalies, observed land plus sea surface temperature anomalies are compared to the modeled Surface Air Temperature anomalies for land and oceans. This is common practice in posts that compare instrument observations to model outputs at blogs such as Real Climate (example post: 2010 updates to model-data comparisons) and Lucia’s The Blackboard (example post: GISTemp: Up during August!), and it assumes the modeled sea surface temperatures will be roughly the same as the modeled Marine Air Temperatures. (More on this at the end of the post.)

Note 3: This post does not examine the projections of future climate presented in the referenced papers. This post examines how well or poorly the CCSM4 ensemble members and model mean match the observations that are part of the instrument temperature record. You, the reader, will then have to decide whether the model-based studies that use the CCSM4 are of value or whether they should be dismissed as mainframe computer-crunched conjecture.

INTRODUCTION

The National Center for Atmospheric Research (NCAR) Community Climate System Model Version 4 (CCSM4) coupled climate model has been submitted to the Coupled Model Intercomparison Project Phase 5 (CMIP5) archive of coupled climate model simulations. (Phew, try saying that fast, three times.) And much of the data provided to CMIP5 for those models are presently available through the KNMI Climate Explorer Monthly CMIP5 scenario runs webpage. The model data stored in the CMIP5 archive will serve as a source for the next IPCC report, AR5, due in 2013. Refer to the Real Climate post CMIP5 simulations for further information.

Two papers based on the NCAR CCSM4 climate model have recently been published. There may be other published papers based on the CCSM4 as well. The first is Meehl et al (2011) “Model-based evidence of deep-ocean heat uptake during surface-temperature hiatus periods”, paywalled. Meehl et al (2011) is a model-based study that attempts to illustrate that ocean heat uptake can continue during decadal periods when global surface temperatures flatten. The La Niña portion of the natural climate phenomenon called the El Niño-Southern Oscillation (ENSO) was determined to be the possible cause for the flattening of surface temperatures and for the increase in ocean heat uptake. Yup, La Niña events. For me, the paper raised a number of questions. One of them was: Why did Meehl et al (2011) only discuss decadal hiatus periods, during which surface temperatures failed to rise? The instrument temperature record for the 20th Century clearly shows a multidecadal decline in surface temperatures from the mid-1940s to the mid-1970s that is attributable, in part, to a mode of natural variability called the Atlantic Multidecadal Oscillation. Doesn’t the NCAR CCSM4 simulate multidecadal variability?

(For an introductory discussion of the Atlantic Multidecadal Oscillation, refer to the post An Introduction To ENSO, AMO, and PDO — Part 2.)

The second paper is Stevenson et al (2011) “Will there be a significant change to El Niño in the 21st century?”, also paywalled, but a Preprint exists for it. Stevenson et al (2011) is also a model-based study that attempts to illustrate, from the abstract:

“ENSO variability weakens slightly with CO2; however, various significance tests reveal that changes are insignificant at all but the highest CO2 levels.”

In other words, there is little change in the model depiction of the strength and frequency of El Niño and La Niña events with increasing levels of anthropogenic greenhouse gases. The Stevenson et al (2011) abstract concludes with:

“An examination of atmospheric teleconnections, in contrast, shows that the remote influences of ENSO do respond rapidly to climate change in some regions, particularly during boreal winter. This suggests that changes to ENSO impacts may take place well before changes to oceanic tropical variability itself becomes significant.”

The NCAR “Staff Notes” webpage “El Niño and climate change in the coming century” provides this further explanation:

“However, the warmer and moister atmosphere of the future could make ENSO events more extreme. For example, the model predicts the blocking high pressure south of Alaska that often occurs during La Niña winters to strengthen under future atmospheric conditions, meaning that intrusions of Arctic air into North America typical of La Niña winters could be stronger in the future.”

I suspect we’ll be reading something to the effect of “oh, the cold temperatures were predicted by climate models”, referring to Stevenson et al (2011), if the coming 2011/12 La Niña winter is colder than normal in North America.

Since the El Niño-Southern Oscillation (ENSO) is a major part of both papers, let’s start with it. For those new to ENSO, refer to the post “An Introduction To ENSO, AMO, and PDO – Part 1” for further information.

HOW WELL DOES CCSM4 HINDCAST CERTAIN ASPECTS OF ENSO?

Stevenson et al (2011) used NINO3 Sea Surface Temperature (SST) Anomalies as their primary El Niño-Southern Oscillation index. NINO3 is a region in the eastern equatorial Pacific with the coordinates of 5S-5N, 150W-90W. Its sea surface temperature anomalies are used as one of the indices that indicate the frequency and magnitude of El Niño and La Niña events. Unfortunately, the CCSM4 modeled Sea Surface Temperature data are not available for download through the KNMI Climate Explorer as of this writing. That requires us to use the model’s Surface Air Temperature output in the comparisons. The second problem is that the comparable instrument observations dataset, Marine Air Temperature, for the NINO3 region becomes nonexistent before 1950, as shown in Figure 1, and we’d like the comparison to start earlier than 1950.

Figure 1
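For readers who want to reproduce a regional index like NINO3 themselves, here is a minimal Python sketch of the two basic steps: an area-weighted average over the NINO3 box (5S-5N, 150W-90W, i.e. 210E-270E) and removal of the mean annual cycle to form anomalies. The array layout and function names are my own illustration, not KNMI's or NCAR's code; it assumes a gridded monthly field shaped (time, lat, lon).

```python
import numpy as np

def region_mean(field, lats, lons, lat_bounds=(-5.0, 5.0), lon_bounds=(210.0, 270.0)):
    """Area-weighted mean over a lat/lon box (defaults to NINO3).
    `field` is a (time, lat, lon) array of monthly values."""
    la = (lats >= lat_bounds[0]) & (lats <= lat_bounds[1])
    lo = (lons >= lon_bounds[0]) & (lons <= lon_bounds[1])
    box = field[:, la][:, :, lo]
    # Grid cells shrink toward the poles, so weight rows by cos(latitude).
    w = np.cos(np.radians(lats[la]))
    w = np.broadcast_to(w[:, None], box.shape[1:])
    return (box * w).sum(axis=(1, 2)) / w.sum()

def anomalies(series, months_per_year=12):
    """Remove the mean annual cycle from a monthly series
    (length must be a whole number of years)."""
    clim = series.reshape(-1, months_per_year).mean(axis=0)
    return series - np.tile(clim, series.size // months_per_year)
```

With real data you would pass the SST (or Surface Air Temperature) field and its coordinate vectors; here the weighting and climatology removal are the whole trick.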

The other option is to compare Sea Surface Temperature observations for the NINO3 region to the Surface Air Temperature outputs of the model. This should be acceptable in the equatorial Pacific since there is little difference between the observed NINO3 Sea Surface Temperature anomaly and Marine Air Temperature anomaly data. Figure 2 compares observed Sea Surface Temperature anomalies and Marine Air Temperature anomalies from January 1950 to December 2005. As illustrated, there are differences in the magnitude of the year-to-year variability between the two datasets. The Sea Surface Temperature anomalies vary slightly more than the Marine Air Temperature anomalies. But the timing of the variations is similar, as one would expect. The correlation coefficient for the two datasets is 0.92. Also note that the linear trends for the two datasets are basically identical. As mentioned earlier, the comparison of NINO3 Sea Surface Temperature anomaly observations to the Surface Air Temperature hindcasts for the same region should be reasonable—at least for the purpose of this introductory post.

Figure 2
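The correlation-and-trend check behind Figure 2 is simple arithmetic, and it may help to see it spelled out. A hedged sketch: the two inputs are assumed to be aligned monthly anomaly series, and the function names are mine, not from any of the datasets' own tools.

```python
import numpy as np

def trend_per_decade(series):
    """Least-squares linear trend of a monthly series, in units per decade."""
    t = np.arange(series.size)
    slope = np.polyfit(t, series, 1)[0]   # units per month
    return slope * 120.0                   # 120 months per decade

def compare(sst, mat):
    """Correlation coefficient and linear trends for two aligned
    monthly anomaly series (e.g. SST vs Marine Air Temperature)."""
    r = np.corrcoef(sst, mat)[0, 1]
    return r, trend_per_decade(sst), trend_per_decade(mat)
```

Running this on the NINO3 SST and MOHMAT series for 1950-2005 should reproduce the 0.92 correlation and the near-identical trends noted above, subject to the exact versions of the datasets downloaded.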

As I’ve illustrated in numerous earlier posts here, the long-term trend, since 1900, of the more commonly used NINO3.4 Sea Surface Temperature anomalies is basically flat. The same holds true for NINO3 Sea Surface Temperature anomalies, as shown in Figure 3. Based on the linear trend, there has been no rise in the Sea Surface Temperature anomalies for the NINO3 region since 1900. El Niño events dominated from 1900 through the early 1940s, La Niña events prevailed from the early 1940s to the mid 1970s, and then from 1976 to 2005, El Niño events were dominant. In other words, there is a multidecadal component to the frequency and magnitude of El Niño and La Niña events. That’s the reason a trend appears in the data that starts in 1950, Figure 2; the shorter-term data begins during a period when La Niña events dominated and moves into an epoch when El Niño events dominated.

Figure 3

So how well do the ensemble members and ensemble mean of the CCSM4 hindcasts of NINO3 Surface Air Temperatures compare to the NINO3 Sea Surface Temperature observations? Refer to Animation 1. Each of the six ensemble members and the ensemble mean are illustrated by individual graphs. The ensemble member graphs change every three seconds, while the ensemble mean remains in place for six seconds. (This also holds true for Animations 2 and 3.)

Animation 1

The first thing that’s obviously different is that the frequency and magnitude of El Niño and La Niña events of the individual ensemble members do not come close to matching those observed in the instrument temperature record. Should they? Yes. During a given time period, it is the frequency and magnitude of ENSO events that determines how often and how much heat is released by the tropical Pacific into the atmosphere during El Niño events, how much Downward Shortwave Radiation (visible sunlight) is made available to warm “and recharge” the tropical Pacific during La Niña events, and how much heat is transported poleward in the atmosphere and oceans, some of it for secondary release from the oceans during some La Niña events. If the models do not provide a reasonable facsimile of the strength and frequency of El Niño and La Niña events during given epochs, the modelers have no means of reproducing the true causes of the multiyear/multidecade rises and falls of the surface temperature anomalies. The frequency and magnitude of El Niño and La Niña events contribute to the long-term rises and falls in global surface temperature.

Of even greater concern are the NINO3 Surface Air Temperature linear trends exhibited by the CCSM4 model ensemble members and model mean. As discussed earlier, there has been no rise in eastern equatorial Pacific sea surface temperature anomalies from 1900 to present, yet the CCSM4 ensemble members and mean show linear trends that are so high they exceed the rise in measured global surface temperature anomalies. In the real world, cool waters from below the surface of the eastern equatorial Pacific upwell at all times except during El Niño events. It is that feed of cool subsurface water that helps to maintain the relatively flat linear trend there.

The trend in the NCAR CCSM4 NINO3 Surface Air Temperature anomaly hindcast is consistent with the hindcast of NINO3 Sea Surface Temperature anomalies from the previous version of the CCSM coupled climate models, the CCSM3. Figure 4 compares observed NINO3 Sea Surface Temperature anomalies to the hindcast of the CCSM3. (There is only one CCSM3 model run of Sea Surface Temperatures available through the KNMI Climate Explorer.) While the trend of the CCSM3 hindcast of NINO3 Sea Surface Temperature anomalies may not be as high as the trend of the CCSM4 hindcast of NINO3 Surface Air Temperatures, it still shows a significant trend.

Figure 4

In contradiction to this, the NCAR website presents NINO3.4 SST anomalies (they do not provide NINO3) with a flat trend over this period. Refer to Figure 5. So it appears as though NCAR understands that eastern equatorial Sea Surface Temperatures have not risen since 1900, based on the linear trend. Yet for some reason, their CCSM4 coupled climate model cannot recreate this. (The data for Figure 5 is available at the NCAR webpage here. The dataset was prepared for the Trenberth and Stepaniak (2001) paper “Indices of El Niño evolution.”)

Figure 5

To answer the question that heads this section, the CCSM4 coupled climate model does a poor job hindcasting two important aspects of the El Niño-Southern Oscillation.

(For those new to my posts on ENSO, refer to ENSO Indices Do Not Represent The Process Of ENSO Or Its Impact On Global Temperature. It is written at an introductory level and discusses and illustrates with graphs and animations how and why El Niño and La Niña events are responsible for much of the rise in Global Sea Surface Temperatures over the past 30 years, the era of satellite-based Sea Surface Temperature data.)

HOW WELL DOES CCSM4 HINDCAST GLOBAL SURFACE TEMPERATURE ANOMALIES?

Meehl et al (2011) used HADCRUT Global Surface Temperature anomaly data in their Supplementary Information, so we’ll compare the HADCRUT Land plus Sea Surface Temperature anomaly dataset to the ensemble members and mean for the Surface Air Temperatures of the NCAR CCSM4 on a global basis. Refer to Animation 2. (It’s formatted the same as Animation 1: Observations versus six ensemble members and model mean.) The most obvious differences between the observations and the model outputs are the trends. The modeled trends are about 50% higher than those observed from 1900 to 2005. That’s a major difference. The other obvious difference is that the CCSM4 ensemble members and mean do not appear to have the multidecadal component that is so apparent in the Global Surface Temperature anomaly records. Observed Global Surface Temperatures rose from the 1910s to the 1940s, dropped slightly from the 1940s to the 1970s, and then rose again from the 1970s to the late 1990s/early 2000s. The model outputs rise in the latter part of the 20th Century, but fail to rise at a rate comparable to the observations during the early part of the 20th Century and fail to drop from the 1940s to the 1970s.

Animation 2

And to answer the question that heads this section, the CCSM4 coupled climate model does a poor job hindcasting two important and obvious aspects of the Global Surface Temperature anomaly record from 1900 to 2005.

One of the known contributors to the multidecadal variations in Global Surface Temperature anomaly record is the mode of natural variability called the Atlantic Multidecadal Oscillation, or AMO. One might suspect that the AMO does not exist in the CCSM4. Let’s check.

HOW WELL DOES CCSM4 HINDCAST THE ADDITIONAL VARIABILITY IN NORTH ATLANTIC SEA SURFACE TEMPERATURE ANOMALIES?

Note that this is another portion of this post I will redo if and when the CCSM4 Sea Surface Temperature outputs are made available through the KNMI Climate Explorer. Also note that the Atlantic Multidecadal Oscillation is typically represented by detrended North Atlantic Sea Surface Temperature anomalies. But the multidecadal variations are easily visible in the “un-detrended” data, so I have not bothered to detrend it in the following graphs.
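For anyone who does want the conventional detrended AMO index rather than the raw anomalies used here, the detrending step is a one-liner. A minimal sketch, assuming a monthly North Atlantic SST anomaly series as input; the function name is illustrative:

```python
import numpy as np

def detrend(series):
    """Remove the least-squares linear trend from a series --
    the usual first step in building an AMO index from
    North Atlantic SST anomalies."""
    t = np.arange(series.size)
    slope, intercept = np.polyfit(t, series, 1)
    return series - (slope * t + intercept)
```

Because the multidecadal swings are much larger than the century-scale trend in the North Atlantic, the detrended and un-detrended curves tell the same visual story, which is why I have not bothered to detrend in the graphs that follow.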

As discussed earlier, the Sea Surface Temperature outputs of the NCAR CCSM4 are not yet available through the KNMI Climate Explorer. But Sea Surface Temperature anomalies (detrended) are typically used to illustrate the multidecadal variations in the temperature of the North Atlantic. Again, as with the global data, we’ll have to assume that the Marine Air Temperature outputs of the model mimic the Sea Surface Temperatures. The second concern is that land makes up 24% of the area included in the coordinates used for the North Atlantic (0-70N, 80W-0), as shown in Figure 6. The variability of land surface temperature can differ from that of Sea Surface Temperatures.

Figure 6

But as we can see in Figure 7, the instrument observation-based Sea Surface Temperature anomalies of the North Atlantic are tracked quite closely by the observed Land-Plus-Sea Surface Temperature anomalies of the North Atlantic “Plus” (where the “Plus” includes the additional Land Surface Temperature anomaly data encompassed by those coordinates).

Figure 7

The NOAA Earth System Research Laboratory (ESRL) uses a 121-month running-average filter to smooth their Atlantic Multidecadal Oscillation data. Refer to the ESRL AMO webpage. If we smooth the North Atlantic Sea Surface Temperature anomalies and the North Atlantic “Plus” Land+Sea Surface Temperature anomaly observations using the same 121-month filter, Figure 8, we can see the two curves are nearly identical.

Figure 8
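The 121-month smoothing used in Figure 8 (and in ESRL's AMO data) is just a centered running mean with a 121-month window. Here is a sketch of how I understand that filter to work; the implementation details (and the handling of the endpoints, which I fill with NaN) are my own choices, not ESRL's published code.

```python
import numpy as np

def smooth_121(series):
    """Centered 121-month running mean of a monthly series.
    The 60 months at each end, where the window is incomplete,
    are returned as NaN rather than padded or truncated."""
    out = np.full(series.size, np.nan)
    half = 60  # 121-month window: 60 months either side of center
    for i in range(half, series.size - half):
        out[i] = series[i - half:i + half + 1].mean()
    return out
```

A ten-year window like this suppresses ENSO-scale wiggles almost entirely, which is why the smoothed North Atlantic curves isolate the multidecadal signal so cleanly.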

So for the purpose of this post, the comparison of Land+Sea Surface Temperature anomalies to the Surface Air Temperature anomalies of the CCSM4 hindcasts will provide a preliminary look at whether there is a multidecadal component in the North Atlantic “Plus” data where one would expect to find it.

Animation 3 compares observed North Atlantic “Plus” Surface (Land+Sea) Temperature anomalies to the modeled Surface Air Temperatures for the 6 individual ensemble members and the ensemble mean. All data have been smoothed with a 121-month filter. Only two of the six ensemble members hint at multidecadal variability, but the frequency and magnitude are not comparable to the observations.

Animation 3

The NCAR CCSM4 coupled climate model appears to do a poor job of hindcasting the multidecadal variability of North Atlantic temperature anomalies.

NOTE ON MULTIDECADAL VARIABILITY OF MODELS

NOTE: Dr. Kevin Trenberth, Distinguished Senior Scientist at NCAR, and a lead author of three IPCC reports, provided a good overview of the models used in the IPCC AR4 released in 2007. Refer to Nature’s Climate Feedback: Predictions of climate post. There he writes:

“None of the models used by IPCC are initialized to the observed state and none of the climate states in the models correspond even remotely to the current observed climate. In particular, the state of the oceans, sea ice, and soil moisture has no relationship to the observed state at any recent time in any of the IPCC models. There is neither an El Niño sequence nor any Pacific Decadal Oscillation that replicates the recent past; yet these are critical modes of variability that affect Pacific rim countries and beyond. The Atlantic Multidecadal Oscillation, that may depend on the thermohaline circulation and thus ocean currents in the Atlantic, is not set up to match today’s state, but it is a critical component of the Atlantic hurricanes and it undoubtedly affects forecasts for the next decade from Brazil to Europe. Moreover, the starting climate state in several of the models may depart significantly from the real climate owing to model errors. I postulate that regional climate change is impossible to deal with properly unless the models are initialized.”

I suspect we’ll see a similar proclamation when AR5 is published.

Kevin Trenberth then tries to explain in the Nature.com article linked above why the differences between the observations and the models do not matter. But they do matter. When a climate change layman (one who makes the effort to look) discovers that the NCAR model CCSM4 hindcasts a global temperature anomaly curve that warms 50% faster than the observed rise from 1900 to 2005 (as shown in Animation 2), they question the model’s ability to project future global temperatures. The perception is, if the hindcast is 50% too high, then the projections must be at least 50% too high. And when the models don’t resemble the global temperature observations, inasmuch as the models do not have the multidecadal variations of the instrument temperature record, the layman becomes wary. They casually research and discover that natural multidecadal variations have stopped the global warming in the past for 30 years, and they believe it can happen again. Also, the layman can see very clearly that the models have latched onto a portion of the natural warming trends, and that the models have projected upwards from there, continuing the naturally higher multidecadal trend, without considering the potential for a future flattening for two or three or four decades. In short, to the layman, the models appear bogus.

A NOTE ABOUT MARINE AIR VERSUS SEA SURFACE TEMPERATURES

Sea Surface Temperature anomaly data are used in the GISS, Hadley Centre, and NCDC global temperature anomaly products. Yet as shown earlier, there are Marine Air Temperature datasets available. I used one, the Hadley Centre’s MOHMAT, in Figures 1 and 2. Sea Surface Temperatures are used for a number of reasons, some of which are discussed in Chapter 3 of the IPCC AR4 (24MB). (A word find of “Marine” or “NMAT”, without the quotes, will bring you to the discussions.) One of the reasons Sea Surface Temperature data is preferred is data availability. If you thought the global source data coverage for Sea Surface Temperature data was poor, there are even fewer instrument observations for Marine Air Temperature. Animation 4 illustrates a series of maps that indicate in purple which 5 deg by 5 deg grids contain data. It doesn’t indicate whether there are 30 observations, or 300, or 1 in a given month, just that there is data in a purple grid. Animation 4 starts with January 1900 and progresses on a decadal basis through January 2000.

Animation 4

As illustrated, Marine Air Temperature observations are rare south of 30S before 1950 (they’re rare globally, for that matter, in the first half of the 20th Century), and data is virtually nonexistent north of 60N even through 2000.

Based on that, we’ll limit the comparisons of observed and modeled Marine Air and Sea Surface Temperature data for the global oceans to 30S-60N, and start them in 1950. The end month of December 1999 is dictated by the hindcasts of the NCAR CCSM3, which is the earlier version of that NCAR coupled climate model. Also note that there was only 1 model run for the CCSM3 Sea Surface Temperatures at the KNMI Climate Explorer.

Figure 9 compares the linear trends of a Marine Air Temperature anomaly dataset (MOHMAT) to two Sea Surface Temperature datasets (HADSST2 and HADISST) for the latitudes of 30S to 60N, from January 1950 to December 1999. These are instrument observation-based datasets. As illustrated, the Marine Air Temperature anomalies rise at a rate that is significantly less than the two Sea Surface Temperature anomaly datasets. The linear trend of the Marine Air Temperature anomalies is about 52% of the average of the trends for the two Sea Surface Temperature anomaly datasets.

Figure 9

On the other hand, the modeled ensemble mean for the Marine Air Temperature output of the NCAR CCSM3 (earlier version) has a linear trend that is more than double the trend of the modeled Sea Surface Temperature anomalies. The relationship is backwards. Does this backwards relationship between Sea Surface and Marine Air Temperatures continue to exist in the CCSM4?

Figure 10

CLOSING

The preliminary look at the hindcasts of the NCAR CCSM4 sheds a different light on the model-based papers of Meehl et al (2011) and Stevenson et al (2011). Those papers are based on a coupled climate model that cannot reproduce essential portions of the 20th Century Surface Temperature observations.

No matter how well the NCAR CCSM4 can simulate certain aspects and processes of global climate, the fact that it cannot reproduce many portions of the instrument temperature record during the 20th Century emphasizes failings that call into question its ability to project future global or regional climate change.

SOURCE

All observation-based data presented in the post are available through the KNMI Climate Explorer Monthly observations webpage, with one exception.

The NCAR NINO3.4 data used in Figure 5 is available through the NCAR TNI (Trans-Niño Index) and N3.4 (Niño 3.4 Index) webpage.

The NCAR CCSM4 and CCSM3 model output data are available through the KNMI Climate Explorer also, through the Monthly CMIP5 scenario runs and Monthly CMIP3+ scenario runs webpages, respectively.

Cam_S

Great work!

DirkH

Didn’t know that the hindcasting was so bad. One would have thought they could get at least the past right (maybe they didn’t even try; their mission might have been something else).
Especially telling: the wild fluctuations of peaks between ensemble members (great animation, Bob!).

Theo Goodwin

Bob Tisdale writes:
“Kevin Trenberth then tries to explain in the Nature.com article linked above why the differences between the observations and the models do not matter. But they do matter. When a climate change layman (one who makes the effort to look) discovers that the NCAR model CCSM4 hindcasts a global temperature anomaly curve that warms 50% faster than the observed rise from 1900 to 2005 (as shown in Animation 2), they question the model’s ability to project future global temperatures. The perception is, if the hindcast is 50% too high, then the projections must be at least 50% too high. And when the models don’t resemble the global temperature observations, inasmuch as the models do not have the multidecadal variations of the instrument temperature record, the layman becomes wary. They casually research and discover that natural multidecadal variations have stopped the global warming in the past for 30 years, and they believe it can happen again. Also, the layman can see very clearly that the models have latched onto a portion of the natural warming trends, and that the models have projected upwards from there, continuing the naturally higher multidecadal trend, without considering the potential for a future flattening for two or three or four decades. In short, to the layman, the models appear bogus.”
Amen! Pay special attention to this:
“They [ordinary folk] casually research and discover that natural multidecadal variations have stopped the global warming in the past for 30 years, and they believe it can happen again.”
Trenberth and other modelers have never taken ENSO seriously. They have never treated it as a natural process that has its own integrity; rather, they treat it as statistical noise if they treat it at all. For some time now, Bob Tisdale has been emphasizing what a difference it makes if ENSO is treated as a natural process. His contributions have been very valuable, as is this one.
Why do modelers not treat ENSO as a natural process? That is very simple: they use a radiation only model of temperatures, energy budget, you name it. Natural processes such as ENSO are recognized in their models only to the extent that they can be treated as epiphenomena of heat transfer caused by radiation. (I don’t think Mr. Tisdale has gone as far as me, so this is my claim.) Modelers are weighed down and blinded by their a priori assumptions that are necessary to preserve their radiation only models. Also, those assumptions prevent them from having to do the empirical research that might produce actual physical hypotheses. At this time, there are no actual physical hypotheses which describe the collection of natural regularities that make up ENSO. That is understandable. Climate science is in its infancy. At this time, our government is not supporting research necessary to create such hypotheses. That is not understandable. Climate science must not remain forever in its infancy.
Fantastic article, Mr. Tisdale. Thanks much.

oldseadog

During the 1960s I served on several vessels that were part of the British merchant ship weather reporting scheme. We took weather measurements at M/N, 0600, Noon and 1800 using a Stevenson screen with wet and dry thermometers on the weather bridge wing (35 – 50 feet above the sea surface), a thermometer in the main engine cooling water intake (15 – 30 feet below the sea surface), a weekly barograph in the chart room and a mark one human eyeball. All temperatures were to the nearest degree. These reports were sent away in morse every day by the Radio Officer, and the record books were sent back to wherever as they became full.
I understand that these observations could be very useful to a meteorologist trying to produce a forecast for the next day, but I don’t know how these reports could give trends accurate to a tenth of a degree.
On another tack, if the sea surface temperature shows a flat line since 1900, and the land temperature shows a rise, could this be due to the fact that there is no Heat Island Effect at sea?

ferdberple

One of the amazing characteristics of the human eye is our ability to spot patterns in an instant, that are incredibly difficult for computers to spot. The result of millions of years of evolution between predator and prey, an arms race between camouflage and pattern recognition.
Thus, to the layman, there is no difficulty in seeing that the computer hind-casts are not very good. We are not looking at the numbers, we are looking at the patterns. No matter how good the numbers look to the computer, the patterns don’t look right to a human. We can see that they are not the same, even if we don’t know why they are not the same.

Mr. Tisdale:
As usual I’m impressed to HECK with the depth and breadth of your work.
Could I ask ONE simple favor? Let’s take, for example, figure 2. I take it you have the DATA POINTS from which that plot has been made.
Maybe I’m just a “simplistic moron”, but all my years of technical training have me looking at that data set like an Xbar or Rbar chart on some variable…for production. (Which, by the way, obviates the legitimacy of using solid lines through said data points…as they are not technically a continuum, but I digress!) Therefore I say, “I think I should treat this data as a STATISTICAL AGGREGATE.”
If I do, I immediately ask, “What is the STANDARD DEVIATION?” Does this data fit any test for normality? If it does look like a “normally distributed data set”, what is the “statistical significance” of a, say, 0.5, 0.25, 1.25 degree C trend over 10, 50, 100 years????
If the S.D. is say, 1.4 C (as it would appear to be in figure 2) then a 0.6 degree trend in 60 years, would have something like a 5% confidence in being “statistically significant”. Variation in production wise, this WOULD NOT TRIGGER REMEDIATION.
Why should these SYSTEMS fall outside of that analysis?
THANKS FOR YOUR CONSIDERED ANSWER TO THIS!
Max
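[Editor's note: Max's suggestion can be sketched in a few lines. The series below is synthetic (a 0.6 deg C rise over 106 years plus noise with an SD of 0.86, mimicking the NINO3 SST figure Bob quotes later in the thread), not Bob's actual data, and the plain OLS t-test assumes independent residuals, which monthly anomalies violate.]

```python
# Hypothetical sketch of the significance test Max describes: fit an
# OLS trend to a synthetic anomaly series and compute the t-statistic
# of the slope. Names and numbers are illustrative only.
import numpy as np

def trend_significance(anomalies, months_per_year=12):
    """Return (trend in deg C per decade, t-statistic of the slope)."""
    n = len(anomalies)
    t = np.arange(n, dtype=float)
    # Ordinary least-squares slope and intercept
    slope, intercept = np.polyfit(t, anomalies, 1)
    resid = anomalies - (slope * t + intercept)
    # Standard error of the slope (assumes independent residuals; a real
    # test on monthly climate data would correct for autocorrelation)
    se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((t - t.mean())**2))
    per_decade = slope * months_per_year * 10
    return per_decade, slope / se

# Synthetic example: 0.6 deg C rise over 106 years plus noise, SD 0.86
rng = np.random.default_rng(0)
n = 106 * 12
series = np.linspace(0.0, 0.6, n) + rng.normal(0.0, 0.86, n)
trend, tstat = trend_significance(series)
```

Because white noise is assumed, the t-statistic here overstates significance; monthly anomalies are strongly autocorrelated, so the effective sample size is much smaller, which is roughly Max's point.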

In Figure 10, the model has twice the slope of the measurement data. Something's wrong. I trust reality before cyber-reality. Is the discrepancy an intentional subterfuge intended to drive the politics of AGW?
I say this is just another example of the (previously) covert conspiracy, cooked up by a cabal bent on power, not on saving the planet.

Kelvin Vaughan

Build a computer model of my car. Now tell me where my car will be on the 27th of December!

It would be nice if government buildings were a little simpler and less expensive.

kwik

It is good that they start with coupled models. About time.
If they add the effects of low and high clouds, the sun, the transport of sea currents with the Coriolis effect, landmasses redirecting sea currents, different albedos on the landmasses, the fact that the earth is a globe, that the earth is moving through space, and so on, and so on, they will agree, in the end, that the task is impossible.
In the meantime, please do not increase my tax. Can you hear me, Mr. Stoltenberg (Prime Minister of Norway)?

Douglas DC

I got whacked by a warmist on another site when I mentioned that the warming has stopped, and referred them to our Jedi master of the X-Y axis and data compilation, Bob. When told about Mr. Tisdale, the warmist said: "I don't listen to Deniers or use their websites!!"
In other words: both ears plugged, eyes shut, and go "Lalalalalala…"

Chas

Theo, he was right with:
"They [ordinary folk] casually research and discover that natural multidecadal variations have stopped the global warming in the past for 30 years, and they believe it can happen again."
Here is my version of Bob's cumulative Nino3.4 with some added instant Nino3.4 effect:
http://i41.tinypic.com/343i6ug.jpg
The only difference from a simple cumulation is that the effect of strong monthly Nino3.4 values (above ~1.2) is 2.5 times larger than those below this threshold.

Hi Bob
I don't think that many climatologists understand what the North Atlantic is about. In recent months (encouraged by Dr. Curry from Climate Etc.) I've been probing the data sets related to the N. Atlantic: the SST, NAO, etc. I even got some data files emailed by Dr. Hurrell's office (NCAR) which are not available online. I found some interesting, even important, and as yet unknown aspects of the old ocean.
The endless discussions of so-called 'global temperature' trends may be a bit pointless until it is clearly understood what drives the Atlantic and Pacific Oscillations.

DirkH

ferd berple says:
November 5, 2011 at 12:14 pm
"Thus, to the layman, there is no difficulty in seeing that the computer hindcasts are not very good. We are not looking at the numbers, we are looking at the patterns. No matter how good the numbers look to the computer, the patterns don't look right to a human."
You are right about our pattern recognition abilities, but I can assure you that it would not have been the slightest problem for the climate model programmers to quantify the deviations numerically and automatically, had they chosen to do that.
I mean, computing the sum of the squared differences between two individual model runs should be enough to tell you that these things are indistinguishable from glorified random number generators with a built-in warming bias.
Re our pattern recognition abilities vs. the computer's: our visual capabilities have evolved over millions of years. By analysing how the neurons are connected in the visual cortex of (I think) cats, Marr and Hildreth derived the Mexican Hat operator that is now regularly used for computerized edge detection (or its optimized, faster variants). It forms the basis of high-level video compression, and you get its benefits every time you watch a Flash video; basically, it helps the compression codec determine how objects move across the screen, so it can encode just the basic object and a movement vector instead of retransmitting all the pixels for each frame.
I’m giving a link to the German wikipedia page because the English one doesn’t contain much info; enjoy the pictures, they’re good. MPEG4 and DVB are largely German developments so that’s no surprise either.
http://de.wikipedia.org/wiki/Marr-Hildreth-Operator
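[Editor's note: for readers curious what the Marr-Hildreth operator actually does, here is a rough numpy sketch: convolve an image with a Laplacian-of-Gaussian ("Mexican hat") kernel and look for sign changes in the response near edges. The kernel size and sigma are arbitrary illustration choices, not values from any particular codec.]

```python
# Minimal Marr-Hildreth-style edge detection on a synthetic image.
import numpy as np

def mexican_hat_kernel(size=9, sigma=1.4):
    """Discrete Laplacian-of-Gaussian kernel, the 2-D 'Mexican hat'."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    r2 = (x**2 + y**2) / (2 * sigma**2)
    kernel = (r2 - 1) * np.exp(-r2) / (np.pi * sigma**4)
    return kernel - kernel.mean()  # zero net response on flat regions

def convolve2d(image, kernel):
    """Naive 'valid' 2-D convolution (kernel is symmetric, so no flip needed)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A synthetic image: dark left half, bright right half
img = np.zeros((32, 32))
img[:, 16:] = 1.0
response = convolve2d(img, mexican_hat_kernel())
# The response is ~0 on flat regions and changes sign across the edge;
# those zero crossings are the detected edge.
```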
To analyze the one dimensional time series in the ENSO temperature anomaly, you would use something far simpler, even if you would want to do a very thorough analysis… I mean, hey, these climate modelers don’t even do Fourier analysis for all I know…
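[Editor's note: DirkH's sum-of-squared-differences metric from earlier in this comment is trivial to implement. The "model runs" below are synthetic random walks with a built-in warming drift, generated purely for illustration; they are not CCSM4 ensemble members.]

```python
# Quantify how far apart two "model runs" are with the sum of squared
# monthly differences, the metric DirkH suggests applying to ensembles.
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two equal-length series."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sum((a - b) ** 2))

rng = np.random.default_rng(42)
n = 106 * 12  # months, 1900-2005

def fake_run(drift=0.0005, noise=0.1):
    # random month-to-month noise plus a constant warming drift
    return np.cumsum(rng.normal(drift, noise, n))

run_a, run_b = fake_run(), fake_run()
distance = ssd(run_a, run_b)
```

Two independent runs of such a generator typically end up far apart by this metric, which is the comparison DirkH proposes making between ensemble members.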

Kev-in-Uk

DirkH says:
November 5, 2011 at 11:46 am
On the contrary, DirkH: it is obvious the model hindcasts are bad, otherwise I am sure they would have been proclaimed from the highest echelons of the climate voodoo chain!
What I would find interesting would be a developmental sequence of hindcasting that shows how the model has been altered during its development, to see if the hindcasting has actually improved at all since version 1.
In other words, the same as this excellent post by Bob, but done for each version of the model, with the differences/changes to the model made known (the tweaks, if you like). If a model tweak is clearly seen to have no significant effect, then that tweak was obviously of no value, and if it was for, say, a single parameter, we could then say that THAT parameter is likely of lower importance. Of course, if the modelers are doing their job correctly, they will already have established which parameters affect the output more or less, and how much the 'accuracy' of such parameter variations affects the hindcast output. (I don't know if the 'testing' and validation of the models has been published?) But I would have thought that at all stages of model development, hindcasting would be the ONLY real indicator of model validity, and that such validity must be shown over decadal timescales?
I confess to basically ignoring the climate models, because unless the model history and accuracy can be amply demonstrated from start to finish, and remembering things like the forced values in other model code, all we are being presented with is someone's assumptions/estimates/whatever, processed by a computer according to some equally made-up algorithm(s). Not that there is anything wrong with a made-up algorithm per se; one has to start somewhere. But if you then add more and more tweaks onto what is a poor starting algorithm, it ain't never gonna work! (Kind of like trying to make a Ferrari from a VW Golf without first changing the engine: no matter how freaking good the outside looks, the inside 'core' is still a flippin' piddly engine!)
I wonder if anyone (Bob?? anyone??) outside the Team has ever had the opportunity to study and report on a climate model development – with full access to everything, from literally, the first musings through to the final output? I certainly have never heard of anything in this vein – but if anyone knows of such a piece, I’d be more inclined to take a serious interest in climate models.

Ben

The models all have one fundamental problem: there is no evidence yet for the high positive feedbacks they all assume. Any model which has the positive feedback built in, so that it shows > 1 degree of warming over the rest of the century, won't hindcast well. And any model which hindcasts well won't give the desired 3/4/6 degrees of warming forecast.

EternalOptimist

Kelvin Vaughan
where your car will be on 27th December is just noise
but in 17 years time, on 27th December, it will be in the third parking slot from the left, in the biggest KFC in Maine
-K. Trenberth

I notice that the National Snow and Ice Data Center is now counting in tens.
“However, each decade, the October extent has started from a lower and lower point, with the record low extent during the 1980s (1984) substantially higher than the record low extent during the 1990s (1999), which in turn is substantially higher than the record low extent during the 2000s (2007).”
http://nsidc.org/arcticseaicenews/
It's just as well the Earth works in years and parts of years. What does the Sun use as a counting system when it obviously does not have ten fingers and toes?

jorgekafkazar

Bob’s work in this area is far from trivial. One of the major flaws in climatology is emphasis on radiation and temperature, rather than heat flux and natural cycles. They interpret transient heat-shedding events as evidence of global warming. ENSO is the flopping whale in their Danish modern living room.
Theo Goodwin says: “…At this time, there are no actual physical hypotheses which describe the collection of natural regularities that make up ENSO….”
Theo, isn't that somewhat of an overstatement? I believe there are such hypotheses, which, as yet, have not been mathematically described and combined in a way that predicts ENSO fluctuations. Other hypotheses may be necessary to complete the picture. Or it may be too chaotic to describe. I liken it to a grandfather clock with a mouse running up and down the pendulum at odd intervals. There is a cycle to it, but until we know more about the mouse, we're baffled.
ferd berple says: “One of the amazing characteristics of the human eye is our ability to spot patterns in an instant, that are incredibly difficult for computers to spot. The result of millions of years of evolution between predator and prey, an arms race between camouflage and pattern recognition….”
ferd: The difficulty with that is that evolution has given us a "Predator Recognition System" that errs on the conservative side, it being safer to imagine we see a tiger when there is none than vice versa. Humans thus also have a tendency to imagine they see patterns where there are none: Giovanni Schiaparelli's Martian canals, for one notable example.

Stephen Wilde

“At this time, there are no actual physical hypotheses which describe the collection of natural regularities that make up ENSO.”
Due to ocean dominance in the southern hemisphere the warmth from the oceans pushes the mean position of the ITCZ, with its clouds and rain, north of the equator.
That produces an imbalance of solar input to the oceans either side of the equator.
Over time, the imbalance accumulates until, periodically, the warmer surface water in the Pacific surges across the equator and into the northern oceans in the ENSO SST patterns that we observe.
The timing and intensity of ENSO events are modulated by events in the general global air circulation, which also reacts to solar changes.
That constitutes a new hypothesis for the collection of natural regularities that make up ENSO.

bob.
As I've explained many times in the past, the models will probably never get the timing of natural cycles correct. Trenberth is correct:
"None of the models used by IPCC are initialized to the observed state and none of the climate states in the models correspond even remotely to the current observed climate. In particular, the state of the oceans, sea ice, and soil moisture has no relationship to the observed state at any recent time in any of the IPCC models."
In the initial state after spin-up, you basically have a state that never exists: equilibrium.
You then apply external forcings. But all internal forcing is at steady state. From this condition cycles 'can' emerge, but the timing won't be correct.
The best you can hope for is getting the following correct:
1. Frequency
2. Amplitude
3. Trend over many cycles
I think you've shown that #3 is a problem. 1 and 2 may also be problems. But getting the timing right? That's never gunna happen unless models are initialized with internal forcings properly estimated. Which means regional forecasting is gunna be hard.
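[Editor's note: mosher's items 1 and 2 (frequency and amplitude) can at least be checked mechanically with a Fourier transform, as DirkH alluded to above. The sketch below recovers the period and amplitude of a synthetic monthly series with an assumed five-year cycle; it is not CCSM4 output.]

```python
# Estimate the dominant cycle's period and amplitude from a time series.
import numpy as np

def dominant_cycle(series, dt_years):
    """Return (period in years, amplitude) of the strongest non-zero frequency."""
    series = np.asarray(series, float) - np.mean(series)
    spectrum = np.fft.rfft(series)
    freqs = np.fft.rfftfreq(len(series), d=dt_years)
    k = 1 + np.argmax(np.abs(spectrum[1:]))  # skip the zero-frequency bin
    amplitude = 2.0 * np.abs(spectrum[k]) / len(series)
    return 1.0 / freqs[k], amplitude

# Synthetic monthly series: a 5-year cycle with amplitude 0.8 deg C
months = np.arange(100 * 12)
series = 0.8 * np.sin(2 * np.pi * months / (5 * 12))
period, amp = dominant_cycle(series, dt_years=1 / 12)
```

For a model-data comparison one would run both series through the same routine and compare the dominant periods and amplitudes, not their timing.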

Stephen Wilde

“The endless discussions on so called ‘global temperature’ trends may be a bit pointless until it is clearly understood what drives the Atlantic and Pacific Oscillations”
Well initially it is that imbalance between the levels of solar input to the oceans either side of the equator. Eventually the pulses of warmth feed into the Atlantic and then the Arctic Ocean.
Then we need to explain how 500 year cooling or warming trends can arise in the relative strengths of El Nino as against La Nina.
The answer to that is likely solar variability. An active sun gives more zonal jets and/or more poleward climate zones with less global cloudiness and more energy into the oceans for gradually strengthening El Ninos as compared to La Ninas and a gradual rise in global tropospheric temperatures.
The opposite when the sun is less active.
Keep it simple.

bob paglee

The Sun is at the root of all warming, and sunspots deserve more analysis! I still believe sunspots are a key indicator of where global temperatures are headed. There is likely much validity in Prof. Svensmark's theory about how the Sun's variable-intensity heliomagnetic field shields Earth from those cloud-seeding inter-galactic cosmic rays (most of which are probably protons).
They invade Earth’s atmosphere, sometimes more and sometimes less, as the magnetic shielding varies during the Sun’s long-term rising and falling magnetic cycles. Maybe more experiments at CERN, using its incredibly-complex CLOUD chamber, will advance that interesting theory, or they may not.
But my personal conviction is that there is indubitably a relationship between the Sun, its cyclic sunspots, and Earth’s climate. This is evident simply because the chart depicting the International Sunspot Numbers from 1745 to the present clearly shows big variations in peak numbers and shapes of the cyclic curves over that period. Go here to see this yourself: http://spaceweather.com/glossary/sunspotnumber.html
A correlation between sunspot numbers and climate warming and cooling is obvious in the temperature records since reasonably accurate measurements began over those last 260 years or so. I believe that the shape of a sunspot cycle's rise, its peak number, and the shape of its fall can be matched to average climate temperatures for the corresponding time periods that follow.
I have roughly assessed those parameters and find that the rising shape, the peak value, and the stretched-out shape of the last cycle clearly resemble those for the cycles that ended around 1850 or 1880. Compare these time periods with their corresponding climate conditions and you may have an indicator of the near term climate to come.
It may be cooler than you imagined, and far cooler than IPCC’s doctored computer programs have been erroneously predicting.

Stephen Wilde

“But getting the timing right? that’s never gunna happen unless models are initialized with internal forcings properly estimated. Which means regional forecasting is gunna be hard.”
Never mind the models. Look at the real world.
If the surface pressure distribution begins to shift to a more meridional/equatorward pattern, as it did around 2000, then, if it was previously in a poleward/zonal mode, it is clear that warming will have ceased and cooling has begun, due to more global cloudiness and less solar energy getting into the oceans.
The regional implications are clear. Each region will get more weather characteristic of the climate zone which has moved closer to it.
The largest changes will occur in the parts of the mid latitudes which find themselves more often on a different side of the mid latitude jets from the side they were more often on previously.
Large changes will also be observed for areas that shift in or out of arid or semi arid zones.
Or closer to or further from the ITCZ.
So unless the air circulation becomes more zonal/poleward again we will continue to see more incursions of both polar and equatorial air masses into the mid latitudes (with the greater extremes that implies) but with a generally cooling trend.
Regional seasonal trends can be readily discerned from that proposition subject only to shorter term chaotic variability.
Thus a less active sun gives colder winters and cooler, wetter summers in Western Europe, simply because a less active sun gives more meridional/equatorward mid latitude jets. Each region around the world will be affected in a slightly different way depending on its change in position relative to the nearest climate zone.

phlogiston

Theo Goodwin, Stephen Wilde
“At this time, there are no actual physical hypotheses which describe the collection of natural regularities that make up ENSO. That is understandable.”
Here’s one:
http://wattsupwiththat.com/2011/01/25/is-the-enso-a-nonlinear-oscillator-of-the-belousov-zhabotinsky-reaction-type/

Philip Bradley

"Why do modelers not treat ENSO as a natural process? That is very simple: they use a radiation only model of temperatures, energy budget, you name it. Natural processes such as ENSO are recognized in their models only to the extent that they can be treated as epiphenomena of heat transfer caused by radiation. (I don't think Mr. Tisdale has gone as far as me, so this is my claim.) Modelers are weighed down and blinded by their a priori assumptions that are necessary to preserve their radiation only models. Also, those assumptions prevent them from having to do the empirical research that might produce actual physical hypotheses. At this time, there are no actual physical hypotheses which describe the collection of natural regularities that make up ENSO."
An excellent summary, but I would take it further.
The modellers then tune their radiative-forcing-theory-based model against actual observations (more or less hindcasting).
However, they are constrained in how well they can tune the hindcast by two things:
1. Its effect on future predictions. You can assume that better hindcasting = lower future warming predictions.
2. The radiative forcing model/theory incorporated into the model.
While these two things are not independent, IMO the reason they don't produce more accurate hindcasts is that the only way to do so is by violating the first, and probably both, of these constraints.
What this means (assuming I am correct) is that model predictions of future warming are far too high and it is likely the forcing model/theory is wrong.
I will take the models seriously (as real science) on the day a modeller states exactly what observations will invalidate his model.
Outstanding work as always, Bob.

Chas

Steven Mosher,
The ensemble member 0 shown in Animation 3 looks like it has made a reasonable shot at the AMO. Would another run carried out using the same starting conditions give the same result? Or does every run (with the same starting conditions) follow a different trajectory?
Just wondering if the modellers could inch their way to getting the cycles to run in line with reality.

Max Hugoson: Regarding your November 5, 2011 at 12:30 pm comment about data analysis, as you may be aware, I try to make my posts as simple as possible so that persons without technical or statistical backgrounds can understand what I’ve presented. Note I said I try to make them as simple as possible, “try” being the key word, but sometimes the subject matter causes me to stray from that effort. Adding the additional statistical details for some, like yourself, takes me away from my goal of keeping it simple. I do, however, provide links to the source of my data so that individuals can scrutinize the data as much as they want.
With respect to the two datasets in Figure 2, the standard deviation for the NINO3 Marine Air Temperature anomalies is 0.62 deg C, while it’s 0.86 deg C for the NINO3 Sea Surface Temperature anomalies.
Regards

steven mosher says: “As I’ve explained many times in the past the models will probably never get the timing of natural cycles correct.”
And that’s why I included the discussion by Trenberth about initialization. See, I listen to you.
Regards

DirkH

steven mosher says:
November 5, 2011 at 2:27 pm
“I think you’ve shown that #3 is a problem. 1 and 2 may also be problems. But getting the timing right? that’s never gunna happen unless models are initialized with internal forcings properly estimated. Which means regional forecasting is gunna be hard”
When you assume significant positive feedbacks as the IPCC climate scientists do the errors in the regional forecast MUST spread like wildfire to cover the entire globe, throwing your long range forecast (or prediction) completely off the rails.
So here’s the offer.
a) errors in regional forecasts will average themselves out
b) the long term prediction or projection is still correct
c) there are significant positive feedbacks
Pick any two.

Kev-in-Uk says:”I wonder if anyone (Bob?? anyone??) outside the Team has ever had the opportunity to study and report on a climate model development – with full access to everything, from literally, the first musings through to the final output?”
I haven’t seen one, but that doesn’t mean one does not exist.

Ryan Welch

I had always thought that the climate computer models were run against temperature data points until the hindcasts fit the temperature record, then run into the future, with the climate predictions made from that projection. Why did I think this? Well, because I believed the scientists when they said it. And I always thought that their computer projections didn't accurately predict temperatures because the climate system is so chaotic, with many variables, unknowns, and cycles within cycles. But now it appears that the hindcasts were not that accurate in the first place. So I am now even more skeptical than before. It seems that instead of scientists making honest mistakes due to a lack of information, we may have magicians performing magic tricks to fool the masses.

R. Gates

Of course, climate models are only as good as the initial starting conditions put into them and the completeness of the range of dynamical processes which are included. These dynamical processes can only be included when they are known and can be quantified with some precision. It is remarkable that the climate models work as well as they do, considering the chaotic nature (NOT random nature!) of the processes involved.
But climate models are meant to show the dynamical influences and thus the trends, rather than to be used for specific "forecasts" like we might expect from a weather forecast. For example, the models can tell us there could be periods as long as a decade or more without any significant increase in global temperatures from anthropogenic forcing, because of shorter-term processes like ENSO, volcanoes, etc., but they can't tell us exactly WHICH decades those might be, as this is not the purpose of the models. Likewise, they can tell us that we will see an ice-free summer Arctic Ocean, but not exactly which year, or even decade, this will happen, as this was never the intent of the models.
Because we are dealing with chaotic processes, with specific interactions and feedbacks between multiple (n-body) processes and tipping points involved, as well as black-swan events (like volcanoes) always present, exact spatio-temporal predictions are impossible, and always beyond the scope and intent of the models. Being models, they are always undergoing evolution as new dynamical and quantifiable processes are added (e.g., the recent inclusion of EUV effects on stratospheric ozone and circulation).

I second, third, fourth, and fifth! the sheer awesomeness of your graphs, Mr. Tisdale! They made wading through the paper a whole lot easier for a middling invigilator such as myself.
Cheers!

u.k.(us)

Is it just me, or is one wasting his breath when talking about models, on a Bob Tisdale post ?

Douglas DC says: “When told about Mr. Tisdale, the Warmist said: ‘I don’t listen to Deniers or use their websites!!'”
Let them know I’m a Lukewarmer, not a denier. And many times I find AGW proponents using my graphs in their comments, example:
http://www.skepticalscience.com/Ocean-Heat-Poised-To-Come-Back-And-Haunt-Us-.html#65487
Does the NINO3.4 SST anomaly graph posted at comment 13 look familiar? It should. It’s from my September update:
http://bobtisdale.wordpress.com/2011/10/11/september-2011-sea-surface-temperature-sst-anomaly-update/
So the proponent may be viewing one of my graphs and not even know it.

Theo Goodwin

Kev-in-Uk says:
November 5, 2011 at 1:07 pm
“Of course, if the modelers are doing their job correctly, they would already have established which parameters affect the output more or less, and how much the ‘accuracy’ of such parameter variations have on the hindcasting output.”
This is how actual modelers work. The modelers who work for corporations on important stuff treasure each piece of metadata and mine it for all it is worth. Warmista modelers shy away from metadata because it just shows them how incredibly wrong they are.
“(I don’t know if the ‘testing’ and validation of the models will have been published?) But I would have thought that at all stages of model development, the hindcasting would be the ONLY real indicator of model validity – and such validity must be shown over decadal timescales?”
Hindcasting is the first step. If it won't hindcast perfectly, then it is trash. Try explaining to the Exec VP of Finance that you used a model that cannot reproduce last year's figures.
Good post.

Philip Bradley

"But climate models are meant to show the dynamical influences and thus the trends, rather than to be used for specific 'forecasts' like we might expect from a weather forecast."
In which case they aren’t science.
The essence of science is quantified future predictions.
I’m fine with models as investigative tools, but that means their ‘predictions’ have no scientific validity.

Theo Goodwin

jorgekafkazar says:
November 5, 2011 at 2:19 pm
Very good post. I can agree with everything you said. In my opinion, our knowledge of the natural regularities that make up ENSO is rudimentary. Consider the one that is mentioned most often but not yet described in a scientific way: upwelling of cool water at various places during ENSO. Well, where is the upwelling, what is its volume, what is its path, and what causes or controls it? I take it that no one knows.

Theo Goodwin

Stephen Wilde says:
November 5, 2011 at 2:26 pm
Maybe, but what you suggest must be rigorously formulated as physical hypotheses. That requires major empirical research.

Dirk.
Not sure what actual basis you have for your claim, so I’ll assume you are trying to fly by waving your arms

Thanks Bob.
That Trenberth quote is great. It puts it better than I ever could

Theo Goodwin

steven mosher says:
November 5, 2011 at 2:27 pm
‘As I’ve explained many times in the past the models will probably never get the timing of natural cycles correct. Trenberth is correct
“None of the models used by IPCC are initialized to the observed state and none of the climate states in the models correspond even remotely to the current observed climate. In particular, the state of the oceans, sea ice, and soil moisture has no relationship to the observed state at any recent time in any of the IPCC models. ”’
Finally, you and Trenberth agree with me that models cannot substitute for physical hypotheses that describe natural regularities. You will be giving them up, right?

Theo Goodwin

Ryan Welch says:
November 5, 2011 at 3:37 pm
Could not have said it as well as you have.

Kev-in-Uk

Bob Tisdale says:
November 5, 2011 at 3:27 pm
OK, I kind of guessed that! But that also leads to my next query (which I was going to address to Mosh), which is:
Do any models you know of use a pre-set underlying temperature base/trend that accounts for the post-glacial natural temperature recovery?
For example, ignoring alleged AGW forcings from CO2, has anyone constructed a model that, using known variables, can hindcast correctly up to, say, the 1940s (that date being taken as the start of the alleged massive rise in CO2)? My point being that if a very basic model (incorporating, say, solar, orbital, etc. variations) can be shown to hindcast up to THAT point, it would be THAT model that would form the basis of all further models, with the various additions of so-called AGW forcings.
It should be apparent that I know nothing of said climate models, but as an engineer, FE and the modeling of things like slope stability or stress states of soils hold no fear for me. When we look at these problems we always start with real data and the best available equations, and often back-analyse a failure to see if our equations were correct and could have predicted the failure had we analysed the slope prior to it. You get the picture?
So, it would make sense to me (as a lowly engineer) to start with what we suspect we know (i.e. a long term ice age recovery temp trend increase) and try to write a model that reasonably hindcasts that (although of course we are basing temps on proxies) and the other natural variations. From there we can add the traditional forcings we have all come to know and love as mentioned by various AGW proponents.
In the context of climate models therefore, I am curious as to the method of construction of said models – but it would seem there is no one outside the model producers that can know this? Just from a basic scientific method view – this seems crazy, how can a paper using models be peer reviewed if one doesn’t know the intricate model details?

davidmhoffer

I was just thinking about the stark difference between a highly technical and detailed thread like this one and, for example, one on Michael Mann and the FOIA shenanigans. The latter thread is rife with trolls making any number of unfounded and incorrect remarks, ranging from outright and deliberate misinformation, to half-truths twisted beyond recognition to give a false impression, to simple hijacking of the thread to distract readers from the facts.
Just as I was thinking that threads like this one feature a near absence of trolls of any sort, showing rather conclusively in my mind that they have no interest in debating the facts and the science itself, along comes R. Gates to falsify my hypothesis. On the one hand, he seems alone (so far) in this thread, while the Michael Mann FOIA thread is thick with them.
R. Gates says;
For example, the models can tell us there could be periods as long as decade or more without any significant increase in global temperatures >>>
OK R. Gates, I’ll bite. Can you point to any models, or any claims by any modelers, that made any such claim BEFORE the current warming hiatus began?

Thank you for the lucid explanation and comparison. Many of us have been speaking and writing about the shortcomings of numeric models, often, but not always, in reference to models based on assumptions and incomplete information, and the inability of static methods to deal with dynamic situations. The true believers will ignore anything that does not bolster their ignorance.

Kev-in-Uk

Theo Goodwin says:
November 5, 2011 at 4:26 pm
Thanks Theo, I think I am fully aware of the way it should be done – just don’t seem to see much proof of it!
In my post, I was recalling the Harry read-me file and the 'fixes' within the code revealed by Climategate. Why has this not changed the way the models are produced, worked on, and published? To my brash way of thinking, this is no better than charlatan shaman 'science', and I am sick to the back teeth of all the model-based findings spewed from the various academics, etc.
The day someone produces the first fully fledged, BASIC, fully documented and VALIDATED climate model, one that can be shown to hindcast correctly even the most basic data, that is the day we can start to look at models sincerely. In the meantime, it is black-art science in my opinion.
Looking at it another way, take hurricanes and storms for example: a relatively small-scale phenomenon that has been carefully monitored, measured, and studied for the last few decades. How good is the currently used set of storm prediction models? (I don't know, but I certainly doubt they are as good as we would like!) Now scale that up to global climate, and I'm sure anyone would accept that if we can't model local climate features accurately for more than 48 hours, we sure as eggs come from chickens cannot expect to hindcast global climate! So why do they bother with GCMs and CCMs when there doesn't seem (AFAIK?) to be a suitably proven 'base' model?
I know some will think I am just being negative, but to me it is common sense. Using a motor analogy again: if the guy who invented the motorised car had only been thinking of a rocket-powered reusable space shuttle or an 18-wheeler with turbocharged engines, he would have been well ahead of his time, but he would have missed the basic starting point of a chassis, an engine, and four wheels!

DirkH

steven mosher says:
November 5, 2011 at 4:35 pm
“Dirk.
Not sure what actual basis you have for your claim, so I’ll assume you are trying to fly by waving your arms”
Do you think that the postulated positive water vapour feedback has a time lag of more than 2 weeks?

DirkH

Trenberth says:
“I postulate that regional climate change is impossible to deal with properly unless the models are initialized.”
Steven Mosher says:
“Which means regional forecasting is gunna be hard”
How do you ever hope to deliver global forecasts if your models fail for any given region? I get accused of hand-waving? If logic is hand-waving for you, I'll happily continue with it, as I like logic. Get yourself accustomed to logical conjunction; the globe is made up of regions.