Questions Policymakers Should Be Asking Climate Scientists Who Receive Government Funding

Even before the study of human-induced global warming became fashionable, tax dollars funded a major portion of that research. Government organizations continue to supply the vast majority of the money for those research efforts. Yet despite the tens of billions of dollars expended over the past couple of decades, there has been little increase in our understanding of what the future might bring.

The recent 5th Assessment Report from the Intergovernmental Panel on Climate Change (IPCC) proclaims that global surface temperatures are projected to increase through the year 2100, that sea levels will continue to rise, that in some regions rainfall might increase and in others it will decrease, etc. But those were the same basic messages of the 4th Assessment Report in 2007, the 3rd Assessment Report in 2001, and the 2nd Assessment Report in 1995. So we’ve received little benefit from all of those tax dollars spent over the past few decades.

Those predictions of the future are based on simulations of climate using numerical computer programs known as climate models. Past and projected factors that are alleged to impact climate on Earth (known as forcings) serve as inputs to the models. Then the models, based on more assumptions made by programmers, crunch a lot of numbers and regurgitate outputs that are representations of what the future might hold in store, with the monumental supposition that the models properly simulate climate.

But it is well known that climate models are flawed, that they do not properly simulate climate metrics that are of interest to policymakers and the public—like surface temperatures, precipitation, and sea ice area. And in at least one respect the current generation of climate models performs more poorly than the earlier generation. That is, climate models are getting worse, not better, at simulating Earth’s climate.

With that in mind, the following are sample questions that policymakers should be asking climate scientists and agencies who receive government funding for research into human-induced global warming—along with information to support the questions.

Much of the text in the following is taken from my book Climate Models Fail. I have expanded on many of the discussions here.

1. AFTER DECADES OF CLIMATE MODELING EFFORTS, WHY DOES THE CURRENT GENERATION OF CLIMATE MODELS SIMULATE GLOBAL SURFACE TEMPERATURES MORE POORLY THAN THE PRIOR GENERATION?

Background Information: In the following peer-reviewed papers, CMIP stands for the Coupled Model Intercomparison Project, which maintains archives of climate model outputs used by the Intergovernmental Panel on Climate Change (IPCC). The CMIP5 archive was used for the recent IPCC 5th Assessment Report (AR5), while the CMIP3 archive was used for their 4th Assessment Report (AR4) from 2007.

# # #

The reference for this question is the Swanson (2013) paper “Emerging Selection Bias in Large-scale Climate Change Simulations.” The preprint version of the paper is here. It is a remarkable paper inasmuch as Swanson explains why the current generation of climate models (CMIP5) is in better agreement among themselves than the previous generation (CMIP3) but, as a result, performs worse. In other words, the models are growing closer to a consensus answer, but, in doing so, they do not simulate global surface temperatures as well outside of the Arctic.

In the Introduction, Swanson writes (my boldface):

Here we suggest the possibility that a selection bias based upon warming rate is emerging in the enterprise of large-scale climate change simulation. Instead of involving a choice of whether to keep or discard an observation based upon a prior expectation, we hypothesize that this selection bias involves the ‘survival’ of climate models from generation to generation, based upon their warming rate. One plausible explanation suggests this bias originates in the desirable goal to more accurately capture the most spectacular observed manifestation of recent warming, namely the ongoing Arctic amplification of warming and accompanying collapse in Arctic sea ice. However, fidelity to the observed Arctic warming is not equivalent to fidelity in capturing the overall pattern of climate warming. As a result, the current generation (CMIP5) model ensemble mean performs worse at capturing the observed latitudinal structure of warming than the earlier generation (CMIP3) model ensemble. This is despite a marked reduction in the inter-ensemble spread going from CMIP3 to CMIP5, which by itself indicates higher confidence in the consensus solution. In other words, CMIP5 simulations viewed in aggregate appear to provide a more precise, but less accurate picture of actual climate warming compared to CMIP3.

In other words, in an effort to better capture the polar amplification taking place in the Arctic, the current generation of climate models (CMIP5) agrees better among themselves than the prior generation (CMIP3); that is, they are coming together toward the same results so there is less of a spread between climate model outputs. Overall, unfortunately, the CMIP5 models perform worse than the CMIP3 models at simulating global surface temperatures outside of the Arctic.

I’ve read that quote from Swanson (2013) a number of times, because I’ve included it in my book Climate Models Fail and in at least one other blog post. The portion that reads “by itself indicates higher confidence in the consensus solution” stood out for me this time. Is this “marked reduction in the inter-ensemble spread going from CMIP3 to CMIP5” one of the bases for the increased confidence exhibited by the IPCC in their 5th Assessment Report? If so, then the climate scientists associated with the IPCC are fooling themselves, because the current generation of models performs worse than the prior generation.

It’s also remarkable that Swanson (2013) presented why, not if, model performance had grown worse. Apparently, it is common knowledge among the climate science community that CMIP5 models perform worse than the prior generation.

2. AFTER DECADES OF CLIMATE MODELING EFFORTS, WHY CAN’T CLIMATE MODELS PROPERLY SIMULATE SEA ICE LOSSES IN THE ARCTIC OCEAN OR SEA ICE GAINS IN THE SOUTHERN OCEAN SURROUNDING ANTARCTICA?

Arctic sea ice loss outpaced the predictions of an earlier generation of climate models (those stored in the CMIP3 archive). Even though that was a very obvious failing of the models, alarmists broadcast that failure at every opportunity, claiming global warming was worse than anticipated. And even with the Arctic-focused adjustments noted under question 1, the latest generation of models (CMIP5) still has difficulty simulating Arctic sea ice loss. These difficulties are discussed in Stroeve, et al. (2012) “Trends in Arctic sea ice extent from CMIP5, CMIP3 and Observations” [paywalled]. The abstract reads (my boldface):

The rapid retreat and thinning of the Arctic sea ice cover over the past several decades is one of the most striking manifestations of global climate change. Previous research revealed that the observed downward trend in September ice extent exceeded simulated trends from most models participating in the World Climate Research Programme Coupled Model Intercomparison Project Phase 3 (CMIP3). We show here that as a group, simulated trends from the models contributing to CMIP5 are more consistent with observations over the satellite era (1979–2011). Trends from most ensemble members and models nevertheless remain smaller than the observed value. Pointing to strong impacts of internal climate variability, 16% of the ensemble member trends over the satellite era are statistically indistinguishable from zero. Results from the CMIP5 models do not appear to have appreciably reduced uncertainty as to when a seasonally ice-free Arctic Ocean will be realized.

The press and global warming enthusiasts have been hyping the loss of Arctic sea ice, but the models simulate it so poorly that the authors concluded that natural variability causes the bulk of the ice loss.
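Stroeve et al.’s note that 16% of the ensemble trends are “statistically indistinguishable from zero” refers to a standard significance test on an ordinary-least-squares trend. As a rough sketch with entirely synthetic sea ice numbers (the trend and variability values below are invented for illustration, not taken from the satellite record):

```python
import numpy as np

def trend_and_t(y, t):
    """OLS slope and its t statistic; |t| below roughly 2 means the
    trend is statistically indistinguishable from zero."""
    n = len(t)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    sxx = ((t - t.mean()) ** 2).sum()
    se = np.sqrt((resid @ resid) / (n - 2) / sxx)   # standard error of slope
    return slope, slope / se

# Synthetic September sea-ice extent (10^6 km^2): a modest decline
# buried in large year-to-year variability
rng = np.random.default_rng(5)
years = np.arange(1979, 2012, dtype=float)
extent = 7.5 - 0.05 * (years - 1979) + 0.5 * rng.standard_normal(years.size)
slope, tstat = trend_and_t(extent, years)
print(f"trend: {slope:.3f} per year, t = {tstat:.1f}")
```

With a short record and large internal variability, the same calculation can return |t| < 2 even when a decline is imposed, which is how individual model-run trends end up indistinguishable from zero.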

And at the other end of the globe: It is well known that Southern Hemisphere sea ice extent has grown since satellite-based measurements began in 1978. Yet climate models simulate the opposite, a loss of sea ice there. For the model failings at simulating sea ice extent in the Southern Ocean surrounding Antarctica, we’ll refer to Turner et al. (2013) “An Initial Assessment of Antarctic Sea Ice Extent in the CMIP5 Models.” Again, the CMIP5 archive is the latest generation of climate models. [Full paper is paywalled.] The Turner et al. abstract reads (my boldface and brackets):

We examine the annual cycle and trends in Antarctic sea ice extent (SIE) for 18 Coupled Model Intercomparison Project 5 models that were run with historical forcing for the 1850s to 2005. Many of the models have an annual SIE [sea ice extent] cycle that differs markedly from that observed over the last 30 years. The majority of models have too small a SIE at the minimum in February, while several of the models have less than two thirds of the observed SIE at the September maximum. In contrast to the satellite data, which exhibits a slight increase in SIE, the mean SIE of the models over 1979–2005 shows a decrease in each month, with the greatest multi-model mean percentage monthly decline of 13.6% dec⁻¹ in February and the greatest absolute loss of ice of −0.40 × 10⁶ km² dec⁻¹ in September. The models have very large differences in SIE over 1860–2005. Most of the control runs have statistically significant trends in SIE over their full time span and all the models have a negative trend in SIE since the mid-Nineteenth Century. The negative SIE trends in most of the model runs over 1979–2005 are a continuation of an earlier decline, suggesting that the processes responsible for the observed increase over the last 30 years are not being simulated correctly.

Basically, according to Turner et al. (2013), the current generation of climate models cannot simulate the annual seasonal cycle in Antarctic sea ice extent, and the climate models show a decrease in Antarctic sea ice extent since 1979, while satellite-based observations show an increase in sea ice extent there.

The closing clause of Turner et al. (2013) is worth repeating and expanding: “…the processes responsible for the observed increase [in Antarctic sea ice extent] over the last 30 years are not being simulated correctly [by the current generation of climate models].”

Obviously, all of the model-based predictions of gloom and doom about sea ice have no basis in the real world.

3. AFTER DECADES OF CLIMATE MODELING EFFORTS, WHY CAN’T CLIMATE MODELS PROPERLY SIMULATE ATMOSPHERIC RESPONSES TO EXPLOSIVE VOLCANIC ERUPTIONS?

Background Information: The paper presented in this section refers to the North Atlantic Oscillation, which is a much-studied natural variation in sea level pressure (and interdependent wind patterns) that impacts climate in the Northern Hemisphere. You’ll often hear your local weather forecaster referring to the North Atlantic Oscillation…or its sibling the Arctic Oscillation.

# # #

The atmospheric responses to aerosols ejected by explosive volcanic eruptions have been studied for decades. Many people are aware that lower troposphere temperatures and surface temperatures cool temporarily following an explosive eruption. This cooling is caused by a kind of short-term umbrella effect: while the volcanic aerosols remain aloft, they block sunlight. But there are other well-studied atmospheric responses. Not too surprisingly, climate models poorly simulate all of the atmospheric responses to volcanic eruptions.

These failures were discussed in Driscoll, et al. (2012) “Coupled Model Intercomparison Project Phase 5 (CMIP5) Simulations of Climate Following Volcanic Eruptions”. They wrote in the abstract (my boldface):

The ability of the climate models submitted to the Coupled Model Intercomparison Project 5 (CMIP5) database to simulate the Northern Hemisphere winter climate following a large tropical volcanic eruption is assessed. When sulfate aerosols are produced by volcanic injections into the tropical stratosphere and spread by the stratospheric circulation, it not only causes globally averaged tropospheric cooling but also a localized heating in the lower stratosphere, which can cause major dynamical feedbacks. Observations show a lower stratospheric and surface response during the following one or two Northern Hemisphere (NH) winters, that resembles the positive phase of the North Atlantic Oscillation (NAO). Simulations from 13 CMIP5 models that represent tropical eruptions in the 19th and 20th century are examined, focusing on the large-scale regional impacts associated with the large-scale circulation during the NH winter season. The models generally fail to capture the NH dynamical response following eruptions. They do not sufficiently simulate the observed post-volcanic strengthened NH polar vortex, positive NAO, or NH Eurasian warming pattern, and they tend to overestimate the cooling in the tropical troposphere. The findings are confirmed by a superposed epoch analysis of the NAO index for each model. The study confirms previous similar evaluations and raises concern for the ability of current climate models to simulate the response of a major mode of global circulation variability to external forcings. This is also of concern for the accuracy of geoengineering modeling studies that assess the atmospheric response to stratosphere-injected particles.

In other words, according to Driscoll, et al. (2012), climate models simulate too much temporary cooling in response to volcanic aerosols (that is, they’re too sensitive) and they fail to produce the warming that takes place in the Northern Hemisphere for the first few winters after the eruption.

The final sentence in the abstract of Driscoll, et al. (2012) is also important. Basically, they’re saying that climate models perform so poorly at simulating the atmospheric response to volcanic aerosols that they question the accuracy of climate model studies of geoengineering proposals that shade the Earth by injecting aerosols into the stratosphere.
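For readers unfamiliar with the “superposed epoch analysis” mentioned in the Driscoll, et al. abstract, it is simply a composite of a time series in windows centered on repeated events. A minimal sketch, using a made-up NAO-like index and made-up eruption dates (none of these numbers come from observations):

```python
import numpy as np

def superposed_epoch(series, events, window=3):
    """Average a time series in windows centred on each event index."""
    segments = [series[i - window:i + window + 1] for i in events
                if i - window >= 0 and i + window < len(series)]
    return np.mean(segments, axis=0)

# Hypothetical winter NAO index with an imposed positive excursion in
# the two winters after each "eruption" (all values are invented)
rng = np.random.default_rng(3)
nao = 0.3 * rng.standard_normal(150)
eruptions = [30, 70, 110]
for e in eruptions:
    nao[e + 1:e + 3] += 1.5

composite = superposed_epoch(nao, eruptions, window=3)
for lag, value in zip(range(-3, 4), composite):
    print(f"lag {lag:+d}: {value:+.2f}")
```

Averaging across events suppresses unrelated variability, so a consistent post-eruption response stands out at positive lags; Driscoll, et al. used the same technique to confirm that the modeled NAO response was too weak.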

4. AFTER DECADES OF CLIMATE MODELING EFFORTS, WHY DO CLIMATE MODELS CONTINUE TO POORLY SIMULATE PRECIPITATION AND DROUGHT?

Background Information: The term downscaling is used in a quote from one of the papers that serve as references for this question. The UNFCCC (United Nations Framework Convention on Climate Change) here defines downscaling as:

…a method for obtaining high-resolution climate or climate change information from relatively coarse-resolution global climate models (GCMs).

In simpler terms, downscaling is a method that theoretically allows “coarse-resolution” global climate models to be used to simulate regional climate at finer, “high-resolution” scales.
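As an illustration of what “obtaining high-resolution information from coarse-resolution models” can mean in its simplest form, here is a sketch of bilinear interpolation from a coarse grid onto a grid three times finer. Real downscaling methods are far more elaborate; the 3×3 “GCM” temperature field below is invented for illustration:

```python
import numpy as np

def bilinear_downscale(coarse, factor):
    """Interpolate a coarse 2-D field onto a grid `factor` times finer.

    Cell-center coordinates are assumed; edge cells are clamped."""
    ny, nx = coarse.shape
    # Fine-grid coordinates expressed in coarse-grid index space
    y = np.linspace(0, ny - 1, ny * factor)
    x = np.linspace(0, nx - 1, nx * factor)
    y0 = np.clip(np.floor(y).astype(int), 0, ny - 2)
    x0 = np.clip(np.floor(x).astype(int), 0, nx - 2)
    wy = (y - y0)[:, None]          # fractional distance in y
    wx = (x - x0)[None, :]          # fractional distance in x
    c00 = coarse[np.ix_(y0, x0)]
    c01 = coarse[np.ix_(y0, x0 + 1)]
    c10 = coarse[np.ix_(y0 + 1, x0)]
    c11 = coarse[np.ix_(y0 + 1, x0 + 1)]
    return (c00 * (1 - wy) * (1 - wx) + c01 * (1 - wy) * wx
            + c10 * wy * (1 - wx) + c11 * wy * wx)

# A hypothetical 3x3 "GCM" temperature field downscaled onto a 9x9 grid
coarse = np.array([[10., 12., 14.],
                   [11., 13., 15.],
                   [12., 14., 16.]])
fine = bilinear_downscale(coarse, 3)
print(fine.shape)  # (9, 9)
```

Note that interpolation adds no new physics: the fine grid contains no information that was not already in the coarse field, which is one reason Stephens, et al. (discussed under this question) found little foundation for downscaled grid-point precipitation.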

# # #

We’ll provide two papers to serve as references for how poorly climate models simulate precipitation and drought.

The first is Stephens, et al. (2010), “Dreary State of Precipitation in Climate Models.” The title definitely indicates that the paper does not praise the models. The abstract reads:

New, definitive measures of precipitation frequency provided by CloudSat are used to assess the realism of global model precipitation. The character of liquid precipitation (defined as a combination of accumulation, frequency, and intensity) over the global oceans is significantly different from the character of liquid precipitation produced by global weather and climate models. Five different models are used in this comparison representing state-of-the-art weather prediction models, state-of-the-art climate models, and the emerging high-resolution global cloud “resolving” models. The differences between observed and modeled precipitation are larger than can be explained by observational retrieval errors or by the inherent sampling differences between observations and models. We show that the time integrated accumulations of precipitation produced by models closely match observations when globally composited. However, these models produce precipitation approximately twice as often as that observed and make rainfall far too lightly. This finding reinforces similar findings from other studies based on surface accumulated rainfall measurements. The implications of this dreary state of model depiction of the real world are discussed.

In other words, the models seem to do a relatively good job of simulating some factors if the models are compared to data on a global basis, but when the models are looked at regionally, they perform poorly—producing “precipitation approximately twice as often as that observed and make rainfall far too lightly”.
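The accumulation-versus-character distinction Stephens, et al. draw is easy to reproduce with toy numbers. In the sketch below (all values invented), two rainfall series have roughly the same long-term total, yet one rains twice as often at half the intensity, which is the kind of mismatch the paper describes:

```python
import numpy as np

rng = np.random.default_rng(0)
days = 1000
# "Observed-like": rain on ~10% of days at 10 mm per event
obs = np.where(rng.random(days) < 0.10, 10.0, 0.0)
# "Model-like": rain twice as often at half the intensity, so the
# long-term accumulation works out about the same
mod = np.where(rng.random(days) < 0.20, 5.0, 0.0)

for name, p in [("obs-like", obs), ("model-like", mod)]:
    wet = p > 0
    print(f"{name}: total={p.sum():.0f} mm, "
          f"wet-day frequency={wet.mean():.2f}, "
          f"intensity={p[wet].mean():.1f} mm")
```

A globally composited total hides the error; only when frequency and intensity are examined separately does the “twice as often, far too lightly” behavior appear.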

In their closing paragraph, Stephens, et al. (2010) continued with:

The differences in the character of model precipitation are systemic and have a number of important implications for modeling the coupled Earth system…

And:

This implies little skill in precipitation calculated at individual grid points, and thus applications involving downscaling of grid point precipitation to yet even finer-scale resolution has little foundation and relevance to the real Earth system.

In other words, climate model-based predictions of regional changes in precipitation have little basis in reality.

Drought is the topic of the second paper: Taylor, et al. (2012) “Afternoon rain more likely over drier soils”. [Paywalled.] The abstract reads:

Land surface properties, such as vegetation cover and soil moisture, influence the partitioning of radiative energy between latent and sensible heat fluxes in daytime hours. During dry periods, soil-water deficit can limit evapotranspiration, leading to warmer and drier conditions in the lower atmosphere. Soil moisture can influence the development of convective storms through such modifications of low-level atmospheric temperature and humidity, which in turn feeds back on soil moisture. Yet there is considerable uncertainty in how soil moisture affects convective storms across the world, owing to a lack of observational evidence and uncertainty in large-scale models. Here we present a global-scale observational analysis of the coupling between soil moisture and precipitation. We show that across all six continents studied, afternoon rain falls preferentially over soils that are relatively dry compared to the surrounding area. The signal emerges most clearly in the observations over semi-arid regions, where surface fluxes are sensitive to soil moisture, and convective events are frequent. Mechanistically, our results are consistent with enhanced afternoon moist convection driven by increased sensible heat flux over drier soils, and/or mesoscale variability in soil moisture. We find no evidence in our analysis of a positive feedback—that is, a preference for rain over wetter soils—at the spatial scale (50–100 kilometres) studied. In contrast, we find that a positive feedback of soil moisture on simulated precipitation does dominate in six state-of-the-art global weather and climate models—a difference that may contribute to excessive simulated droughts in large-scale models.

That’s a very interesting data-based observation. Taylor, et al. (2012) found that afternoon rains in the real world tended to fall in areas where soils were dry, which would tend to suppress drought. But they also found the opposite occurs in climate models, that models simulated a preference to rain over wetter soils, which would tend to exaggerate droughts.
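The diagnostic behind that comparison can be sketched as follows: compute the soil-moisture anomaly, relative to its surroundings, at the points where afternoon rain fell. A negative mean anomaly means rain prefers drier soils (the observed behavior); a positive one means it prefers wetter soils (the modeled behavior). All data below are synthetic:

```python
import numpy as np

def rain_preference(soil, rain_mask):
    """Mean soil-moisture anomaly (relative to the domain mean) at
    points where afternoon rain fell: negative => rain favours drier soil."""
    anomaly = soil - soil.mean()
    return anomaly[rain_mask].mean()

rng = np.random.default_rng(4)
soil = rng.random(10_000)                 # synthetic soil-moisture field
# "Observed-like" rain: more likely where soil is dry
rain_obs = rng.random(soil.size) < 0.2 * (1.0 - soil)
# "Model-like" rain: more likely where soil is wet
rain_mod = rng.random(soil.size) < 0.2 * soil

print(f"obs-like:   {rain_preference(soil, rain_obs):+.3f}")
print(f"model-like: {rain_preference(soil, rain_mod):+.3f}")
```

The two signs correspond to the negative feedback Taylor, et al. found in observations and the positive feedback they found dominating in the six models.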

The presentation of Taylor, et al. (2012) is here. The summary reads:

  • Afternoon rain favoured over drier soils across globe
  • Large-scale models exhibit opposite behaviour
  • Erroneous depiction of feedback may “lock-in” drought conditions in climate simulations

Roger Pielke Sr. quoted the final sentence of Taylor, et al. (2012) in his post here:

the erroneous sensitivity of convection schemes demonstrated here is likely to contribute to a tendency for large-scale models to ‘lock-in’ dry conditions, extending droughts unrealistically, and potentially exaggerating the role of soil moisture feedbacks in the climate system.

Bottom line of both papers: Climate models simulate precipitation poorly in a number of ways and, as a result, they exaggerate the length of droughts.

5. AFTER DECADES OF CLIMATE MODELING EFFORTS, WHY CAN’T CLIMATE MODELS SIMULATE MULTIDECADAL VARIATIONS IN SEA SURFACE TEMPERATURES?

Background Information: We have discussed the naturally occurring multidecadal variations in the sea surface temperatures of the North Atlantic and North Pacific in a number of blog posts over the years—most recently in the post Multidecadal Variations and Sea Surface Temperature Reconstructions. For further information about the Atlantic Multidecadal Oscillation and Pacific Decadal Oscillation and about how climate models fail to simulate them properly, see the post Questions the Media Should Be Asking the IPCC – The Hiatus in Warming, under the heading of “After decades of efforts, why can’t the climate models used by the IPCC simulate coupled ocean-atmosphere processes that cause multidecadal variations in sea surface temperatures and, in turn, land surface air temperatures?”

Further, the combined impacts of the Atlantic Multidecadal Oscillation and Pacific Decadal Oscillation on regional United States climate have been identified through data analysis. McCabe et al. (2004) “Pacific and Atlantic Ocean influences on multidecadal drought frequency in the United States” is an examination of the impacts of the Atlantic Multidecadal Oscillation and Pacific Decadal Oscillation on drought in the United States. Full paper is here. Another paper that describes the influence of the Atlantic Multidecadal Oscillation on precipitation in the United States is Enfield et al. (2001) “The Atlantic multidecadal oscillation and its relation to rainfall and river flows in the continental U.S.”

# # #

The first paper that serves as reference for this question is Ruiz-Barradas, et al. (2013) “The Atlantic Multidecadal Oscillation in Twentieth Century Climate Simulations: Uneven Progress from CMIP3 to CMIP5.” The full paper is here. After explaining the sample of climate models used for their study, Ruiz-Barradas, et al. (2013) state in the abstract (my boldface and brackets):

The structure and evolution of the SST [sea surface temperature] anomalies of the AMO [Atlantic Multidecadal Oscillation] have not progressed consistently from the CMIP3 to the CMIP5 models. While the characteristic period of the AMO (smoothed with a binomial filter applied fifty times) is underestimated by the three of the models, the e-folding time of the autocorrelations shows that all models underestimate the 44-year value from observations by almost 50%. Variability of the AMO in the 10–20/70–80 year ranges is overestimated/underestimated in the models and the variability in the 10–20 year range increases in three of the models from the CMIP3 to the CMIP5 versions. Spatial variability and correlation of the AMO regressed precipitation and SST anomalies in summer and fall indicate that models are not up to the task of simulating the AMO impact on the hydroclimate over the neighboring continents. This is in spite of the fact that the spatial variability and correlations in the SST anomalies improve from CMIP3 to CMIP5 versions in two of the models. However, a multi-model mean from a sample of 14 models whose first ensemble was analyzed indicated there were no improvements in the structure of the SST anomalies of the AMO or associated regional precipitation anomalies in summer and fall from CMIP3 to CMIP5 projects.

In other words, climate models do not properly simulate the Atlantic Multidecadal Oscillation in any timeframe, and as a result, they fail to capture its impact on precipitation and drought over the adjacent continents.
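The “e-folding time of the autocorrelations” mentioned in the abstract is the lag at which a series’ autocorrelation falls below 1/e, a common measure of persistence. A sketch using a red-noise (AR(1)) stand-in for an AMO-like index (the coefficient 0.9 is arbitrary, chosen only so the result is easy to check against theory):

```python
import numpy as np

def acf(x, max_lag):
    """Lag autocorrelation function, lags 0..max_lag."""
    x = x - x.mean()
    var = (x * x).mean()
    return np.array([1.0] + [(x[:-k] * x[k:]).mean() / var
                             for k in range(1, max_lag + 1)])

def e_folding_time(x, max_lag=100):
    """First lag at which the autocorrelation drops below 1/e."""
    r = acf(x, max_lag)
    below = np.where(r < 1.0 / np.e)[0]
    return int(below[0]) if below.size else None

# Red-noise (AR(1)) surrogate for an AMO-like index; the theoretical
# e-folding time of an AR(1) process with coefficient a is -1/ln(a),
# about 9.5 steps for a = 0.9
rng = np.random.default_rng(1)
a, n = 0.9, 200_000
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = a * x[t - 1] + rng.standard_normal()
print(e_folding_time(x))
```

Observations give roughly a 44-year e-folding time for the AMO; per Ruiz-Barradas, et al., the models come in almost 50% below that, meaning their simulated AMO is far less persistent than the real one.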

At the beginning of their “Concluding Remarks”, Ruiz-Barradas, et al. (2013) explain why it’s important for climate models to be able to accurately simulate the Atlantic Multidecadal Oscillation (my boldface):

Decadal variability in the climate system from the AMO is one of the major sources of variability at this temporal scale that climate models must aim to properly incorporate because its surface climate impact on the neighboring continents. This issue has particular relevance for the current effort on decadal climate prediction experiments been analyzed for the IPCC in preparation for the fifth assessment report. The current analysis does not pretend to investigate into the mechanisms behind the generation of the AMO in model simulations, but to provide evidence of improvements, or lack of them, in the portrayal of spatiotemporal features of the AMO from the previous to the current models participating in the IPCC. If climate models do not incorporate the mechanisms associated to the generation of the AMO (or any other source of decadal variability like the PDO) and in turn incorporate or enhance variability at other frequencies, then the models ability to simulate and predict at decadal time scales will be compromised and so the way they transmit this variability to the surface climate affecting human societies.

I don’t believe I need to translate the final sentence of that quote.

The second paper is the Van Haren, et al. (2012) paper “SST and Circulation Trend Biases Cause an Underestimation of European Precipitation Trends.” “SST” stands for Sea Surface Temperature. The authors write (my boldface):

To conclude, modeled atmospheric circulation and SST trends over the past century are significantly different from the observed ones. These mismatches are responsible for a large part of the misrepresentation of precipitation trends in climate models. The causes of the large trends in atmospheric circulation and summer SST are not known.

For further information about how poorly models simulate sea surface temperatures, see the posts:

The third paper is Tung and Zhou (2012) “Using Data to Attribute Episodes of Warming and Cooling in Instrumental Records.” They studied the longest surface air temperature record, Central England Temperature, and also the HADCRUT4 land-plus-ocean surface temperature record. Both contained an Atlantic Multidecadal Oscillation signal. The last sentence of the abstract for Tung and Zhou (2012) reads:

Quantitatively, the recurrent multidecadal internal variability, often underestimated in attribution studies, accounts for 40% of the observed recent 50-y warming trend.

40% is a sizable contribution to global warming from the Atlantic Multidecadal Oscillation—a contribution that is ignored by the models prepared for the IPCC.

The climate science community typically presents the variability of North Pacific sea surface temperatures in a very abstract form known as the Pacific Decadal Oscillation. By doing so they overlook the naturally occurring multidecadal variations in the sea surface temperatures of the North Pacific that are of similar magnitude, but of a slightly different frequency, to those in the North Atlantic. Refer again to the post Multidecadal Variations and Sea Surface Temperature Reconstructions.

Summary for this heading: Because climate models cannot simulate the mechanisms associated with the Atlantic Multidecadal Oscillation (and the multidecadal variations in the sea surface temperatures of the North Pacific), or the frequencies and magnitudes at which those variations in sea surface temperatures occur, the models are of little value for simulating and predicting future climate in the Northern Hemisphere on decadal, multidecadal or longer timeframes.

And last for the discussion of multidecadal variations in surface temperatures, refer to the blog post Will their Failure to Properly Simulate Multidecadal Variations In Surface Temperatures Be the Downfall of the IPCC?

6. AFTER DECADES OF CLIMATE MODELING EFFORTS, WHY CAN’T CLIMATE MODELS SIMULATE THE BASIC OCEAN-ATMOSPHERE PROCESSES THAT DRIVE EL NIÑO AND LA NIÑA EVENTS?

Background Information: “Phaselock” in the following paper refers to the fact that El Niño and La Niña events are tied to the seasonal cycle.

“Bjerknes feedback,” very basically, means how the tropical Pacific and the atmosphere above it are coupled; i.e., they are interdependent, a change in one causes a change in the other and they provide positive feedback to one another. The existence of this positive “Bjerknes feedback” suggests that El Niño and La Niña events will remain in one mode until something interrupts the positive feedback.
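The feedback loop described above can be caricatured in a few lines: a warm anomaly in the eastern tropical Pacific weakens the trade winds, and weaker trades reinforce the warm anomaly. This is strictly a cartoon to show why a positive feedback keeps the system in one mode until interrupted, not a representation of any published ENSO model; all coefficients are invented:

```python
# Toy positive-feedback loop in the spirit of the Bjerknes feedback:
# SST anomaly T and trade-wind anomaly U reinforce one another until
# damping or an external disturbance interrupts them.
def step(T, U, coupling=0.3, damping=0.1):
    U_new = U + coupling * T - damping * U   # warmer east -> weaker trades
    T_new = T + coupling * U - damping * T   # weaker trades -> warmer east
    return T_new, U_new

T, U = 0.1, 0.0          # small initial warm anomaly, no wind anomaly
for _ in range(10):
    T, U = step(T, U)
print(f"T={T:.2f}, U={U:.2f}")   # both anomalies have grown together
```

Because the coupling term outweighs the damping here, any small anomaly amplifies rather than decays, which is the sense in which the feedback holds the system in one mode.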

# # #

El Niño and La Niña events are the dominant mode of natural ocean-atmosphere variability on Earth. They have long-term impacts on temperature and precipitation patterns globally. Those long-term impacts have been one of the primary focuses of my research over the past 5 years. An introduction to those findings is presented in the illustrated essay “The Manmade Global Warming Challenge” and they are discussed in minute detail in my ebook Who Turned on the Heat?

That aside, climate models fail to properly simulate the most basic processes that drive El Niño and La Niña events. These basic failings were presented in Bellenger, et al. (2013): “ENSO Representation in Climate Models: From CMIP3 to CMIP5.” Preprint copy is here. The section titled “Discussion and Perspectives” begins:

Much development work for modeling group is still needed in order to correctly represent ENSO, its basic characteristics (amplitude, evolution, timescale, seasonal phaselock…) and fundamental processes such as the Bjerknes and surface fluxes feedbacks.

Bellenger, et al. (2013) was, in many respects, a follow-up paper to Guilyardi, et al. (2009) “Understanding El Niño in Ocean-Atmosphere General Circulation Models: Progress and Challenges.” Guilyardi, et al. (2009) is a detailed overview of the many problems climate models have in their attempts to simulate El Niños and La Niñas. The authors of that study cite more than 100 other papers. The following is the most revealing statement in Guilyardi, et al. (2009):

Because ENSO is the dominant mode of climate variability at interannual time scales, the lack of consistency in the model predictions of the response of ENSO to global warming currently limits our confidence in using these predictions to address adaptive societal concerns, such as regional impacts or extremes (Joseph and Nigam 2006; Power, et al. 2006).

In other words, because climate models cannot accurately simulate El Niño and La Niña processes, the authors of that paper have little confidence in climate model projections of regional climate or of extreme events.

7. AFTER DECADES OF CLIMATE MODELING EFFORTS, WHY DO CLIMATE MODELS SIMULATE GLOBAL SURFACE TEMPERATURES SO POORLY THAT THEY FAILED TO ANTICIPATE AND PREDICT THE HALT IN GLOBAL WARMING?

Background Information: There are obvious answers to this question. The first, already discussed, is that climate models cannot simulate the multidecadal variations in sea surface temperatures or the coupled ocean-atmosphere processes that drive them (the Atlantic Multidecadal Oscillation and the decadal and multidecadal variations in the strength, frequency and duration of El Niño and La Niña events). If climate models had included those multidecadal variations in ocean processes that contribute to, or halt, the warming of global surface temperatures, then the projected warming would have to be reduced, thereby minimizing any urgency to respond to the mostly naturally occurring global warming. We presented and discussed this in the post Will their Failure to Properly Simulate Multidecadal Variations In Surface Temperatures Be the Downfall of the IPCC?

There is also another major flaw in the climate models that has been avoided by the climate science community. It is the fact that the modelers have to double the observed rate of the warming for global sea surfaces over the past 3+ decades in order to have the modeled warming of global land surfaces fall into line with observations. This was presented and discussed in Models Fail: Land versus Sea Surface Warming Rates. Also see the graphs here.

# # #

There are two papers that serve as references for the failure of climate models to simulate the halt in the warming of global surface temperatures. The first is Von Storch et al. (2013), “Can Climate Models Explain the Recent Stagnation in Global Warming?” The one-word answer to the title question of their paper is “No.” They stated:

However, for the 15-year trend interval corresponding to the latest observation period 1998-2012, only 2% of the 62 CMIP5 and less than 1% of the 189 CMIP3 trend computations are as low as or lower than the observed trend. Applying the standard 5% statistical critical value, we conclude that the model projections are inconsistent with the recent observed global warming over the period 1998-2012.

According to Von Storch et al. (2013), both generations of models (CMIP3 and CMIP5) cannot explain the recent slowdown in surface warming: the models show continued surface warming, while the observations do not.
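A sketch of the consistency test Von Storch et al. describe, using synthetic trend values rather than the paper’s actual CMIP data (the ensemble size of 62 runs and the 5% critical value follow the quote above; the trend numbers themselves are illustrative assumptions):

```python
import random

random.seed(42)

# Hypothetical 15-year (1998-2012) surface-temperature trends, in deg C
# per decade, for an ensemble of 62 model runs.  These are synthetic
# values for illustration only -- NOT actual CMIP5 output.
model_trends = [random.gauss(0.21, 0.06) for _ in range(62)]

observed_trend = 0.04  # illustrative observed trend, not a measured value

# Fraction of model runs with a trend as low as or lower than observed.
fraction = sum(t <= observed_trend for t in model_trends) / len(model_trends)

# Von Storch et al. apply a 5% critical value: if fewer than 5% of the
# runs produce a trend this low, the ensemble is judged inconsistent
# with the observation.
inconsistent = fraction < 0.05

print(f"fraction of runs <= observed trend: {fraction:.3f}")
print(f"inconsistent at the 5% level: {inconsistent}")
```

The same comparison could be run against any ensemble of trend estimates; the point is simply that an observed trend falling in the extreme lower tail of the model distribution is what the paper calls “inconsistent.”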

The second paper is Fyfe et al. (2013), “Overestimated global warming over the past 20 years.” The authors write:

The evidence, therefore, indicates that the current generation of climate models (when run as a group, with the CMIP5 prescribed forcings) do not reproduce the observed global warming over the past 20 years, or the slowdown in global warming over the past fifteen years.

Bottom line: If the climate models cannot be used to explain the current halt in the warming of global surfaces, then they cannot be used to explain the warming that occurred from the mid-1970s to the late 1990s.

CLOSING

Climate models are portrayed by the media and by political entities like the IPCC as splendid tools for forecasting future climate. The climate science community, however, is well aware that climate models are deeply flawed. Rarely, if ever, are the models’ chronic problems presented to the public and policymakers. In this post, I cited scientific studies that showed that the models are flawed at simulating, for example:

  • The coupled ocean-atmosphere processes of El Niño and La Niña, the world’s largest drivers of variations in global temperature and precipitation.
  • The responses to volcanic eruptions, which can be powerful enough to counteract the effects of even strong El Niño events.
  • Sea surface temperatures.
  • Precipitation, globally and regionally.
  • The influence of El Niño events on hurricanes.
  • The coupled ocean-atmosphere processes associated with decadal and multidecadal variations in sea surface temperatures, which strongly impact land surface temperatures and precipitation on those timescales.

I’ll close with a quote from Dr. Judith Curry, chair of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology and proprietor of the very popular blog Climate Etc. Her recent post, Climate Model Simulations of the AMO, discusses two papers, both of which were also discussed in this post: Ruiz-Barradas et al. (2013), “The Atlantic Multidecadal Oscillation in Twentieth Century Climate Simulations: Uneven Progress from CMIP3 to CMIP5,” and Von Storch et al. (2013), “Can Climate Models Explain the Recent Stagnation in Global Warming?” Dr. Curry concludes her blog post with the following:

Fitness for purpose?

While some in the blogosphere are arguing that the recent pause or stagnation is coming close to ‘falsifying’ the climate models, this is an incorrect interpretation of these results. The issue is the fitness-for-purpose of the climate models for climate change detection and attribution on decadal to multidecadal timescales. In view of the climate model underestimation of natural internal variability on multidecadal time scales and failure to simulate the recent 15+ years ‘pause’, the issue of fitness for purpose of climate models for detection and attribution on these time scales should be seriously questioned. And these deficiencies should be included in the ‘expert judgment’ on the confidence levels associated with the IPCC’s statements on attribution.

That is, to paraphrase Dr. Curry, it is highly questionable whether climate models are able to tell whether any given indicator of climate change is due to natural or to human causes on decadal to multidecadal timescales.

Apparently, the climate science community, with its much-trumpeted numerical models, is no closer than it was decades ago to being able to detect any human fingerprint in global warming or climate change.

A FINAL QUESTION

In light of all the climate model failings outlined above, there is a final question that policymakers should be asking. But this is a question they need to ask themselves about the climate scientists and agencies they fund. It’s pretty obvious. Should they continue to throw good money at an effort that has provided zero results?

31 thoughts on “Questions Policymakers Should Be Asking Climate Scientists Who Receive Government Funding”

  1. Section 3 also raises questions over the cost of SO4 scrubbers on coal-fired power plants as well as futuristic geoengineering proposals.
    Perhaps the costs associated with coal power could be lowered?
    Cheaper energy would be good for all.

  2. Fitness for purpose? …..
    “That is, to paraphrase Dr. Curry, it is highly questionable whether climate models are able to tell whether any given indicator of climate change is due to natural or to human causes on decadal to multidecadal timescales.”

    Perhaps that is not the purpose of the climate models. It seems more likely that the purpose of the climate models is to support the preconceived notion that human activities ARE the reason for climate change/global warming. In that endeavor they are successful, even if they are wrong about the real world.

  3. The answer to all those questions is simple: we do not know enough about this ‘settled science’ to make them work. The other answer is that in fact they do ‘work’, in that they regularly produce banner headlines about how it’s ‘worse than we thought’ and so keep the funding flowing in the area, while giving some politicians all the ‘plausible deniability’ they need.

    You need to separate ‘work’ in its scientific sense from ‘work’ when it comes to politicians and advocacy; for the latter it can be totally wrong and yet still ‘work’.

  4. “Not too surprisingly, climate models poorly simulate all of the atmospheric responses to volcanic eruptions.”

    Because volcanoes can also have a positive impact on warming: they reduce the ozone layer in the stratosphere and, in various ways (including via aerosols, ULV, phytoplankton, etc.), reduce the area of low clouds and hence the albedo.

    Is it a coincidence that the current warming is correlated with the three strong stratospheric volcanic eruptions?

    Needless to say, this positive impact is de facto absent from the models, where volcanoes operate mainly (or exclusively) as a cooling influence on climate.

  5. Do these squandered billions include genuine researchers who have merely put “climate change” in their grant request so they can get funding?
    Do they include all those wonderful Argo floats and satellites that Willis loves playing with?
    Surely all the money cannot be quietly trousered by scallywags, some of it must advance the cause of science.

  6. Robin Hewitt says, January 2, 2014 at 6:41 am

    Do these squandered billions include genuine researchers who have merely put “climate change” in their grant request so they can get funding?

    Interesting point. There is a hidden opportunity cost here. Research that provides promising results should attract more funding thus advancing science quickly (until that well dries up).

    But if things are marked “Climate Change” instead of the real focus of the research then the funds won’t be directed efficiently.

    This is another cost of warping the scientific process.

  7. This piece should be required reading for the incumbent President of the United States, all 435 members of the House of Representatives, every member of the Senate, every state Governor, every state legislator and every member of the 4th Estate (i.e., the media).

    Regretfully, I fear that a substantial proportion of that group is functionally illiterate.

  8. Bob Tisdale says:

    “The recent 5th Assessment Report from the Intergovernmental Panel on Climate Change (IPCC) proclaims that global surface temperatures are projected to increase through the year 2100, that sea levels will continue to rise, that in some regions rainfall might increase and in others it will decrease, etc.”

    In the “Prediction” Game, this is known as “THE BROKEN CLOCK STRATEGY”.

  9. Simple answer – many Government policies are decided on the results and predictions of climate models. Until the policies change, the models won’t get any more accurate. The policies drive the models.

  10. @Dave_G
    Good answer. It is a problem of signaling for continued funds to do the wrong things from deaf and dumb agencies and their handlers. In a system where the only reforms tend to come from crisis, there is little hope of enlightened reform springing up.

  11. The IPCC “scientific” position is that multiple weak indicators are as good as one strong indicator. It is a principle that sends many an innocent to prison or, as Stalin would have preferred, to the firing squad.

  12. “Should they continue to throw good money at an effort that has provided zero results?”
    As long as there is free money with no accountability for results, the world of “it is worse than we thought, please give us more money” will continue. There appears to be little desire to stop this gravy train until the governments have a major financial crisis, as we have already seen in some European nations. The US government has trouble balancing its books and yet it hands out some $21 BILLION of free money annually for global warming related activities. On a world scale this amounts to a $1 billion/day business. The United Nations wants its share of the free money at $100 billion/year.

  13. Any Government that claims fiscal responsibility and proper accounting should be asking these questions and cutting funding. The Canadian model has completely failed. All Canucks should be writing their MP, the Minister, the Prime Minister, the Finance Minister, and the Auditor General of Canada. And I would suggest comparable government representatives in your country. The time of waste is over, time to demand real science and real results.

  14. Thanks, Bob.
    A fine summary of the state of climate models as described by the modellers themselves – damned by their own words.
    Now, how do we get our politicians to read this and all the other essays written on so many sceptic blogs exposing the CAGW scam?
    I’m afraid that only a persistent, long-term temperature fall will convince them – at least ten more years – by which time the energy supply of several Western economies will be ruined for a generation.

  15. ed mister jones says: broken clock theory

    Excellent analogy: as they track the normal variations of the global climate, they will only point out the times when “the clock is right” (twice per day), hence the silence on the 17-year temperature hold. They will be quiet on this year’s record low temps.

  16. How could the climate models possibly be accurate? Models reduce large areas of the Earth to a single point, the grid cell. Think about your own current grid cell: 100 km in horizontal resolution, 1 km vertical, and a 10-minute time step. What are the values of the physical quantities like temperature, pressure, wind speed, cloudiness, albedo, radiative energy flows, CO2, humidity, precipitation and so on? One single value, please.

    Where do you find the observations that give the initial values from the bottom of the oceans to the limits of space? What are the observed right values 10 minutes later, 30 years later and 50 years earlier?

    Climate models try to take smaller-than-grid features like hurricanes and clouds into account by parametrization. It is not physics; it is curve fitting. What could, for example, 76% cloudiness mean? One thunderstorm in a corner of the grid cell, or thin cumulus clouds covering the whole area?

    Models try to calculate solutions to complex problems like heat transfer using numerical methods instead of exact, physics-based equations.

    Ensembles average the values of individual models and runs, but systematic errors in parametrization, initial values and numerical mathematics remain. It is not useful to project that the global average temperature in 2100 will be between 15 and 23 °C. You must get all the variables right at the same time. This includes, among others, the sociological model projecting CO2, the ocean heat content, and the rains and humidity at the top of the tropics, the famous hot spot.

    It might be that the models selected for the official ensemble are the ones that fit the curves of the past the best and project a politically correct future. Fitting the parametrization to the surface station temperatures might not be the right way to do it.
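The resolution figures in the comment above imply some rough magnitudes worth seeing in numbers. A back-of-envelope sketch (illustrative values only, not any particular model’s actual configuration):

```python
# Rough count of grid cells and time steps for a century-long climate
# simulation at the resolution mentioned in the comment above.
EARTH_SURFACE_KM2 = 5.1e8   # approximate surface area of Earth
HORIZ_RES_KM = 100          # horizontal grid spacing (100 km x 100 km cells)
VERT_LEVELS = 50            # e.g. ~50 km of atmosphere at 1 km spacing
STEP_MINUTES = 10           # model time step
YEARS = 100

horizontal_cells = EARTH_SURFACE_KM2 / HORIZ_RES_KM ** 2
total_cells = horizontal_cells * VERT_LEVELS
steps = YEARS * 365.25 * 24 * (60 / STEP_MINUTES)

print(f"horizontal cells: {horizontal_cells:,.0f}")   # 51,000
print(f"total cells:      {total_cells:,.0f}")        # 2,550,000
print(f"time steps:       {steps:,.0f}")              # 5,259,600
print(f"cell-updates:     {total_cells * steps:.2e}")
```

Every one of those cell-updates must carry a single value for each of the physical quantities the commenter lists, which is exactly the point being made.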

  17. Seems you won’t be getting any comments on this excellent article by Bob from the CAGW peeps. Nothing like trying to defend the indefensible and looking like a fool. Maybe Mosher wants to chime in on why all of the billions have been well spent and why we should keep wasting hundreds of billions or even trillions generating this crap? Billions are spent every year on flawed computer models that get worse with every new generation when compared to real-world data. The money would be far better spent on FUSION energy research, the real solution to our energy needs of the future.

  18. Thanks, Bob.

    This first paragraph will seem mis-directed:
    I have read some about great scientific disagreements of the past. I’ll mention just one. Since 1974 I have lived in the vicinity of the landscape formed by the last glacial advance and the floods that followed – Washington State’s glacial valleys and channeled scablands. Mostly, the debate about this was conducted during meetings of geologists and related researchers. Published papers and anecdotal reports are also available. The main characters moved location and interests as their situations dictated. Grant money as seen in “climate science” was not available. At the level of governments, there seems to have been no involvement whereby the science being investigated was used by or influenced policy.

    I think you are doing an admirable service by documenting this thing called “climate science” and questioning the link to policy makers and government funding.

    Whether or not efforts such as this post will have an immediacy of impact is not assured. I wrote a comment on “No Trick Zone” that went about like this:
    It has been a quarter of a century since the idea of GW burst upon the public. Andrew Revkin did a 20 year review at Dot Earth on June 23, 2008. Hansen’s Senate panel appearance was in 1988. Consider that high school seniors, say 18 at the time, are now in their mid-40s. Those that graduated from college with political, social, or general studies majors never understood the science. They simply entered their career paths with the guilt instilled in them for their part in destroying the environment.
    The information now being presented – your (Bob T) posts are some of the best – will appear as strange to the CAGW crowd as the information showing the existence of the Higgs boson. Those 40 year old bureaucrats, elected folks, and activists will likely never understand the issues of “climate science” and will retire believing what they now believe. Younger ones will likely be marginalized if Earth noticeably cools over the next couple of decades.

    Meanwhile, the US, UN, and the EU want to continue the remaking of how things work. Our leaders are the true Ship of Fools. [Big ships don’t stop or turn quickly.]

    Again, Thanks.

  19. Perhaps the most pertinent question to raise is this one:

    ‘Given that high quality, consistent, rigorous and globally uniform temperature data for the earth’s surface, the ocean’s surfaces, the atmosphere, the troposphere and the stratosphere have only been available for 35 years using satellites and 60-odd years using radiosonde balloons, would policy makers like to ask whether the time series of data is yet of sufficient length to justify even attempting to model a system as complex, open, dynamic, chaotic and multivariate as the earth’s climate?’

    Do we really know what the natural cycles of Arctic ice extent are over periods of 1,000 years, rather than 40 years? Despite tantalising hints to the contrary in the logs of shipping explorers, the apparent ‘consensus’ is that the melt we are currently seeing is quite unique. Is it?

    Do we really know what the normal variability of sunspot cycle amplitudes is, not over 350 years but over 2,000 years? Are we therefore even in a position to define what ‘natural variability’ might look like over such timescales?

    Do we really know what the effects of global oceanic circulation currents are when we do not even yet know how such global circulation currents change over centuries?

    We really are at a very early stage of detailed understanding of how climate works in an integrated sense.

    Should we really find it surprising at all that we cannot yet develop climate models which simulate the natural system well?

  20. The only question politicians care about is are the recipients of the money they steal and hand out promoting stealing ever more of our money, and the answer is overwhelmingly yes.

  21. Excellent work, Mr. Tisdale. Your criticisms are as sharp as your accounts of ENSO and similar phenomena. Your questions should be posted and discussed all over the internet. If we had an MSM, your questions would be there too. I thank God that this day has arrived.

  22. totuudenhenki says:
    January 2, 2014 at 8:45 am

    “Where do you find the observations that give the initial values from the bottom of the oceans to the limits of the space. What are the observed right values 10 minutes later, 30 years later and 50 years earlier?”

    Excellent post.

    A fundamental assumption of climate scientists is that any two temperature measurements are comparable. Without such assumptions, they would not be able to model the atmospheric conditions in a room.

  23. I believe this is far too technical to be absorbed by policy makers.

    It needs an executive summary, bullet style, to suit the intellectual level of, and the time pressure on, the world’s policy makers.

  24. With all due respect, Bob, these questions are not suitable for policymakers. They’re too complicated and technical for the average policymaker to fully appreciate the significance and nuances of the question or understand the answer well enough to be able to discern truth from BS.

    Policymakers need to ask basic questions like:

    In a geologic sense of time, are the current temperature and recent trend (century scale) unusual? Follow-ups: In the descent of global average temperature from the Eemian to the last glacial maximum, were there 100-year and even thousand-year periods of warming of about 1 degree Celsius? Have there been any observations outside of what one might expect during this portion of the Milankovitch cycle?

    Why hasn’t the estimated range of climate sensitivity to 2xCO2 been significantly narrowed in 20+ years of intense research? Follow up: Are we certain of the magnitude of the warming or not?

    Why are we continually being bombarded with evidence of warming being presented as evidence of catastrophic anthropogenic warming? Follow up: Are climate scientists really that unfamiliar with logic?

    Have earlier projections of the patterns of warming expected from an enhanced GHE been observed? Follow-ups: Where’s the tropical tropospheric hot spot? Why has the stratosphere stopped cooling? Have RH and absolute humidity trended as expected? (This is probably too complicated an issue as well; look how well they can spin and statistically manipulate facts to at least argue the points well enough that a non-technical observer would probably remain undecided.)

    If more heat is going into the deeper ocean than originally projected, would that not in turn necessarily expand, relative to the original projections, the time frame over which any realized warming could occur? Follow-up: Does adding mass (or including more mass in the calculation), especially mass with high heat capacity, to a system being heated necessarily increase the time it takes a given heat input to warm the system?

    Are you really 95% confident, based on the circumstantial evidence of the brief correlation between atmospheric CO2 concentration and GAST, the fact that CO2 is a GHG (with limited absorption capacity), the broad understanding that adding CO2 to the atmosphere should cause some warming, and the coarse estimations from computer models, that burning fossil fuels will necessarily result in (or at least has a high potential to result in) warming that is unacceptably dangerous to civilization and ecosystems? Follow-up: Could you describe unacceptably dangerous warming in comparison to the risk of living in Tokyo or San Francisco with respect to seismic activity?
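The heat-capacity follow-up in the comment above is basic physics (Q = m·c·ΔT): at a fixed heating power, the time needed to warm a system by a given amount scales linearly with its heat capacity. A toy sketch (illustrative numbers, not a climate calculation):

```python
# Toy illustration of the commenter's point: at constant heating power,
# warming time scales with heat capacity (mass x specific heat).
def time_to_warm(power_w, mass_kg, specific_heat, delta_t):
    """Seconds needed to raise temperature by delta_t at constant power."""
    return mass_kg * specific_heat * delta_t / power_w

C_WATER = 4186.0  # J/(kg K), specific heat of liquid water

# Warm 1 kg of water by 1 K with 1 W, then repeat with twice the mass.
t1 = time_to_warm(power_w=1.0, mass_kg=1.0, specific_heat=C_WATER, delta_t=1.0)
t2 = time_to_warm(power_w=1.0, mass_kg=2.0, specific_heat=C_WATER, delta_t=1.0)

print(t2 / t1)  # doubling the mass doubles the warming time -> 2.0
```

So, all else equal, routing a given heat input into a larger (higher-heat-capacity) reservoir stretches out the time over which any given surface warming can be realized, which is the commenter’s question restated.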

  25. Are the “rightest” GCMs (2? 3?) the ones with the lowest sensitivity? What is the likelihood the same models will be the most correct in 5, 10, and 90 years? Or next year?

  26. @Bob Tisdale: This is a very good summary of where we are today. You’ve presented your questions and answers in an undeniable way – and provided a “summary for dummies” section using “In other words” to explain the conclusive answers to each question.

    After reading the first of the two books I bought from you, “Who Turned on the Heat?”, I can follow your work much more easily. You’ve given me confidence in my understanding of the ocean’s effect on climate!

    It will be fascinating to see what happens to the ENSO process as the sun continues its slump over the next 2 to 10 years and longer into the future if cycle 25 indeed is a dud. I’m excited for you and what you come up with next Bob!

    Well done. I wish there were more comments on this post!

Comments are closed.