Northern Hemisphere snow cover trends (1967-2018): A comparison between climate models and observations

According to the climate models, snow cover should have steadily decreased for all four seasons. However, the observations show that only spring and summer demonstrate a long-term decrease.

Ronan Connolly, Michael Connolly, Willie Soon, David R. Legates, Rodolfo Gustavo Cionco, and V. M. Velasco Herrera

Abstract: Observed changes in Northern Hemisphere snow cover from satellite records were compared to those predicted by all available CMIP5 climate models over the duration of the satellite records, i.e., 1967-2018. A total of 196 climate model runs were analysed (taken from 24 climate models). Separate analyses were conducted for the annual averages and for each of the seasons (winter, spring, summer and autumn/fall). A longer record (1922-2018) for the spring season which combines ground-based measurements with satellite measurements was also compared to the model outputs. The climate models were found to poorly explain the observed trends. While the models suggest snow cover should have steadily decreased for all four seasons, only spring and summer have exhibited a long-term decrease and the pattern of the observed decreases for these seasons was quite different from the model predictions. Moreover, the observed trends for autumn and winter suggest a long-term increase, although these trends were not statistically significant. Possible explanations for the poor performance of the climate models are discussed.

1. Introduction

Snow cover represents one of the main components of the cryosphere, along with sea ice [1], permafrost [2] and the various ice sheets and glaciers [3,4]. Seasonal snow cover is also a major component of the hydrological cycle in mid- and high-latitudes [5]. Snow cover supports a large winter outdoor recreation industry, while snowmelt is an important source of water for many societies. As a result, Sturm et al. (2017) estimate that the financial value of snow to human society is of the order of trillions of dollars [6]. Boelman et al. (2019) further stress that understanding changes and trends in snow cover is also important for the study of the wildlife communities of ecosystems that experience seasonal snow [7].

Temporal changes in snow cover are an important part of global climate change for at least two reasons. First, total snow cover is widely considered a key indicator of climate change [5-10]. Climate models from the 1970s to the present have consistently predicted that human-caused global warming from increasing atmospheric greenhouse gas concentrations should be causing a significant and continual decline in total snow cover [8-18]. Second, changes in snow cover can further contribute to global climate change by altering the Earth’s surface albedo (i.e., the fraction of incoming sunlight reflected to space), and also because snow cover partially insulates the ground below it [8-14,16-21].

Weekly satellite-derived observations for the Northern Hemisphere are available from November 1966 to the present (historical snow cover data provide less spatial and temporal coverage for the Southern Hemisphere). These estimates are a collaborative effort between NOAA and the Rutgers University Global Snow Lab [9-10,19,22-27]. This dataset (henceforth, termed the “Rutgers dataset”) represents the longest running satellite-based record of any environmental variable, and it is widely used by the climate science community [9-10,19,22-27].

Various ground-based measurements of local snowfall and snow cover extend back into the pre-satellite era. Brown and Robinson (2011) were able to combine these data sources with the Rutgers dataset to extend estimates of Northern Hemisphere snow cover for March and April back to 1922 (and to 1915 for North America) [28]. By averaging the two monthly estimates, they derived a combined “spring” estimate. Along with the Rutgers dataset, this second observational dataset will be considered briefly in this paper.

In the 1970s and early 1980s, trends in satellite-derived estimates of Northern Hemisphere snow cover were a cause for consternation within the scientific community. Although climate models had predicted that global (and hemispheric) snow cover should have decreased from human-caused “global warming” due to increasing atmospheric carbon dioxide concentrations [8], Northern Hemisphere snow cover had actually increased since at least the start of the record. At the time, this led to some skepticism about the validity of the climate models, e.g. [9,10,24].

In the late-1980s, average snow cover finally began to decrease. Although Robinson and Dewey (1990) cautioned that it was still too “premature to infer an anthropogenic cause for the recent decrease in hemispheric snow cover” [10], this reversal of trends provided a renewed confidence in the climate models and the human-caused global warming theory (which was by now generating considerable interest from the public).

Still, as time progressed it became increasingly apparent that the observed changes in snow cover were quite different from what the models had predicted. While the models had predicted declines for all four seasons, the observed decrease in snow cover was largely confined to the spring and summer months, and not the autumn/fall or winter [24-26].

Moreover, the decrease in spring and summer largely occurred in a single step-change decrease in the late-1980s. That is, the spring and summer averages remained relatively constant until the late-1980s, then dropped during the late-1980s, and have remained relatively constant at that lower value since [24-26], although a further decrease in summer extent appears to have occurred over the last decade [26]. The climate model-predicted decrease from human-induced warming consisted of a continuous trend rather than a single step-change. Although Foster et al. (2008) were careful not to rule out the possibility that some of the decrease might “be at least partially explained by human-induced warming” [26] (p155), they argued that this step-like change seemed more consistent with a late-1980s shift in the Arctic Oscillation, or some other climatic regime shift [25].

Nonetheless, several studies noted that when calculating a linear trend for the observed spring (or summer) values over a time period that covered the late 1980s step-change, e.g., 1967-2012, the decline introduced a “negative trend”, and that the continual decline predicted by the climate models also implied a “negative trend” for those seasons (albeit, also for the other seasons). Moreover, since the observed decrease occurred for both spring and summer, the annual averages also implied a net negative trend [11,13-18,29-30]. Also, it was noted that the climate models did at least qualitatively replicate the overall annual cycle, i.e., the fact that snow cover increases in the autumn and winter and decreases in the spring and summer [11,14,16].

By comparing linear trends for the spring season over a fixed time-period, it could be argued that some agreement existed between the climate model predictions and the observations [11,13-18,29-30]. In particular, the widely-cited Intergovernmental Panel on Climate Change (henceforth, IPCC)’s Working Group 1 Fifth Assessment Report (2013) [29] used the observed negative 1967-2012 trend in Northern Hemisphere spring snow cover extent as one of its main arguments for concluding that the global warming since the 1950s is very unusual, “Warming of the climate system is unequivocal, and since the 1950s, many of the observed changes are unprecedented over decades to millennia. The atmosphere and ocean have warmed, the amounts of snow and ice have diminished, sea level has risen, and the concentrations of greenhouse gases have increased” [29, emphasis added] (p2).

Some studies have gone further and used climate model-based “detection and attribution” studies to argue that the decline in spring snow cover cannot be explained in terms of natural variability and must be due to human-induced global warming [15,17]. Essentially, these studies compared the output of the CMIP5 climate models [31] using “natural forcings only” and those using “natural and anthropogenic forcings” to the observed spring trends and were only able to simulate a negative trend using the latter [15,17]. This is similar to the approach the IPCC 5th Assessment Report used for concluding that, “It is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century.” [29] (p15). Soon et al. (2015), however, showed that the CMIP5 climate modelling groups only considered a small subset of the available estimates of solar variability for their “natural forcings” and that if other (equally plausible) estimates had been considered, much (or possibly all) of the post-1950s warming could have been explained in terms of natural climate change [32].

Focusing on linear trends still highlighted puzzling discrepancies between the model predictions and observations. First, the observed negative linear trends for spring are considerably larger in magnitude than the model-predicted trends for spring [11,14-18,30]. Although widely-noted, this does not itself appear to have generated much criticism of the reliability of the climate models – perhaps because the signs of the trends are in this case the same. A second discrepancy is that snow cover increased for all seasons in the Tibetan Plateau region [33-35], and more generally, China [35], while the climate models predicted this region should have experienced a decrease in snow cover.

However, a third major discrepancy is more controversial and has led to considerable debate. In recent years, many regions in the Northern Hemisphere have experienced heavy snow storms in the autumn or winter (e.g., the winters of 2009/2010 and 2010/2011). Partly for this reason, linear trends for autumn and winter suggest an increase in snow cover [16,24,36-38], or at least that snow cover has remained reasonably constant [39]. This is in sharp contrast to the climate models which (as explained above) have predicted a continual decrease for all four seasons.

Räisänen (2008) argued that the response of total winter snowfall to global warming is non-trivial because increases in temperature tend to increase precipitation, and if the air temperature remains low enough for the precipitation to fall as snow, this can lead to an increase in total snowfall [12]. Under a period of warming, the models predict an increase in snowfall for some regions (where the average temperature is below about -20°C), but a decrease in other regions [12-13]. However, the regions where the models predict an increase in snowfall are regions that are already snow-covered. Therefore, this cannot explain the observed increase in winter snow cover.

Cohen et al. (2012) suggested that the climate models might be missing important climatic processes – specifically, that a decrease in summer Arctic sea ice could contribute to the observed increase in October snow cover for Eurasia, and that this, in turn, alters the Arctic Oscillation, leading to colder winters (and a greater winter snow cover) [36]. Liu et al. (2012) have made similar arguments [37]. Brown and Derksen (2013) argue that a bias might exist in the Eurasian October snow cover data and that the large increasing trend for October is simply an artefact of bad observations [40]. While Mudryk et al. (2014) offer support to this argument, they note that the problem largely was confined to October in the Eurasian region [16]. Moreover, in a follow-up to the Cohen et al. (2012) study which specifically avoided the use of the October trends, Furtado et al. (2015) presented further evidence that the CMIP5 climate models systematically fail to “capture well the observed snow-[Arctic Oscillation] relationship” [41]. More recently, Hanna et al. (2018) have argued that the CMIP5 climate models also fail to capture the summer Greenland high-pressure blocking phenomenon [42].

Although several studies have compared the observed spring snow cover trends to climate model predictions [11,14-18,30], little direct comparison of trends for other seasons has been conducted [16]. Moreover, most of the comparisons have focused exclusively on linear trends, while the observed trends often show distinctly non-linear fluctuations from year-to-year. Therefore, in this paper, we directly compare the observed Northern Hemisphere snow cover trends for all four seasons (and annual trends) to the CMIP5 climate model hindcasted trends. Our analysis compares both the linear trends (obtained by linear least squares fitting) and the time series themselves.

2. Materials and Methods

Monthly snow cover data for the Northern Hemisphere (in square kilometres) were downloaded from the Rutgers University Global Snow Lab website, https://climate.rutgers.edu/ in January 2019. This dataset begins in November 1966 and provides an almost complete time series to January 2019, although some months in the early portion of the record (i.e., July 1968, June-October 1969, and July-September 1971) have no data. We estimated the values for these missing months as the mean of the equivalent monthly values in the years immediately before and after.
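The gap-filling rule described above can be sketched as follows; the data structure and names are illustrative, not from the original analysis:

```python
# A missing month is estimated as the mean of the same calendar month in
# the adjacent years. `series` maps (year, month) -> snow cover in km^2.

def fill_missing_month(series, year, month):
    """Estimate a missing monthly value as the mean of the equivalent
    month in the preceding and following years."""
    before = series[(year - 1, month)]
    after = series[(year + 1, month)]
    return (before + after) / 2.0

# Example with made-up values (km^2) for a missing July 1968:
series = {(1967, 7): 4.1e6, (1969, 7): 3.9e6}
series[(1968, 7)] = fill_missing_month(series, 1968, 7)  # 4.0e6
```

This simple interpolation is only suitable because so few months are missing, all early in the record.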

Global Climate Model output from the CMIP5 model runs was obtained from the KNMI Climate Explorer website, https://climexp.knmi.nl/ in January 2019 using a Northern Hemisphere land mask. Monthly Northern Hemisphere snow cover from this source was reported as either a percentage or a fraction of the total land area (depending on the format used by each modelling group). We converted these values into square kilometres by multiplying by the total Northern Hemisphere land area (1.00281 × 10^8 km2).
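A minimal sketch of this unit conversion, using the land area quoted above (function and parameter names are illustrative):

```python
# Convert modelled snow cover reported as a fraction (or percentage) of
# Northern Hemisphere land area into km^2.

NH_LAND_AREA_KM2 = 1.00281e8  # total NH land area from the text

def to_km2(value, as_percent=False):
    frac = value / 100.0 if as_percent else value
    return frac * NH_LAND_AREA_KM2

# e.g. a model reporting 25% cover, or the same value as a fraction:
area_from_percent = to_km2(25.0, as_percent=True)
area_from_fraction = to_km2(0.25)
```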

For our comparisons with the Rutgers dataset, we are interested in the modelled output over the same period for which we have observations, i.e., 1967-2018. Each of the submitted CMIP5 model runs covers a much longer period, i.e., 1861-2100. The first part of each run (1861-2005) is generated using “historical forcings”. However, the rest of the run (2006-2100) uses one of four Representative Concentration Pathway (RCP) scenarios [43]: “RCP2.6”, “RCP4.5”, “RCP6.0” and “RCP8.5”. These scenarios offer different projections of how greenhouse gas concentrations could increase to 2100 and are named according to the estimated extra “Radiative Forcing” (in W/m2) they are projected to have added by 2100. Although these scenarios substantially diverge during the 21st century, the differences between the projections by 2018 are relatively modest. For example, if we consider the 1967-2016 annual trend (a metric which will be discussed in Section 3.1), the multi-model mean using all 196 model runs (across all four scenarios) is -29,800 ± 1,600 km2/year. Meanwhile, if we calculate the multi-model means separately for each scenario, the results are: RCP2.6 = -30,600 ± 3,600 km2/year; RCP4.5 = -30,200 ± 2,500 km2/year; RCP6.0 = -26,500 ± 3,400 km2/year; RCP8.5 = -30,700 ± 3,200 km2/year. In other words, the trends up to 2016 are comparable across all scenarios. On the other hand, as will be discussed in Section 3.1, the exact “internal variability” differs for each scenario run. For this reason, we treat each of the scenario runs as a separate run.

As can be seen from Table 1, 15 modelling groups (from 9 countries) contributed snow cover estimates to the CMIP5 project as part of their model output. Some modelling groups used more than one model version (and some tested different “physics versions”) and/or multiple runs for each of the four scenarios. We note that the Climate Explorer website did not provide snow cover results for some of the CMIP5 modelling groups (e.g., the UK’s Hadley Centre) even though the website provides output for these models for other parameters, e.g., surface air temperature. It is possible that some of these missing models either did not calculate or did not submit snow cover output to the CMIP5 project. At any rate, we analysed output from 196 model runs (from 24 climate models).

Table 1. All CMIP5 climate model runs used for the analysis in this article. These correspond to the models that provided snow cover estimates.

Modelling group Country Model name RCP2.6 RCP4.5 RCP6.0 RCP8.5 Total
Beijing Climate Center China bcc-csm1-1 1 1 0 1 3
bcc-csm1-1-m 1 1 1 0 3
Beijing Normal University China BNU-ESM 1 1 0 1 3
Canadian Centre for Climate Modelling and Analysis Canada CanESM2 5 5 0 5 15
National Center for Atmospheric Research USA CCSM4 6 6 6 6 24
Community Earth System Model Contributors USA CESM1-BGC 0 1 0 1 2
CESM1-CAM5 2 2 2 3 9
Centre National de Recherches Météorologiques France CNRM-CM5 1 1 0 5 7
Commonwealth Scientific and Industrial Research Organisation Australia CSIRO-Mk3-6-0 10 10 10 10 40
Laboratory of Numerical Modelling for Atmospheric Sciences and Geophysical Fluid Dynamics (LASG) China FGOALS-g2 1 1 0 1 3
First Institute of Oceanography China FIO-ESM 3 3 3 3 12
NASA Goddard Institute for Space Studies USA GISS-E2-H 3 15 3 3 24
GISS-E2-H-CC 0 1 0 0 1
GISS-E2-R 0 1 0 0 1
GISS-E2-R-CC 0 1 0 0 1
Institute for Numerical Mathematics Russia inmcm4 0 1 0 1 2
Japan Agency for Marine-Earth Science and Technology Japan MIROC5 3 3 3 3 12
MIROC-ESM 1 1 1 1 4
MIROC-ESM-CHEM 1 1 1 1 4
Max Planck Institute Germany MPI-ESM-LR 3 3 0 3 9
MPI-ESM-MR 1 3 0 1 5
Meteorological Research Institute Japan MRI-CGCM3 1 1 1 1 4
Norwegian Climate Centre Norway NorESM1-M 1 1 1 1 4
NorESM1-ME 1 1 1 1 4
15 modelling groups 9 countries 24 models 46 65 33 52 196

Rather than studying the trends for all twelve months separately, we consider just the four Northern Hemisphere seasons: winter (December, January and February), spring (March, April and May), summer (June, July and August) and autumn/fall (September, October and November). The winter averages for a given year use the December from the preceding calendar year. We also discuss the annual averages (i.e., January to December). However, Brown and Robinson’s (2011) “spring” dataset consists of the average over only two months (March and April). For our comparison with the Brown and Robinson (2011) dataset, we use March/April averages only.
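The seasonal averaging convention described above (with winter drawing its December from the preceding calendar year) can be sketched as follows; the data structure and function name are illustrative, not from the original analysis:

```python
def seasonal_average(monthly, year, season):
    """Average monthly values into a seasonal value for `year`.
    `monthly` maps (year, month) -> snow cover. Winter (DJF) uses the
    December of the preceding calendar year, per the text's convention."""
    months = {
        "winter": [(year - 1, 12), (year, 1), (year, 2)],
        "spring": [(year, 3), (year, 4), (year, 5)],
        "summer": [(year, 6), (year, 7), (year, 8)],
        "autumn": [(year, 9), (year, 10), (year, 11)],
    }[season]
    vals = [monthly[key] for key in months]
    return sum(vals) / len(vals)

# Example with made-up values: winter 2000 averages Dec 1999, Jan 2000, Feb 2000.
monthly = {(1999, 12): 40.0, (2000, 1): 44.0, (2000, 2): 42.0}
winter_2000 = seasonal_average(monthly, 2000, "winter")  # 42.0
```

For the Brown and Robinson comparison, the same idea applies but averaging only March and April.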

The Brown and Robinson (2011) dataset was downloaded from the supplementary information provided with their paper [28]. However, their time series excludes Greenland (due to a shortage of pre-satellite era measurements) and only extends to 2010. As can be seen from Figure 1(a), the relationship between the two time-series is highly linear (r2=0.994). Therefore, for comparison with the climate model output, we rescaled their time series so that its values are commensurate with those of the Rutgers dataset over the period of overlap (i.e., 1967-2010), and the uncertainty envelope provided with the dataset (95% confidence interval) was rescaled accordingly. The Brown and Robinson series was then updated with the Rutgers March/April averages (2011-2018) – see Figure 1(b). Error bars for the updated period were assumed to have the same values as the average of the error bars for the final five years of the Brown and Robinson series (i.e., 2006-2010).
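A minimal sketch of the rescaling step, assuming an ordinary least-squares fit between the two series over the overlap period (the function names are illustrative; the original authors' exact procedure may differ in detail):

```python
# Fit y = a + b*x between the Brown & Robinson values (x) and the
# Rutgers values (y) over the 1967-2010 overlap, then map the full
# B&R series onto the Rutgers scale. Pure-Python OLS.

def ols(x, y):
    """Return (intercept, slope) of the least-squares line y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - slope * mx, slope

def rescale(br_series, a, b):
    """Apply the fitted relationship to the whole B&R series."""
    return [a + b * v for v in br_series]
```

The same (a, b) pair would also be applied to the 95% confidence envelope, as described above.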

fig1

Figure 1. (a) Relationship between the Brown & Robinson (2011) dataset (x-axis) and the Rutgers dataset (y-axis) for the average March/April snow cover over the period of common overlap (1967-2010). (b) Comparison of the rescaled version of Brown & Robinson (2011) with the Rutgers dataset for average March/April snow cover.

3. Results

3.1. Comparison of CMIP5 climate modelled snow cover trends to the satellite-derived Rutgers dataset.

Observed Northern Hemisphere snow cover (in km2) for all four seasons (Figure 2a-d) and the annual average (Figure 2e) is compared to the equivalent values for each of the 196 CMIP5 runs, averaged over the 50-year period (1967-2016). The models tend to underestimate the observed values for all four seasons, and a wide range exists for all seasons (although smallest for summer). However, the models do seem to at least capture the general annual cycle in that the snow cover reaches a maximum in winter and a minimum in summer, with intermediate values for the spring and autumn/fall. This qualitative replication of the annual cycle has been noted by others and suggests that there is at least some realism to the models [11,14,16].

Because each model run implies a different average snow cover and these values are typically smaller than the observed averages, a direct comparison between the absolute trends can be challenging. Thus, for the rest of the paper, each time series is converted into an anomaly time series relative to the 1967-2016 average.
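The anomaly conversion described above can be sketched as follows (names are illustrative):

```python
# Convert an absolute snow cover series into anomalies relative to the
# 1967-2016 mean, so that observed and modelled series with different
# absolute levels can be compared on a common baseline.

def to_anomalies(values, years, ref_start=1967, ref_end=2016):
    ref = [v for v, y in zip(values, years) if ref_start <= y <= ref_end]
    baseline = sum(ref) / len(ref)
    return [v - baseline for v in values]
```

Each of the 196 model runs and each observed series is converted with its own baseline, so the comparison concerns changes rather than absolute extents.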

The observed 1967-2016 linear trends are compared to the equivalent trends of each of the 196 CMIP5 runs for each season (Figure 3). Although the observed annual linear trend is consistent with that of the models (Figure 3e), we can see that this is due to the fact that the models significantly underestimate the negative summer trend (Figure 3c) and, to a lesser extent, that of spring (Figure 3b), while failing to predict the positive trends for winter (Figure 3a) and autumn (Figure 3d). In other words, the models poorly describe trends for three of the four seasons (winter, summer and autumn) and fare little better at describing trends in the spring.

fig2

Figure 2. Estimated 1967-2016 Northern Hemisphere snow cover area for each of the 196 CMIP5 simulations for (a) winter, (b) spring, (c) summer, (d) autumn/fall and (e) annual averages. The observed areas for each season (or yearly average) are indicated with dashed blue lines in each panel.

fig3

Figure 3. Distribution of linear 1967-2016 Northern Hemisphere snow cover trends for all 196 CMIP5 simulations for (a) winter, (b) spring, (c) summer, (d) autumn/fall and (e) annual averages. The observed trends for each season (or yearly average) are indicated with dashed blue lines in each panel.

As discussed in the introduction, it can be misleading to limit a comparison of climate models and observations to the linear trends over a single time-period, since the observed time-series are not linear in nature. Technically, a linear trend can be computed for any interval, but when the underlying series is non-linear, the fitted line can misleadingly imply a linearity that the data do not possess.

To remedy this, the entire time-series is plotted (Figure 4(a)) for observed annual snow cover (relative to the 1967-2016 mean). Annual snow cover was lower after the mid-1980s than before. Thus, the linear trend implies a long-term decrease of -25,000 km2/year, but this is largely an artefact of the step-like drop in the mid-1980s [25-26]. Indeed, the last two years had above-average snow cover.

The multi-model mean of all 196 CMIP5 runs (Figure 4(b)) shows that, unlike the observations, the model-predicted trends are reasonably well-described in terms of a decreasing linear trend (-30,000 km2/year). Qualitatively, this can be seen by visually comparing the two plots – the observations plot (Figure 4(a)) shows a considerable amount of yearly variability, while the multi-model mean (Figure 4(b)) shows a gradual but almost continuous decline from 1967 to the present.

While the linear fit associated with the multi-model mean has an r2 of 0.93, that associated with the observations is only 0.19. Due to the long time-period, all linear fits are statistically significant (p = 0.0014 for the observations and p = 10^-28 for the multi-model mean). Also, the error bars (uncertainty) associated with the linear fits are much greater for the observations (±15,000 km2/year) than for the multi-model mean (±2,000 km2/year).
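The fit statistics quoted above (slope, r2, and the ±2 standard error range used in the figure captions) can be computed from standard least-squares formulae; a self-contained sketch, with illustrative names:

```python
import math

def linear_fit_stats(x, y):
    """Return slope, intercept, r^2 and the slope's standard error for a
    least-squares line; the figure captions quote twice this SE."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    sse = sum(r ** 2 for r in resid)          # residual sum of squares
    syy = sum((yi - my) ** 2 for yi in y)     # total sum of squares
    r2 = 1.0 - sse / syy
    se = math.sqrt(sse / (n - 2) / sxx)       # standard error of slope
    return slope, intercept, r2, se
```

A high r2 (as for the multi-model mean) indicates the series is well-described by a line; a low r2 (as for the observations) indicates it is not, even when the slope itself is statistically significant.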

On this basis, the current climate models appear to be unable to explain the observed trends and are therefore inadequate. However, one might disagree because the multi-model mean is largely determined by the “external forcings” that are input to the models and does not reflect the “internal variability” of individual model runs.

With the current climate models, global snow cover is largely dictated by global temperatures (hence they predict that global snow cover should decrease due to the predicted human-induced global warming from greenhouse gases). If a simulation run is adequately equilibrated and not substantially affected by “drift”, then the global temperatures for a given year are mostly determined by:

  1. External Radiative Forcing from “anthropogenic factors”. This includes many factors, but atmospheric greenhouse gas and stratospheric aerosol concentrations are the main components.
  2. External Radiative Forcing from “natural factors”. Currently, models consider only two: changes in Total Solar Irradiance (“solar”) and stratospheric aerosols from volcanic eruptions (“volcanic”).
  3. Internal variability. This is the year-to-year random fluctuations in a given model run. As we will discuss below, some argue that this can be treated as an analogue for natural climatic inter-annual variability.

As Soon et al. (2015) noted, the CMIP5 models only consider a small subset of the available Total Solar Irradiance estimates, and each of the estimates in that particular subset implied that solar output has been relatively constant since the mid-20th century (perhaps with a slight decrease) [32]. Meanwhile, the “internal variability” of each model yields different random fluctuations (since they are random). Therefore, the internal variability of the models tends to cancel each other in the multi-model mean.
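This cancellation of internal variability in the multi-model mean can be illustrated with a toy numerical example (the run counts mirror this study, but the noise series are purely synthetic):

```python
import random

# Averaging N independent zero-mean noise series shrinks the noise
# standard deviation by roughly 1/sqrt(N) - the reason run-to-run
# "internal variability" largely cancels in a 196-run multi-model mean.

random.seed(0)
N_RUNS, N_YEARS = 196, 50
runs = [[random.gauss(0.0, 1.0) for _ in range(N_YEARS)]
        for _ in range(N_RUNS)]
mean_series = [sum(r[t] for r in runs) / N_RUNS for t in range(N_YEARS)]

def std(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# std of any single run is ~1; std of the multi-model mean is
# ~1/sqrt(196), i.e. roughly 0.07.
```

This is why the multi-model mean is smooth and dominated by the shared external forcings, while each individual run retains substantial year-to-year variability.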

fig4

Figure 4. Annually-averaged trends in Northern Hemisphere snow cover (relative to 1967-2016) according to (a) observations; (b) the CMIP5 multi-model mean; (c) the CMIP5 simulation equivalent to +1 S.D.; (d) the median CMIP5 simulation; (e) the CMIP5 simulation equivalent to -1 S.D. The uncertainty ranges associated with the linear trends correspond to twice the standard error associated with the linear fit.

Thus, the 1967-2018 trends of the multi-model mean are almost entirely determined by the modelled “anthropogenic forcing” (a net “human-induced global warming” from increasing greenhouse gases) and short-term cooling “natural forcing” events from the two stratospheric volcanic eruptions that occurred over that period (i.e., the El Chichón eruption in 1982 and the Mount Pinatubo eruption in 1991). However, clearly, the observed trends in annual snow cover (Figure 4(a)) are more complex than that relatively simple explanation.

There seem to be broadly two schools-of-thought within the scientific community on the relevance of the multi-model means. Some researchers argue that the “internal variability” of the climate models is essentially “noise”, and that by averaging together the results of multiple models you can improve the “signal-to-noise” ratio, e.g., [1,32,44-47]. Others disagree and argue that this random noise is a “feature” of the models which can somehow approximate the “internal variability” of nature, e.g., [15-18,29,48-51]. Both camps agree that because the random fluctuations are different for each model run, they cancel each other out in the multi-model ensemble averages. Where they disagree is whether this is relevant for comparing model output to observations.

While we have demonstrated that the multi-model mean cannot fully explain the observed trends in annual snow cover, it is important to also consider the possibility that this is due to the lack of “internal variability” in the multi-model means. There are several methods to address this. For example, when comparing observed and modelled Arctic sea ice trends, Connolly et al. (2017) [1] considered both the multi-model mean and the median model run (in terms of long-term sea ice trends). Rupp et al. (2013), by contrast, used the model output from “pre-industrial simulations” that were run without any “external forcing” to estimate the “internal variability” [15], and Mudryk et al. (2014) used an ensemble of 40 model runs that all used the same climate model and identical “external forcing” [16]. Other groups have calculated confidence intervals from the entire range of model output (e.g., the upper 5% and lower 5%) [17-18,29,45-51].

Here, we consider the “internal variability” of the models by analyzing three representative model runs. All model runs were ranked according to their 1967-2016 linear annual trend (Figure 3(e)). The mean and standard deviation were calculated for all 196 model runs. We then identify (i) the median model run and the model runs whose linear trends are closest to (ii) +1 standard deviation and (iii) -1 standard deviation (Figure 4(c)-(e)).
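The selection rule just described can be sketched as follows; `trends` is a list of per-run linear trend values (km2/year), and the names are illustrative:

```python
def pick_representative_runs(trends):
    """Rank runs by linear trend; return indices of (median run,
    run closest to mean + 1 SD, run closest to mean - 1 SD)."""
    n = len(trends)
    mean = sum(trends) / n
    sd = (sum((t - mean) ** 2 for t in trends) / n) ** 0.5
    order = sorted(range(n), key=lambda i: trends[i])
    median_run = order[n // 2]
    hi_run = min(range(n), key=lambda i: abs(trends[i] - (mean + sd)))
    lo_run = min(range(n), key=lambda i: abs(trends[i] - (mean - sd)))
    return median_run, hi_run, lo_run
```

The three selected runs are then plotted alongside the observations and the multi-model mean (Figure 4(c)-(e)), and the selection is repeated for each season separately.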

Comparing individual model runs to observations yields a more favourable comparison than using the multi-model mean. That is, the individual runs show more year-to-year variability than the multi-model mean. Nonetheless, the individual models still poorly explain the observed trends and all three selected models (i.e., the median and +/- one standard deviation) suggest a fairly continuous long-term decrease in snow cover extent.

If the lack of internal variability in the multi-model mean is proposed as the explanation for the discrepancies between the multi-model mean and the observations, then this does not vindicate the robustness of the climate models. Rather, it merely argues that the models are “not totally inconsistent with” the observations. This argument becomes weaker when the individual seasonal trends are examined.

The above analysis is repeated but for each of the seasonal averages instead of the annual averages – winter (Figure 5); spring (Figure 6); summer (Figure 7) and autumn (Figure 8). Note that the median, +1 standard deviation and -1 standard deviation model runs for each of these seasons are not necessarily the same as for the annual averages.

First, consider the modelled winter (DJF) snow cover trends compared to observations (Figure 5). Climate models predict a long-term decrease in winter snow cover, but this has not been observed. Indeed, the observations imply a net increase in winter snow cover, although this is not statistically significant. At any rate, since the start of the 21st century, snow cover has mostly been above the 1967-2016 average.

Collectively, climate models predict a statistically significant decrease in winter snow cover which has not been observed (Figure 5(c)), even after more than fifty years of observations. However, for some of the models (e.g., the +1 standard deviation model), the modelled decrease is not statistically significant.

fig5

Figure 5. As for Figure 4, except for the winter season (DJF).

Results for spring (MAM – Figure 6) are more encouraging for the climate models, although notable discrepancies still exist between the modelled and observed trends. Perhaps this partially explains why this is the season which has received the most attention, e.g., [11,13-18,29-30].

Although the trends are all negative, the magnitude of the observed trend is greater than most of the models had predicted – this can also be seen from Figure 3(b). This has already been noted by others [11,14-18,30] although the typical implication is that the models perform well but simply “underestimate” the rate of the human-induced global warming to which the decrease is attributed. Derksen and Brown (2012), for example, imply that the discrepancy is “…increasing evidence of an accelerating cryospheric response to global warming” [30, p5].

Such an explanation is flawed. If the reason the models underestimate the negative trend in snow cover in spring (and summer) is that the models underestimate the effect of human-induced global warming, then their failure to explain the winter and autumn trends is even more significant. Moreover, as previously noted, most of the decrease in spring snow cover occurred as a step-like change in the late-1980s [24-26], and the two most recent years (2017 and 2018) had values above the 1967-2016 average.

As for spring, the modelled summer trends (JJA – Figure 7) are negative, as are the observed trends. However, this is where the similarities end. The observed decrease is greater in summer than in spring, yet the modelled decline is much more modest for summer. That is, the discrepancy between the modelled and observed trends is even greater for summer than for spring, which is particularly striking (see Figure 3(c)). A partial explanation might be that the climate models significantly underestimate the total summer snow cover (see Figure 2(c)). Regardless, the models explain the observed summer trends poorly.

Trends for autumn/fall (SON – Figure 8) are broadly similar to those for winter, but the contrast between the observed and modelled trends is even greater. Although the observed autumn snow cover decreased in the late-1970s, it has mostly been above the 1967-2016 average since the early-1990s (Figure 8(a)). As for the other seasons, all models imply an almost continuous decline in autumn snow cover which is not reproduced in the observations.

Brown and Derksen (2013) suggest that the Rutgers dataset overestimates the October snow cover extent for Eurasia in recent years [40], which could partially explain some of the disagreement between the models and the observations [16,40]. However, we note that the Rutgers dataset is likely to be reasonably accurate because the weekly satellite-derived charts from which it is constructed are used operationally and are manually evaluated by a human team for accuracy [27].

fig6

Figure 6. As for Figure 4, except for spring season (MAM).

fig7

Figure 7. As for Figure 4, except for summer season (JJA).

fig8

Figure 8. As for Figure 4, except for autumn/fall season (SON).

3.2. Comparison of CMIP5 climate modelled March/April trends to the updated Brown and Robinson (2011) time series (1922-2018).

Observed spring snow cover trends for the Northern Hemisphere obtained from the updated Brown and Robinson time series are compared to those of the climate models (Figure 9). Since the original time series covers only the period 1922-2010 [28], trends are analysed only for this 88-year period, although all series are plotted to the most recent data point (i.e., 2018).

Results are similar to those of Figure 6. While all series imply a negative trend, the observations imply a greater decrease in snow cover than the models. Moreover, the pattern of the modelled trends is distinct from that of the observations. The models imply there should have been a gradual but almost continuous decrease since the latter half of the 20th century, while the observed trends are more consistent with a step-like decrease in the late-1980s, as has already been noted by others [24-26]. The observed annual variability is also quite substantial.

The strongly non-linear nature of the observed trends implies that reporting the data in terms of a “linear trend” (over some fixed period) is highly misleading. However, we note that this is essentially what the IPCC did in their 5th Assessment Report:

“There is very high confidence that the extent of Northern Hemisphere snow cover has decreased since the mid-20th century (see Figure SPM.3). Northern Hemisphere snow cover extent decreased 1.6 [0.8 to 2.4] % per decade for March and April, and 11.7 [8.8 to 14.6] % per decade for June, over the 1967 to 2012 period. During this period, snow cover extent in the Northern Hemisphere did not show a statistically significant increase in any month.” [29, p7-8]

Their Figure SPM.3 refers to a plot of the Brown and Robinson (2011) March/April “spring” snow cover time series (apparently updated to 2012 using the Rutgers dataset). Based on these data, we agree that the Northern Hemisphere spring snow cover extent decreased over the 1967-2012 period. However, the data before and after that period show a general increase in snow cover (Figure 10). In hindsight, the decision by the IPCC to emphasize the linear trends for such a specific period was unwise and considerably misleading.

We do not wish to read too much into the fact that the linear trends before and after the 1967-2012 period are positive; indeed, those trends are not statistically significant. Rather, we want to stress that the time series is strongly non-linear and that describing it in terms of a single linear trend (over any time period) is inappropriate. An example of a more appropriate method of considering the non-linear nature of the time series is shown in Figure 11.
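The window-sensitivity shown in Figure 10(b) can be sketched numerically: for a step-like series, the fitted “linear trend” depends heavily on the start and end years chosen. The series below is an idealized, synthetic stand-in (a flat level with a single late-1980s drop), not the actual Brown and Robinson data.

```python
import numpy as np

def windowed_trends(years, values, min_len=20):
    """Linear trend (units per decade) for every start/end window
    spanning at least min_len years."""
    trends = {}
    for i in range(len(years)):
        for j in range(i + min_len, len(years) + 1):
            y, v = years[i:j], values[i:j]
            slope = np.polyfit(y, v, 1)[0]          # units per year
            trends[(int(y[0]), int(y[-1]))] = slope * 10.0
    return trends

years = np.arange(1922, 2019)
# Idealized step: constant cover before 1988, constant (lower) cover after.
values = np.where(years < 1988, 32.0, 30.0)

trends = windowed_trends(years, values)
```

Windows spanning the step (e.g., 1967-2012) yield a marked negative trend, while windows lying entirely before or after it yield essentially zero. This is why summarizing such a series with a single linear trend over one chosen period can mislead.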

Figure 11 shows the time-frequency wavelet analysis of the spring snow cover from 1922-2018 using the algorithm recently introduced by Velasco et al. (2018) [52] and Soon et al. (2019) [53]. The result illustrates the rich spectral content of the spring snow cover, with principal periodicities detected at about 23, 7, 4 and 2.4 years. Such observed oscillations do not appear to be adequately accounted for by the CMIP5 models. We have already mentioned that the CMIP5 models neglected to consider any of the published high solar variability estimates of Total Solar Irradiance [32], which could partly explain their poor performance. We also note the importance of considering changes in short-term orbital forcing, which have been of the order of 1-3 W/m2 (depending on season) over the 20th century [54,55].
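The wavelet algorithm of [52,53] is not reproduced here. As a much simpler illustration of the underlying idea, a plain FFT periodogram can recover dominant periodicities from a detrended annual series; the series below is synthetic, containing only two of the reported periods (23 and 7 years).

```python
import numpy as np

def dominant_periods(values, n_peaks=3):
    """Return the periods (in years) of the strongest peaks in the
    FFT periodogram of a linearly detrended annual series."""
    x = np.arange(len(values))
    detrended = values - np.polyval(np.polyfit(x, values, 1), x)
    power = np.abs(np.fft.rfft(detrended)) ** 2
    freqs = np.fft.rfftfreq(len(values), d=1.0)     # cycles per year
    order = np.argsort(power[1:])[::-1] + 1         # skip zero frequency
    return [1.0 / freqs[k] for k in order[:n_peaks]]

t = np.arange(1922, 2019)
series = (31.0 + 1.0 * np.sin(2 * np.pi * t / 23.0)
               + 0.5 * np.sin(2 * np.pi * t / 7.0))
periods = dominant_periods(series)
```

With a 97-year record the frequency resolution is coarse, so the recovered peaks land near, rather than exactly at, 23 and 7 years. A wavelet analysis additionally localizes such oscillations in time, which a global periodogram cannot.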

fig9

Figure 9. As for Figure 4, except for the updated Brown & Robinson (2011) March/April “Spring” estimate, and covering the period 1922-2018.

fig10

Figure 10. (a) Comparison of the 1967-2012 linear trend in Northern Hemisphere spring snow cover extent (red dashed line) discussed by the IPCC AR5 Working Group 1 report with the trends before and after that period (blue dashed lines). (b) How the “linear trend” changes depending on the start and end years of the time period chosen.

Figure 11. Time-frequency wavelet power (middle panel) of the NH spring snow cover from 1922-2018. Top panel shows the original time series while the left panel shows the global time-averaged spectrum indicating significant (95% above the adopted reference red noise spectrum) oscillations at about 23, 7, 4 and 2.4 years.

4. Discussion and conclusions

In this paper, the observed changes in Northern Hemisphere snow cover extent since 1967 are compared to the changes predicted by the CMIP5 climate models. In total, 196 climate model runs (taken from 24 climate models and 15 climate modelling groups) were analysed. A longer time series for Northern Hemisphere spring (March-April), beginning in 1922 [28], was also compared to the equivalent climate model predictions.

According to the climate models, snow cover should have steadily decreased for all four seasons. However, the observations show that only spring and summer demonstrate a long-term decrease. Indeed, the trends for autumn and winter suggest a long-term increase in snow cover, although these trends were not statistically significant. Moreover, the decrease in spring (and, to a lesser extent, summer) was mostly the result of an almost step-like decrease in the late-1980s, which is quite different from the almost continuous gradual decline expected by the climate models.

The CMIP5 climate models expected the decline in snow cover across all seasons because they assume:

1) Northern Hemisphere snow cover trends are largely determined by their modelled global air temperature trends.

2) Global temperature trends since the mid-20th century are dominated by human-caused warming from increasing atmospheric greenhouse gas concentrations [29].

The fact that the climate models expect snow cover trends to be dominated by a human-caused global warming is confirmed by the formal “detection and attribution” studies of spring snow cover trends [15,17]. However, the inability of the climate models to accurately describe the observed snow cover trends indicates that one or both assumptions are problematic. Several possible explanations exist:

a) The models may be correct in their predictions of human-caused global warming, yet are missing key atmospheric circulation patterns or effects which could be influencing Northern Hemisphere snow cover trends [36-37,41-42].

b) The models might be overestimating the magnitude of human-caused global warming, and thereby overestimating the “human-caused” contribution to snow cover trends. This would be consistent with several recent studies which concluded that the “climate sensitivity” to greenhouse gases of the climate models is too high [56-58].

c) The models might be underestimating the role of natural climatic changes. For instance, the CMIP5 models significantly underestimate the naturally-occurring multidecadal trends in Arctic sea ice extent [1]. Others have noted that the climate models are poor at explaining observed precipitation trends [47-48,59], and mid-to-upper atmosphere temperature trends [44-46].

d) The models might be misattributing natural climate changes to human-caused factors. Indeed, Soon et al. (2015) showed that the CMIP5 models neglected to consider any high solar variability estimates for their “natural forcings”. If they had, much or all of the observed temperature trends could be explained in terms of changes in the solar output [32].

It is possible that more than one of the above factors is relevant; therefore, we would encourage more research into each of these four possibilities. At any rate, for now, we recommend that climate model projections of past and future snow cover trends be treated with considerable caution and skepticism. Changes in Northern Hemisphere snow cover have important implications for society [6] and local ecosystems [7]. Therefore, it is important that people planning for future changes in snow cover do not rely on such unreliable projections.

One short-cut which regional and global climate modellers could use to potentially improve the reliability of their snow cover projections is to apply “bias corrections” to bring the hindcasts more in line with the observations. This is a technique which has now become a standard procedure in climate change impact studies, e.g., see Ehret et al. (2012) [60]. However, we agree with Ehret et al. (2012) that any such bias corrections should be made clear and transparent to the end users [60].
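As a minimal sketch of one simple form of bias correction, the example below rescales a model series so that its hindcast-period mean and variance match the observations. The data are synthetic (none of the numbers come from the paper), and Ehret et al. [60] describe more sophisticated variants (e.g., quantile mapping) along with their caveats.

```python
import numpy as np

def mean_variance_correction(model_hist, obs_hist, model_series):
    """Shift and rescale model_series so that, over the hindcast
    period, its mean and standard deviation match the observations."""
    scale = np.std(obs_hist) / np.std(model_hist)
    return np.mean(obs_hist) + scale * (model_series - np.mean(model_hist))

rng = np.random.default_rng(1)
# Toy hindcast: the "model" runs 2 units too low and is too smooth.
obs = 45.0 + rng.normal(0.0, 2.0, 50)
model = 43.0 + rng.normal(0.0, 1.0, 50)

corrected = mean_variance_correction(model, obs, model)
```

By construction the corrected hindcast reproduces the observed mean and variance exactly. The transparency concern raised above is that such corrections can mask, rather than fix, a model's inability to reproduce the observed behaviour.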

In the meantime, more than 50 years of satellite data exist (Figures 4(a), 5(a), 6(a), 7(a) and 8(a)) to estimate the climatic variability in snow cover for each of the seasons, as well as nearly 100 years of data for spring snow cover (Figure 9(a)). Consequently, the observed historical variability for each of the seasons is a far more plausible starting point than the current climate model projections for climate change adaptation policies.


Supplementary Materials: The various time series are available online at https://www.mdpi.com/2076-3263/9/3/135#supplementary

Author Contributions: RC, MC and WS carried out most of the conceptualization, methodology and formal analysis, but all authors contributed to the writing of this article.

Funding: Two of us (RC and WS) received financial support from the Center for Environmental Research and Earth Sciences (CERES), http://ceres-science.com/, while carrying out the research for this paper. The aim of CERES is to promote open-minded and independent scientific inquiry. For this reason, donors to CERES are strictly required not to attempt to influence either the research directions or the findings of CERES.

Acknowledgments: We acknowledge the World Climate Research Program’s Working Group on Coupled Modelling, which is responsible for CMIP, and we thank the climate modeling groups (listed in Table 1 of this paper) for producing and making available their model output. For CMIP the U.S. Department of Energy’s Program for Climate Model Diagnosis and Intercomparison provides coordinating support and led development of software infrastructure in partnership with the Global Organization for Earth System Science Portals. We are grateful to Prof. Geert Jan van Oldenborgh for creating and maintaining the KNMI Climate Explorer website, https://climexp.knmi.nl/, which we used to obtain the CMIP5 hindcasts/projections data and to the Rutgers University Global Snow Lab, https://climate.rutgers.edu/snowcover/, for constructing and maintaining their Northern Hemisphere snow cover datasets.

Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

Available in the full PDF of the paper here: https://www.mdpi.com/2076-3263/9/3/135/pdf

© 2019 by the authors. Submitted for possible open access publication under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

49 thoughts on “Northern Hemisphere snow cover trends (1967-2018): A comparison between climate models and observations”

  1. Accuweather is calling this winter the “wettest” on record …
    all time snowfall records set …
    children won’t know what snow is …

    • I haven’t heard anyone yet blame the heavy snows this winter on- drum roll — CLIMATE CHANGE. Let me be the first to blame climate change for the snow

      • [“I haven’t heard anyone yet blame the heavy snows this winter on- drum roll — CLIMATE CHANGE. Let me be the first to blame climate change for the snow”]

        Sorry, but Michael Mann beat you to it:
        “Despite the antics of climate change-denying politicians … the increased snowfall amounts associated with record-strength Nor’easters (and ‘bomb cyclones’) is symptomatic of, rather than evidence against, human-caused planetary warming,”
        https://www.ecowatch.com/flooding-nebraska-bomb-cyclone-2631998694.html?rebelltitem=5#rebelltitem5

          • The IPCC and its paid government scienti…ahhh accomplices made the turn from Global Warming to Climate Change right before they funded the Dennis Quaid docu-drama “day after tomorrow” or some such science fiction/fantasy.

          At that point they staked out their new territory: anything they could sensationalize could be pinned to the new Climate Change moniker, and “see we told you so” became their new motto.

  2. Fantastic work! Even a cursory examination of the Rutgers Snow Lab data shows that snow cover hasn’t declined since the end of the Ice Age Cometh era (late 1970’s to early 1980’s).

    • I agree, nice detailed work. Wavelet analysis demonstrates that NH Snow Cover exhibits natural oscillation cycles; a strong continuous 7-8 year event pre-2000 and higher frequency discontinuous 2-3 year cycles post 2000. Climate models do not recognize these. Also, around this timeframe autumn and winter snow cover begins to increase significantly diverging from spring and summer trends.
      https://i.imgur.com/lUG6m9o.png

    • Tom, that’s an assumption on your part.

      The ways the models are wrong are so many. And the models’ biggest problem is not the models’ assumptions in their construct per se.
      It is the human hands behind the curtain pulling a given model’s levers, and then claiming to believe the output as faithfully representing some future climate state, that I find indefensible.

      • “One short-cut which regional and global climate modellers could use to potentially improve the reliability of their snow cover projections is to apply “bias corrections” to bring the hindcasts more in line with the observations. This is a technique which has now become a standard procedure in climate change impact studies, e.g., see Ehret et al. (2012) [60]. However, we agree with Ehret et al. (2012) that any such bias corrections should be made clear and transparent to the end users [60].”

        What bothers me here is, what is it?

        What is the model for? An annual snow forecast?

        Or is it some laughable pretense of, “climate forecasting”?

        Actual climate-change occurs on a time-scale that no one needs a forecast for, let alone could plan 200 years ahead based upon a possible blip in the climate trend, nor hope to test and debunk (i.e. develop) such a model, in any practical way to improve it. The time scale doesn’t allow that.

        So … what is the model for? What’s the use of it when it needs fudging just to match WX OBSERVATIONS and the fudge is admittedly standard practice? Why is it so, at all? What’s even the point when simply looking at observational stats and a trend-line provides the same thing on an annual basis?

        I take weather models seriously as they have none of those testability problems, they get tested and improved all the time, but climate models are being openly fudged with weather data, then pretend to be relevant in some way to ‘climate studies’! It’s a joke. This is not even science. It’s a bunch of quacks pretending climate models can predict actual climate-change trends via simply fudging them with weather-noise to make them not look so hopeless and incapable of doing any of what they aspire to be, or to do.

        The only way to get a ‘climate model’ to predict anything that’s actually quasi-‘useful’ for human planning is to develop a time-machine first so you can test it. But if you had a time machine you wouldn’t need a climate model at all … doh!

        i.e. cut all public funding to climate modeling and put it into weather modeling, or just don’t spend it. Oh, and repossess all those misappropriated and misallocated supercomputers, from those ‘climate research’ time-wasting geeks.

        • WXcycles, I agree with most of your comment, however the current climate change cycle of any importance is about 110,000 years. The Earth is currently in a five-million-year-long Ice Age, with Glacial Cycles lasting about 100,000 years and inter-Glacial Cycles about 10,000 years (with expected variance). We are currently very near the end of the Holocene Inter-Glacial, so, unless you are an avid skier, enjoy it while it lasts.

          • Ron Long

            “We are currently very near the end of the Holocene Inter-Glacial, so, unless you are an avid skier, enjoy it while it lasts.”

            Can you give us a source that supports your claim?

            Here is for example a source contradicting your assumption:
            https://www.sciencedirect.com/science/article/pii/0033589472900567

            I cite:

            Abstract

            We are now living under interglacial climatic conditions, the Present Interglacial or Flandrian Interglacial Age.

            It will certainly be followed by the Future Ice Age. The major cold/warm changes seem to have a cyclicity of 10,500 yr.

            We have been in the second cycle (characterized by cooler climate) after the Last Ice Age for 2200 yr and will continue to be so for another 8300 yr.

            By analogy with the conditions during the Last Interglacial it is concluded that this cycle will remain moderately warm.

            With the end of the third cycle at about 18,800 years AP, the Present Interglacial will end and the First Future Glacial Age begin.

            Further information about the climatic conditions during the “cold” cycle 117,700–107,200 y. a. is necessary, however, before a really well-founded prediction can be made.

            Maybe you have a better source?

          • Bindidon, no one “Can [] give us a source that supports your [ with Glacial Cycles lasting about 100,000 years and inter-Glacial Cycles about 10,000 years (with expected variance) ] claim?”
            ____________________________________

            Ask again after ~20,000 years.

        • YOU ARE RIGHT. The whole exercise is pointless. Better to spend the money looking for Earth orbit crossing asteroids. One of these could actually be an existential threat and predicting the possibility of an Earth strike a much more precise science. Of course if there are as many liars there as is in climate “science” that could be wasted funding as well. You may have noticed that I have lost almost all my childlike faith in “science.”

        • I have to agree with Ron Long’s ‘nearing the end of this interglacial’.

          I have too many photos taken over more than a few years, of snow falling very late in the spring and very early in the fall, to think otherwise. It isn’t only when it starts and ends, as much as the volume of precipitation and where it lands that also count.

          • Sara

            “I have too many photos taken over more than a few years, of snow falling very late in the spring and very early in the fall, to think otherwise.”

            Where do you live, Sara?

            Here in Northern Germany, we had no snow for the third winter in sequence, and a centennial summer following an extremely mild winter, and followed by again a very mild one.

            With the exception of Denmark, all Scandinavian countries experienced unexpected warmth. Even UK surprisingly did too.

            Similar contrasts were observed in Southern Europe: while Spain is becoming warmer and warmer, the average temperatures in Greece are falling down.

            Thus: “Rien n’est simple, tout est compliqué!” (https://tinyurl.com/y5atcad2)

    • Reality is only one whereas models are many, in this case 15.

      By applying simple logic, at least 14 models are wrong, and if you compare models to reality, all 15 are wrong.

  3. When I see CERES, I think of the deep pocketed Green Blob coordinating center of climate corruption that is CERES.org.

    That CERES has a mission that reads: “At Ceres, we work to advance sustainability leadership among investors, companies and capital market influencers to drive solutions and take stronger action on the world’s biggest sustainability challenges, including climate change, water scarcity and pollution, and human rights abuses.”
    It is the corrupt nexus bringing together Green money, Blue state employee retirement funds invested for virtue signalling, and social justice Liberal causes.

    The CERES-science group however, “the Center for Environmental Research and Earth Sciences (CERES), is a multi-disciplinary and independent research group. The aims of CERES are to address important issues in the fields of environmental and earth sciences. The group strives to foster original and timely scientific understanding, in addition to re-examining old analyses with fresh insights. We hope to illuminate, enhance, and resolve new and open issues.”

    The former is blatantly partisan in both politics and ideology. CERES.org openly advocates for both climate alarmist issues and social justice issues. They wear the partisanship badge with pride.
    The latter, CERES-science, is working very hard to be non-partisan: to evaluate climate model outputs and other alarmist claims against the observations (as here in this manuscript).

    The sad part is CERES.org has far more money to throw around. But even though they claim to be about sustainability, you will not find one bit of support from CERES.org for nuclear power. That fact tells all one needs to know about CERES.org ‘s Big Lie of sustainability claims. (a search for the term “nuclear power” on its web site gives the top return as a 3-year old report of CERES commending PG&E for deciding to mothball Diablo Canyon NP plant in Cali).

    What this simply tells me is that CERES.org needs and depends on climate and energy scarcity to be a partisan induced issue. For in partisanship and energy scarcity one can divide and conquer in the pursuit of political power.

    • When I see CERES I assume NASA’s Cloud and the Earth’s Radiant Energy System. I did not know of CERES-Science.com until this paper or CERES.org till the above post.

      Irrespective of the various CERESs, I consider this paper stands on its merits. Typically any output from climate models forecasts steadily increasing or decreasing data sets, depending on what is deemed bad for the variable. The period of tuning is the only region where they have some variability that is close to the input data. Beyond that there is no variability, just steady increase or decrease.

      This paper reaches the same conclusion as any other granular analysis of data from climate models, albeit the wording of their weaknesses is kind. A more common description is typically heaped, smells and steams.

      I am looking forward to see how the additional solar data is handled in the CMIP6 runs. The fact that more solar variables are included is recognition that CO2 is not the sole climate control knob.

  4. My theory is the more snow that melts ends up in our oceans where it evaporates into the upper atmosphere where it cools and turns to snow. So the more snow the more warming we are having. The colder it is the hotter it is. Now i need lots more grant money to fill in some blanks. 10 mill a year for the rest of my life should do plus a few trips overseas to study my brilliant theory.
    Ps. I went to sarcasm university and graduated with honours.

  5. I pointed out in another thread that a linear regression analysis of the so-called “global average temperature” was misleading as to what was actually happening, very akin to this analysis of snow cover. A second or even third order regression would give far more information on what was actually going on, especially when considering short-term recent data which are better indicators of possible future trends than a long-term linear regression. A higher order regression will more completely show the step function that the “global average temperature” saw in the late 90’s and early 00’s coupled with a slight increase after the step. A linear analysis just way over-emphasizes the impacts of the step function on future trends by artificially increasing the slope of the linear trend. Figures can lie and liars can figure!

    • You forget that climate change alarmism is Marketing 101. They must use the KISS principle to dupe the ignorant masses.
      If you start talking 2nd, 3rd, higher order polynomial regression fitting their eyes glaze over.
      Effective Propaganda requires the Big central lie to be simple and hammered home over and over. With no uncertainty. That is what the climate hustle does.

  6. “Also, it was noted that the climate models did at least qualitatively replicate the overall annual cycle, i.e., the fact that snow cover increases in the autumn and winter and decreases in the spring and summer.”

    Whoop-de-whoop, what a surprise! Nothing wrong with the models then…

  7. “The fact that the climate models expect snow cover trends to be dominated by a human-caused global warming is confirmed by the formal “detection and attribution” studies of spring snow cover trends [15,17].”
    The fact that the models work on RCP’s to predict the warming is based on the erroneous assumption that human emissions control the CO2 content in the atmosphere. Even if the rest was accurate, which this paper shows to be false, there would be no information in their projections for policy makers.
    In my opinion this is fine work and demonstrates the weakness of the models for policy but the all important fact that human emissions are not responsible for any of the CO2 related affects needs to be stressed as well.

    • Actually snow area is probably one of the least autocorrelated climate variables, at least during interglacials.

      • Do you have a reference – can’t see anything that suggests it isn’t an issue? Anyway easy enough to test for.

  8. Figure 11 Changes in “linear trends” depending on the start and end years chosen

    End year =1960 starts from positive zone in y-axis
    End year =1980 starts near zero on y-axis
    End year =2000, 2012 & 2018 all start from negative zone on y-axis

    Shouldn’t all the lines start at 0 (zero) on the y-axis in the year 1922?

  9. Please, someone who knows about these things explains to an ignoramus. In setting up a computer model to predict things, would it not be usual to make sure that it could reproduce the historic data? Is this normal practice and if so, what went wrong here?

    • Historical data-matching models are not very good at predicting future results in a chaotic system like the Earth. Like all the financial gurus say: “past performance is no guarantee of future performance”.

      If the fundamental science isn’t there in the models then the models aren’t really worth much, at least in my opinion. If the models can’t predict what is going to happen in the next 10 years then how can they predict what is going to happen in the next twenty years? Fifty years? 100 years? And we know the models have been very bad at accurately predicting even the next ten years! For instance, we’ve been told since 2000 to expect global crop failures resulting in mass starvation. Yet four out of the last five years set consecutive records for global grain harvests! Last year probably would have set another record except for *cool*, wet weather in Brazil and Siberia! Yet the climate alarmists just shrug this off and say: “wait till next year! We will be proven correct!”

    • Susan,

      Yes they should, but it needs to be done the right way. If you build a “model” and then start tweaking coefficients and parameters to get the match, all you’ve done is a mathematical curve fit that has no predictive value as Tim mentioned.

      To do it properly you have to add or remove terms and boundary conditions until you come closer to reality. In other words, if the CO2 variable alone isn’t working, add an equation for variable TSI; if that isn’t working, add an equation for variable orbital position; and if that isn’t working, add an equation for variable water content in the atmosphere, etc. It is acceptable to use parameterized variables, but those parameters need to be determined and set through independent experimentation (empirical analysis) performed through the entire range of expected model output, then that parameterized equation is placed in the model. In other words, you completely rebuild the model.

  10. It just suddenly dawned on me that we need 10,000 models each running 10,000 runs in order to properly forecast the future. That way we can have a graph with temperatures from fireball earth to iceball earth and can claim that we have an accurate PROJECTION of what will happen.

  11. Models are not even normally distributed. What is the point in calculating averages?
    All of them are wrong.

  12. Has anyone ever seen a paper that did not end with ‘more research is needed’ ie send more money?

  13. The models…all of them…predict more snow in the fall and winter than spring and summer.

    Did they really report that?

  14. So this massive coverage of high albedo surface area is reflecting uncountable amounts of solar heat! Thus we should see a significant DROP in worldwide temperatures (we’ve been WARNED that LOSS of snow cover would overheat the planet due to an increase in low albedo surface area). But of course we won’t … this year and next will become the 3rd and 2nd HOTTEST years on record. You can COUNT on it … it’s what the CAGW LIARS do … LIE and distort.

  15. For a model to be believed for long-term projections, it has to predict near-term results (i.e., match observations) quite closely.

    If the long term results (outputs) are validated by observations, then the model can be claimed to produce predictions.

    Let’s not be hasty. So far, the models do nothing right save by chance, or so it appears. If 100 things are predicted and 5 of them are spot on, it proves nothing about the model, even if all of them are concentrated on one portion of a year. That could be an accident too.

    If you read a broken clock twelve times a day, it will be “close” 17% of the time. Is that impressive?

  16. Possible explanation. A warming Arctic brings more precipitation. In cold autumn and winter this survives.
    But warmer spring and summer melts more of that snow. But spring-summer is when Arctic insolation maximizes. How much does less snow-cover decrease surface albedo?

    • This might be true for Canada, which lies north of the jet stream. But for most of the continental US the moisture that becomes snow seems to originate either in the Pacific, the Gulf, or the North Atlantic, i.e. south of the jet stream. Cold fronts may come from the North but the moisture doesn’t seem to, at least according to most weather reports during the winter. I’m not sure your explanation works.

  17. More snow, less snow, no snow…..no matter what condition is observed it will always be linked to CO2 by the climate changers. Are we still expected to believe the climate models that forecast much warmer weather ahead for planet Earth when short-term weather models can’t get it right a couple of days out?

  18. At least something is clearly correct in this otherwise rather fantastically chaotic “study”:

    “the four Northern Hemisphere seasons: winter (December, January and February), spring (March, April and May), summer (June, July and August) and autumn/fall (September, October and November).”

    “Separate analyses were conducted for the annual averages and for each of the seasons (winter, spring, summer and autumn/fall).”
    ___________________________________________________

    With “climate science” it’s refreshing when there’s something. Right.
