I discovered this climate model failure a while ago, but haven’t published a post about it because a direct comparison of the modeled and observed sea ice area for each hemisphere would require too many approximations and assumptions. The reason: the NSIDC sea ice area data available through the KNMI Climate Explorer are presented in millions of square kilometers, while the CMIP5-archived model outputs there are presented as a sea ice fraction, presumably a fraction of the ocean area for the input coordinates.
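For anyone who wants to attempt that conversion anyway, the following is a minimal sketch of the kind of calculation involved, assuming the model’s sea ice fraction is available on a grid along with the ocean area of each grid cell. The function and array names are hypothetical placeholders, not part of the KNMI Climate Explorer output.

```python
# Hypothetical sketch: converting a CMIP5-style sea-ice fraction field into a
# hemispheric sea-ice area in millions of square kilometers.
import numpy as np

def hemispheric_ice_area(ice_fraction, cell_area_km2, lat, northern=True):
    """ice_fraction: 2-D (lat, lon) array, 0..1 fraction of each cell covered by ice.
    cell_area_km2: 2-D (lat, lon) array of ocean area per grid cell in km^2 (land = 0).
    lat: 1-D array of latitudes matching the first axis."""
    rows = lat >= 0 if northern else lat < 0              # pick one hemisphere
    area_km2 = np.nansum(ice_fraction[rows, :] * cell_area_km2[rows, :])
    return area_km2 / 1.0e6                               # millions of km^2, like NSIDC
```

Even with something like this, the result hinges on whether the archived fraction refers to the whole grid cell or only to its ocean portion, which is exactly the kind of assumption I did not want to make.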
I decided to take a simpler approach with this post—to show whether the models simulate a gain or loss in each hemisphere.
That is, we know the oceans have been losing sea ice in the Arctic since November 1978, but gaining it around Antarctica. See Figure 1.
Figure 1
Then there are the oodles of climate models stored in the CMIP5 archive. They’re the models being used by the IPCC for the upcoming 5th Assessment Report. Would you like to guess whether they show the Northern and Southern Hemispheres should have gained or lost sea ice area over the same time period?
The multi-model ensemble mean of their outputs indicates that, if sea ice area were dependent on the increased emissions of manmade greenhouse gases, the Southern Ocean surrounding Antarctica should have lost sea ice from November 1978 to May 2013. See Figure 2.
Figure 2
Well at least the models were right about the sea ice loss in the Northern Hemisphere. Too bad for the modelers that our planet also has a Southern Hemisphere.
We could have guessed that the models simulate a loss of sea ice around Antarctica based on their simulation of sea surface temperatures in the Southern Ocean. As illustrated in the most recent model-data comparison of sea surface temperatures, here, sea surface temperatures in the Southern Ocean have cooled (Figure 3), while the models say they should have warmed.
Figure 3
STANDARD BLURB ABOUT THE USE OF THE MODEL MEAN
We’ve published numerous posts that include model-data comparisons. If history repeats itself, proponents of manmade global warming will complain in comments that I’ve only presented the model mean in the above graphs and not the full ensemble. In an effort to suppress their need to complain once again, I’ve borrowed parts of the discussion from the post Blog Memo to John Hockenberry Regarding PBS Report “Climate of Doubt”.
The model mean provides the best representation of the manmade greenhouse gas-driven scenario—not the individual model runs, which contain noise created by the models. For this, I’ll provide two references:
The first is a comment made by Gavin Schmidt (climatologist and climate modeler at the NASA Goddard Institute for Space Studies—GISS). He is one of the contributors to the website RealClimate. The following quotes are from the thread of the RealClimate post Decadal predictions. At comment 49, dated 30 Sep 2009 at 6:18 AM, a blogger posed this question:
If a single simulation is not a good predictor of reality how can the average of many simulations, each of which is a poor predictor of reality, be a better predictor, or indeed claim to have any residual of reality?
Gavin Schmidt replied with a general discussion of models:
Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will be uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean).
To paraphrase Gavin Schmidt, we’re not interested in the random component (noise) inherent in the individual simulations; we’re interested in the forced component, which represents the modeler’s best guess of the effects of manmade greenhouse gases on the variable being simulated.
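A quick synthetic example makes the point concrete. Everything below is invented purely for illustration; none of it is model output.

```python
# Toy demonstration: averaging many realizations of "forced signal + noise"
# recovers the forced signal, while any single realization remains noisy.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(420)                                        # e.g. 35 years of monthly steps
forced = 0.002 * t                                        # an arbitrary prescribed "forced" trend
runs = forced + rng.normal(0.0, 0.3, size=(50, t.size))   # 50 noisy realizations

ensemble_mean = runs.mean(axis=0)
print(np.abs(runs[0] - forced).max())          # single run: large departures from the trend
print(np.abs(ensemble_mean - forced).max())    # ensemble mean: hugs the forced trend
```

A single realization wanders well away from the prescribed trend, while the average of the fifty realizations tracks it closely.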
The quote by Gavin Schmidt is supported by a similar statement from the National Center for Atmospheric Research (NCAR). I’ve quoted the following in numerous blog posts and in my recently published ebook. Sometime over the past few months, NCAR elected to remove that educational webpage from its website; luckily the Wayback Machine has a copy. NCAR wrote the following on that FAQ webpage, which had been part of an introductory discussion about climate models (my boldface):
Averaging over a multi-member ensemble of model climate runs gives a measure of the average model response to the forcings imposed on the model. Unless you are interested in a particular ensemble member where the initial conditions make a difference in your work, averaging of several ensemble members will give you best representation of a scenario.
In summary, we are definitely not interested in the models’ internally created noise, and we are not interested in the results of individual responses of ensemble members to initial conditions. So, in the graphs, we exclude the visual noise of the individual ensemble members and present only the model mean, because the model mean is the best representation of how the models are programmed and tuned to respond to manmade greenhouse gases.
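For readers wondering how such a multi-model mean gets built, one common approach (and I am not claiming it is exactly what the Climate Explorer does) is to average the runs within each model first and then average across models, so that models with many runs do not dominate. The inputs below are hypothetical placeholders.

```python
# One possible way to assemble a multi-model ensemble mean: average the runs
# within each model, then average across the per-model means.
import numpy as np

def multi_model_mean(runs_by_model):
    """runs_by_model: dict mapping model name -> 2-D array of shape (runs, time)."""
    per_model_means = [np.asarray(runs).mean(axis=0) for runs in runs_by_model.values()]
    return np.mean(per_model_means, axis=0)   # an ensemble mean of the kind shown in the figures
```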
CLOSING
Just add sea ice onto the growing list of variables that are simulated poorly by the IPCC’s climate models. Over the past few months, we’ve illustrated and discussed that the climate models stored in the CMIP5 archive for the upcoming 5th Assessment Report (AR5) cannot simulate observed:
Satellite-Era Sea Surface Temperatures
Global Surface Temperatures (Land+Ocean) Since 1880
And in an upcoming post, we’ll illustrate how poorly the models simulate daily maximum and minimum temperatures and the difference between them, the diurnal temperature range. I should be publishing that post within the next week.
Something new for the models
“Sulfate aerosols cool climate less than assumed”
http://www.mpic.de/en/press/press-information/max-planck-institute-for-chemistry-broadens-network-to-india/sulfate-aerosols-cool-climate-less-than-assumed.html
Gavin Schmidt replied with a general discussion of models:
Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will be uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean).
================
chaos cannot be averaged in this fashion. noise can be averaged because it is randomly distributed positive and negative. the law of large numbers tells us that as the sample size increases, the noise will average out to zero.
however, it is well recognized that weather is chaotic. when you average weather to get climate you are not averaging noise, you are averaging chaos. while chaos looks like noise, it is not. chaos is not subject to the law of large numbers. it lacks the constant mean and deviation required for the law of large numbers to hold.
thus, when you try and average chaos over time it does not average out to zero. rather it wanders in an unpredictable fashion. this leads to spurious (false and misleading) trends when you try and do regression analysis (fit a trend line) on the data. What looks like a real trend (warming or cooling) is simply the orbit of the system around its attractors. not real trends at all as we typically think of them. rather cycles that never repeat identically. snowflakes that all look similar, yet no two are the same.
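For what it is worth, the contrast described above can be illustrated numerically. The snippet below compares the running mean of independent noise, which does settle toward zero, with the mean of a plain random walk, used here only as a stand-in for a wandering, non-stationary series rather than as any claim about how the climate system behaves.

```python
# Running means: independent noise vs. a non-stationary random walk.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
noise = rng.normal(0.0, 1.0, n)                 # independent, zero-mean noise
walk = np.cumsum(rng.normal(0.0, 1.0, n))       # wandering, non-stationary series

print(noise.mean())   # close to 0: the law of large numbers applies
print(walk.mean())    # typically far from 0, and it changes with the sample length
```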
Pamela Gray says:
June 15, 2013 at 8:19 am
The runs are completed to demonstrate this noise and its average anomaly. Which should eventually cancel to 0 if enough runs are completed.
==============
however, in a chaotic system, the “noise” only averages out to zero at infinity.
the problem in the climate models is they are based on a faulty mathematical assumption. they assume that weather (chaotic) will average out over 30 years into something that is not chaotic and thus might be predictable.
however, you cannot average chaos in this fashion except over very short time periods when the system is locally orbiting an attractor. the longer the system runs, the more likely it is to wander off towards another attractor, rendering your carefully calculated average meaningless.
For example, when you look at the earth’s average temperature over the past 80 years you will get one number, but if you increase the time period to 8 thousand years to include all of the Holocene, you will get a higher number, and if you further increase the time period to 80 thousand years to include the previous ice age you will get a lower number. If you increase the time period to 80 million years, you will get a higher number.
which of these four numbers is the correct average temperature of the earth? they are all different. which one is correct? if they are all correct, how can we calculate a meaningful average? That is the problem with chaotic systems. As you increase the scale you get a different result. And it keeps changing all the way out to infinity.
Doug Proctor says:
June 15, 2013 at 11:48 am
When we look at an ensemble of outcomes, i.e. Scenarios, we see the variability dependent on specific situations that arise, the various situations representing either the noise or the potential variation in important parameters. The observations we receive represent one, specific situation, which involves both fundamental, unchanging aspects, i.e. radiative forcings of various kinds, and specific instances of the variables. What we see may not be the mean, though, but one of the recognized low potential Scenarios.
In other words, when we see the observations from 1979 to 2013 match the lowest IPCC Scenario, close to “C”, we see that observations come closer to the 5% chance, but that does not mean that the mean is incorrect. What happened is 100% by occurrence, but was recognized as 5% by procedure. We could also have had the top 5%, i.e. Scenario A+, without the mean being incorrect. Each 5% would simply indicate that the variables, not the fundamentals, conspired to produce what they did. Again, the results do not invalidate the mean.
Not sure what you are calling the “C” IPCC scenario; it would help if you spelled that out.
“that does not mean that the mean is incorrect”
How do you come to that conclusion?
” We could also have had the top 5%, i.e. Scenario A+, without the mean being incorrect.”
I doubt very much it could be that way.
To my understanding the scenarios are not different runs with the same inputs and different outcomes, but runs with different input parameters.
How much CO2 emission really occurred? If the scenarios vary with the CO2 emissions, then your logic is very wrong.
To my understanding the human CO2 emissions have exceeded all scenarios, while the temperature has underperformed them all.
This shows a total disconnect between the scenarios and nature. Remember, the scenarios do not run the physics itself. The scenarios have functions that, according to the scenario programmers, best emulate the result of a combination of different processes, some of which are not yet understood and not tested in practice.
One can run a model many times. If none of the runs emulates the current temperatures, the model is to be scrapped and should not participate in computing the mean.
sorry messed up “models” with “scenarios” above, but you get it…
Bob Tisdale says:
June 15, 2013 at 4:09 pm
You really should learn how to use the KNMI Climate Explorer:
——————————————————————————–
That looks like a great tool. There is so much reading and info to assimilate, though. I would also like to restart my math skills. That will be a major endeavour. Math was my strong suit back in my school days, mostly A’s. I could use that level of mental exercise right about now. On my SAT in the 60s, the math side was about 30 points higher than my reading comprehension.
Lars P.
The model mean is simply the result of looking at all possible outcomes and finding the most common, average-not-extreme path. IF any actual event could be an extreme, the 5% event, the model mean is still real in a mathematical sense. Going forward, however, the model mean only has future meaning if all Scenarios going forward still have the same probability of happening as they did in the beginning. That means that we could still go to +3C in 2100 from today.
I’m arguing that PROCEDURALLY what we have seen in AR5 Figure 4 is correct, and the mean is just one of the possible outcomes, though statistically closer to what is likely to happen than the outside: Scenario C being the low outside. REPRESENTATIONALLY, however, global temperatures tracking at the low end may say that it is incorrect to consider it just a low-probability event that actually occurred (the 1 in 20 poll that was bizarre).
Now that we have seen the northern and southern ice behave as they have, although this situation may have been one of the outcomes in the IPCC story, to get to a world-wide flood, we have to change this current situation significantly. What Scenario does this? Do ANY of the IPCC models take us along the path we have taken AND get us to the Deluge?
I do not argue about the math. That argument seems futile, as the PROCEDURE is neither correct nor incorrect in itself, it just is. The procedure may be, and is, inappropriate to tell us what will happen next, however.
The big question is, how do we get to there from here? Can we, and if we cannot, then why are we still looking at Scenario A with its loss of polar ice and the drowning of continental margins?
Thanks Bob – figure 1 is as good a signature of the bipolar seesaw as I have seen. The bipolar seesaw could, according to some (e.g. Tzedakis), signify the approaching end of the current interglacial.
See also this Tzedakis paper: http://www.clim-past.net/8/1473/2012/cp-8-1473-2012.pdf
which was the subject of a thread here by William McClenney:
http://wattsupwiththat.com/2012/10/02/can-we-predict-the-duration-of-an-interglacial/
Fred, agreed. But for the purposes of 200 years out, we don’t need to go back 20 zillion years. Maybe 400. Maybe less. But certainly not just 60 years, or even 80 years. That would not be enough to take into account all the possible natural intrinsic drivers of weather pattern variations.
ferd berple says:
“the problem in the climate models is they are based on a faulty mathematical assumption. they assume that weather (chaotic) will average out over 30 years into something that is not chaotic and thus might be predictable.”
The problem is that weather is not merely internal variation, but is largely externally forced by short term solar factors. That is exactly why it is meaningless to average out 30yrs worth to look for a CO2 signal. With extensive hind-casting and 5+yrs of producing solar based weather forecasts, I can guarantee that weather is far from chaotic, and is highly predictable.
Greg Goodman says:
June 16, 2013 at 1:17 am
Rather than producing an equally meaningless “trend” for the other end of the planet from a one-day-per-year series, ignoring 364/365ths of the available data, maybe we should be using ALL the daily data to show the complementarity of the poles.
I disagree. With sea ice, the same cause, say cloud changes, can have effects with the opposite sign between summer/winter and day/night. And looking at minimum and maximum effects helps differentiate between summer and winter effects.
For example, comparing Arctic sea ice min and max area/extent changes shows the loss of ice is wholly a summer effect. No amount of manipulating 365 days of data would show you that.
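For what that min/max comparison looks like in code, here is a minimal sketch; the inputs are hypothetical placeholders rather than a specific dataset.

```python
# Sketch: trends in the annual minimum and maximum of a daily sea-ice extent series.
import numpy as np

def annual_min_max_trends(year_of_day, daily_extent):
    """year_of_day: 1-D int array giving the year of each daily value.
    daily_extent: 1-D array of daily sea-ice extent (e.g. millions of km^2)."""
    years = np.unique(year_of_day)
    mins = np.array([daily_extent[year_of_day == y].min() for y in years])
    maxs = np.array([daily_extent[year_of_day == y].max() for y in years])
    min_slope = np.polyfit(years, mins, 1)[0]   # trend of the annual minimum
    max_slope = np.polyfit(years, maxs, 1)[0]   # trend of the annual maximum
    return min_slope, max_slope

# A much steeper downward slope in the minimum than in the maximum would point
# to the loss being concentrated in the melt season.
```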
————————————————–
Thanks Bob. I think it gave me the data I wanted.
Ulric Lyons says:
June 16, 2013 at 4:38 pm
ferd berple says:
“the problem in the climate models is they are based on a faulty mathematical assumption. they assume that weather (chaotic) will average out over 30 years into something that is not chaotic and thus might be predictable.”
The problem is that weather is not merely internal variation, but is largely externally forced by short term solar factors.
It’s a mixture of the two – the system is likely to be a weakly forced nonlinear oscillator, or a set of oscillators. There is external forcing, and also internal nonlinear dynamics. Due to the weakness of the forcing it might be hard to impossible to resolve the forcing signal from the emergent wavetrain, at least using traditional methods. (Strong forcing means that you have a regular monotonic signal, like summer-winter, spring and neap tides. We don’t see this, thus the forcing is weak and complex.)
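As a rough illustration of that picture, here is a weakly driven van der Pol oscillator, offered only as a generic example of a weakly forced nonlinear oscillator and not as a model of the climate system.

```python
# A self-sustained nonlinear oscillator with a weak external drive.
import numpy as np
from scipy.integrate import solve_ivp

mu, amp, omega = 2.0, 0.05, 1.1          # 'amp' is the weak forcing amplitude

def rhs(t, y):
    x, v = y
    return [v, mu * (1.0 - x**2) * v - x + amp * np.cos(omega * t)]

sol = solve_ivp(rhs, (0.0, 500.0), [0.1, 0.0], max_step=0.05)

# The output is dominated by the oscillator's own limit cycle; with the drive
# this weak, picking the forcing frequency out of the resulting wavetrain is
# difficult, which is the point being made above.
```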
@phlogiston
The external forcing is an event series, not oscillatory; it is very strong, and is directly responsible for most short-term land temperature deviations and teleconnection statuses, including ENSO.
Bob Tisdale…I notice that once again the daily sst information did not come out. Do they close on Sundays now? The last pic I have is from the 15th. There is no 16th and today is the 17th. This also happened last week.
Also, that Arctic sea ice line is sure staying high as compared to last year. Maybe I should have stuck with 6.0+ as my prediction.
Interesting, the Unisys sst chart for the 17th was skipped and the 18th shows quite a change around southern Greenland. That is the second missing day of data this month.