Model-Data Comparison: Hemispheric Sea Ice Area

I discovered this climate model failure a while ago, but I haven’t published a post about it until now because a direct comparison of the modeled and observed sea ice area for each hemisphere would require too many approximations and assumptions. The reason: the NSIDC sea ice area data available through the KNMI Climate Explorer are presented in millions of square kilometers, while the CMIP5-archived model outputs there are presented as sea ice fraction, presumably a fraction of the ocean area at the input coordinates.
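
For anyone who wants to attempt the conversion anyway, here is a minimal sketch, in Python, of what it would involve. Everything in it (the variable names, the regular lat-lon grid, the ocean-fraction input) is an assumption for illustration, not the method used by KNMI or NSIDC:

```python
# Hypothetical sketch: convert a CMIP5 sea-ice fraction field into a total
# sea ice area in millions of square kilometers. A real script would read
# the model's sea-ice fraction field from a netCDF file.
import numpy as np

R_EARTH_KM = 6371.0

def cell_area_km2(lat_deg, dlat_deg, dlon_deg):
    """Area of a lat-lon grid cell in km^2; shrinks with cos(latitude)."""
    dlat = np.deg2rad(dlat_deg)
    dlon = np.deg2rad(dlon_deg)
    return R_EARTH_KM ** 2 * dlat * dlon * np.cos(np.deg2rad(lat_deg))

def total_ice_area_mkm2(ice_frac, lat_deg, ocean_frac, dlat=1.0, dlon=1.0):
    """ice_frac: 2-D sea-ice fraction (0..1) per cell; ocean_frac: the
    fraction of each cell that is ocean, which is exactly the quantity
    one would have to assume. Returns area in millions of km^2."""
    ocean_km2 = cell_area_km2(lat_deg, dlat, dlon) * ocean_frac
    return np.nansum(ice_frac * ocean_km2) / 1e6
```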

I decided to take a simpler approach with this post—to show whether the models simulate a gain or loss in each hemisphere.

That is, we know the oceans have been losing sea ice in the Arctic since November 1978, but gaining it around Antarctica. See Figure 1.

Figure 1

Then there are the oodles of climate models stored in the CMIP5 archive. They’re the models being used by the IPCC for the upcoming 5th Assessment Report. Would you like to guess whether they show the Northern and Southern Hemispheres should have gained or lost sea ice area over the same time period?

The multi-model ensemble mean of their outputs indicates that, if sea ice area depended on the increased emissions of manmade greenhouse gases, the Southern Ocean surrounding Antarctica should have lost sea ice from November 1978 to May 2013. See Figure 2.

Figure 2

Well, at least the models were right about the sea ice loss in the Northern Hemisphere. Too bad for the modelers that our planet also has a Southern Hemisphere.

We could have guessed the models simulated a loss of sea ice around Antarctica based on their simulation of the sea surface temperatures in the Southern Ocean. As illustrated in the most recent model-data comparison of sea surface temperatures (here), sea surface temperatures in the Southern Ocean have cooled (Figure 3), while the models say they should have warmed.

Figure 3

STANDARD BLURB ABOUT THE USE OF THE MODEL MEAN

We’ve published numerous posts that include model-data comparisons. If history repeats itself, proponents of manmade global warming will complain in comments that I’ve only presented the model mean in the above graphs and not the full ensemble. In an effort to suppress their need to complain once again, I’ve borrowed parts of the discussion from the post Blog Memo to John Hockenberry Regarding PBS Report “Climate of Doubt”.

The model mean provides the best representation of the manmade greenhouse gas-driven scenario—not the individual model runs, which contain noise created by the models. For this, I’ll provide two references:

The first is a comment made by Gavin Schmidt (climatologist and climate modeler at the NASA Goddard Institute for Space Studies—GISS). He is one of the contributors to the website RealClimate. The following quotes are from the thread of the RealClimate post Decadal predictions. At comment 49, dated 30 Sep 2009 at 6:18 AM, a blogger posed this question:

If a single simulation is not a good predictor of reality how can the average of many simulations, each of which is a poor predictor of reality, be a better predictor, or indeed claim to have any residual of reality?

Gavin Schmidt replied with a general discussion of models:

Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will [be] uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean).

To paraphrase Gavin Schmidt, we’re not interested in the random component (noise) inherent in the individual simulations; we’re interested in the forced component, which represents the modeler’s best guess of the effects of manmade greenhouse gases on the variable being simulated.

The quote by Gavin Schmidt is supported by a similar statement from the National Center for Atmospheric Research (NCAR). I’ve quoted the following in numerous blog posts and in my recently published ebook. Sometime over the past few months, NCAR elected to remove that educational webpage from its website. Luckily the Wayback Machine has a copy. NCAR wrote the following on that FAQ webpage, which had been part of an introductory discussion about climate models (my boldface):

Averaging over a multi-member ensemble of model climate runs gives a measure of the average model response to the forcings imposed on the model. Unless you are interested in a particular ensemble member where the initial conditions make a difference in your work, averaging of several ensemble members will give you best representation of a scenario.

In summary, we are definitely not interested in the models’ internally created noise, and we are not interested in the results of individual responses of ensemble members to initial conditions. So, in the graphs, we exclude the visual noise of the individual ensemble members and present only the model mean, because the model mean is the best representation of how the models are programmed and tuned to respond to manmade greenhouse gases.
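
As a toy numerical check of the point quoted above, here is a short Python sketch (with invented numbers, not actual model output) in which each "run" is the same forced signal plus independent noise; averaging 100 runs shrinks the noise by about a factor of sqrt(100) = 10:

```python
# Toy ensemble: forced signal + independent noise per run; the ensemble
# mean recovers the forced component.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(414)                       # roughly Nov 1978 - May 2013
forced = -0.002 * months                      # hypothetical forced trend
runs = forced + rng.normal(0.0, 0.25, (100, months.size))

print(np.std(runs[0] - forced))               # single-run error: ~0.25
print(np.std(runs.mean(axis=0) - forced))     # 100-run mean error: ~0.025
```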

CLOSING

Just add sea ice onto the growing list of variables that are simulated poorly by the IPCC’s climate models. Over the past few months, we’ve illustrated and discussed that the climate models stored in the CMIP5 archive for the upcoming 5th Assessment Report (AR5) cannot simulate observed:

Global Precipitation

Satellite-Era Sea Surface Temperatures

Global Surface Temperatures (Land+Ocean) Since 1880

And in an upcoming post, we’ll illustrate how poorly the models simulate daily maximum and minimum temperatures and the difference between them, the diurnal temperature range. I should be publishing that post within the next week.

Bloke down the pub
June 15, 2013 6:53 am

‘As illustrated in the most recent model-data comparison of sea surface temperatures, here, sea surface temperatures in the Southern Ocean have cooled, Figure 3, while the models say they should have warmed.’
I thought the Antarctic was meant to be melting due to warmer seas (cf. warmer air temps).
http://wattsupwiththat.com/2013/06/13/game-changer-antarctic-melt-due-to-warm-water-not-air-temperature/#more-88085
Presumably that’s why Antarctic sea ice is above normal.

Richard M
June 15, 2013 7:13 am

Warm AMO/NAO in the NH. Cold Southern Ocean in the SH. The amount of ice is based on the temperature of the ocean water. What an amazing revelation. Can we get a refund for all that wasted money on models?

Dr. Lurtz
June 15, 2013 7:15 am

As per previous “Scientific Publications”, they just realized that Ice Melt in the Antarctic is caused by Ocean Currents, not Atmospheric warming. How can you make a “model” when you don’t even know the basics of the physical processes???
Of course, one can always make a “statistical model” based on past information. BUT, as they say in Stock Prospectuses, “past performance does not guarantee future results”.
When will these “scientists” wake up to the fact that CO2 is not a problem, but a solution to growing more crops, and it has very little to do with “Global Warming”. As per this blog, over half of the perceived temperature increases are due to the “Urban Heat Island Effect”. We will find out if the Sun is the remainder, since we are in a grand observation of a “Quiet Sun”.

johnmarshall
June 15, 2013 7:18 am

Seems to be a coupled system: North down, South up, and vice versa. Could be due to multidecadal ocean current changes, which we just don’t fully understand, and seems to be a cycle of about 60–80 years. I can certainly live with that.

June 15, 2013 7:28 am

Thanks, Bob. Well said.
The NSIDC Sea Ice data can be seen at http://nsidc.org/data/seaice_index/

Count_to_10
June 15, 2013 7:32 am

Are we still talking about water-world models, or have they actually put continents, ocean currents, and mountains in yet?

Bill Illis
June 15, 2013 7:40 am

Good stuff, Bob.
The Arctic sea ice is the only variable which the models have got close to right. Obviously, it was just a fluke. When you only get 1 out of 13 key climate indicators right, it is by accident, not from good modelling.

June 15, 2013 7:50 am

Bob:
Why are these two graphs different, although they supposedly represent the same thing?:
No. 3 of this present article:
http://bobtisdale.files.wordpress.com/2013/06/figure-32.png?w=640&h=422
And No. 7 from this other article of yours (http://bobtisdale.wordpress.com/2013/02/28/cmip5-model-data-comparison-satellite-era-sea-surface-temperature-anomalies/)
http://bobtisdale.files.wordpress.com/2013/02/07-so-hem.png
Shouldn’t they be alike?

Otter
June 15, 2013 7:50 am

Perfect! I was just thinking about finding charts of both poles’ ice since 1979, and merging them together, just like figure 1. Got the idea from P. Gosselin’s site.

Reply to  Bob Tisdale
June 15, 2013 8:22 am

Bob said:
Nope. They’re not the same thing. One is the Southern Ocean (90S-60S), and the other is the Southern Hemisphere (90S-0).
Fool of me.

Jimbo
June 15, 2013 8:03 am

As illustrated in the most recent model-data comparison of sea surface temperatures, here, sea surface temperatures in the Southern Ocean have cooled, Figure 3, while the models say they should have warmed.

But I was told that the melting under the ice shelves caused Antarctica’s expanding sea ice!
http://dx.doi.org/10.1038/ngeo1767

Melt may explain Antarctica’s sea ice expansion
Climate change is expanding Antarctica’s sea ice, according to a scientific study in the journal Nature Geoscience.
The paradoxical phenomenon is thought to be caused by relatively cold plumes of fresh water derived from melting beneath the Antarctic ice shelves…….
http://www.bbc.co.uk/news/science-environment-21991487

Or are sea surface temps getting warmer in the SH summers? Ahhhhh. Grrrrrr.

Pamela Gray
June 15, 2013 8:19 am

I am well familiar with EKGs and MRIs. The premise is the same. Brains at rest are noisy with random synaptic activity. Brains that are listening to a signal or doing some kind of functional thinking will have a brainwave component rise out of the noise. I can build a model of this observation, or I can simply measure it in situ and let the software perform calculations on the data (basically adding and subtracting runs that are picked up by the electrode “listeners” over and over again) till the noise is cancelled out (which would mean 100s of runs and even 1000s of runs). If I performed this test on thousands of different human beings using the same signal and I see a similar pattern, I could say that brain waves can be measured and are not random when a brain is doing some kind of specific task (listening, looking, reading, doing math, thinking about a certain subject, etc.). At that point I no longer need the model. All I have to do is measure real brains. Unless the one I am measuring is mine. Apparently I have a non-random, very busy brain. A brainstem response to a click noise could not be located in the noise. My baseline brain at rest is too noisy and non-random.
Models must contain dialed-in assumptions about observations that have been purposely biased by the modeler. In other words, in this case, the modeler believes (or is testing his/her belief) that intrinsic and extrinsic drivers of weather pattern variations are random, having no step functions or echoed effects. So they are built to demonstrate this assumption. The runs are completed to demonstrate this noise and its average anomaly. Which should eventually cancel to 0 if enough runs are completed. When they do, the modeler believes he/she has natural climate just right. The anthropogenic dial is a constant factor built to be void of noise and tuned to observations of a previously observed trend thought to be caused by some kind of anthropogenic activity. The dial is added to the model. The runs are done again and abracadabra, a rising anomaly. The result should be no surprise to anyone.
The problem here is that we have only one subject: Earth. And a short number of runs (if we are doing yearly runs, 17 runs is very limited). The result so far is that our modeled temperature response is not matching the current observation. In medical circles this would neither prove nor disprove a model of brain activity. Maybe Earth is like Pam’s brain. Very busy and non-random.


Nik Marshall-Blank
June 15, 2013 8:42 am

IPCC AR5 is going to look like a joke without a funny ending. I’m 99% certain it will say we’re melting when everybody in the world will want cheaper heating and lots of it too.

John F. Hultquist
June 15, 2013 8:52 am

Bob Tisdale says:
June 15, 2013 at 7:36 am
“Bloke down the pub: I haven’t read the paper associated with that post. But the press release doesn’t mention warming ocean temperatures, as far as I could see.”

While the full paper may add something about warmer water, I don’t see that in the press release either. There is major MSM hype each time a big chunk of ice shelf gets a crack and then floats away. This research says a greater proportion of ice melts from the under side in contact with water. That is not surprising and it doesn’t make for good TV images of Manhattan sized bergs.
~~~~~~~
@ johnmarshall
“Seems to be a coupled system . . .
Looking at Bob’s Figure 1, it is easy to leap on to this wagon. At best, I’ve only got another 15 to 20 years so I hope someone figures this out before 2073.
~~~~~~~
Thanks, Bob.

Tom Jones
June 15, 2013 8:54 am

It is worth reading Robert G Brown’s treatise over at Judith Curry’s blog. He sometimes writes here as rgbatduke. He is of the opinion that taking the average of the CMIP ensemble is silly, that it has no meaning. The average of garbage is just average garbage.

Paul Vaughan
June 15, 2013 9:02 am

Idea for a future article:
How well do the models do at simulating equator-pole temperature gradients?
Suggestion: Break the analysis down geographically — e.g. western ocean boundaries (where gradients are steep) vs. eastern (where gradients are diffuse), etc.

Keith Gordon
June 15, 2013 9:06 am

A very interesting article from Bob, precise and to the point as always; it shows just how bad the models are at forecasting.
By my reckoning, there is over 1 million sq. km more sea ice in the Arctic than at this time last year, and more than in the last few years.
http://arctic.atmos.uiuc.edu/cryosphere/timeseries.anom.1979-2008
This may or may not continue, but the air temperature North of the 80th parallel is below average also.
http://ocean.dmi.dk/arctic/meant80n.uk.php
I know Joe Bastardi was forecasting a recovery; is this what we are now seeing? I wonder if this model forecast, the one they got right, is going to suffer the same fate as all the other “fails”.
Regards
Keith Gordon

Admin
June 15, 2013 9:20 am

Thanks Bob

Go Home
June 15, 2013 9:42 am

Bill Illis says: “When you only get 1 out of 13 key climate indicators right, it is by accident, not from good modelling.”
What are the 12 that have turned out wrong? It would be nice to have a summary list of all the model predictions, noting which are still in play and which have already failed miserably.

DirkH
June 15, 2013 9:51 am

Tom Jones says:
June 15, 2013 at 8:54 am
“It is worth reading Robert G Brown’s treatise over at Judith Curry’s blog. He sometimes writes here as rgbatduke. He is of the opinion that taking the average of the CMIP ensemble is silly, that it has no meaning. The average of garbage is just average garbage.”
Assuming that a model run’s temperature time series is the realized temperature time series plus a noise component, and that the noise is normally distributed and each model run has an independent noise component, we could reduce the noise component’s amplitude by the root of the number of model runs; so for 100 model runs we could reduce the noise component by an amplitude factor of 10.
But, as I have repeatedly tried to explain, the deviation between the real system and a simulation of the real chaotic system with an iterative model of finite resolution leads to a deviation between real system and simulation that grows beyond all bounds over time. (I said exponentially in an earlier comment on another thread but that is imprecise; the correct definition is it grows beyond all bounds).
When the error grows over time beyond all bounds, it is clearly not ordinary normally distributed noise; and even reducing it by an amplitude factor of 10 only delays the growth beyond a given bound by a CONSTANT time.
Gavin’s argument is therefore bunkum.
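
A minimal sketch of that kind of divergence, using the Lorenz-63 system as a stand-in for a chaotic climate (an illustration only, not any GCM's actual behavior): two runs differing by one part in a billion in the initial state separate until the error is as large as the attractor itself.

```python
# Two Lorenz-63 trajectories from nearly identical initial states.
import numpy as np

def lorenz_step(s, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])
for step in range(1, 6001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        # error grows roughly exponentially, then saturates at attractor size
        print(step, np.linalg.norm(a - b))
```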

scarletmacaw
June 15, 2013 9:55 am

Tom Jones says:
June 15, 2013 at 8:54 am
It is worth reading Robert G Brown’s treatise over at Judith Curry’s blog. He sometimes writes here as rgbatduke. He is of the opinion that taking the average of the CMIP ensemble is silly, that it has no meaning. The average of garbage is just average garbage.

There are two processes here.
If one takes a single model with random events built in, one can run it 1000’s of times to get an average where the random noise cancels out and the forcings are left, as Gavin suggested (or one could just turn off the random events and run it once). The spread between runs gives a reasonable estimate of the range random events play in modifying the forcings.
However, I agree with RGB that averaging 50 or so different models is meaningless. Each model should be individually compared to reality, and any that do a poor job should be discarded. When the IPCC plots a bunch of models and then implies that they are good because the actual measurements fall within the spread, they are spouting nonsense.
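
A minimal sketch of that triage, scoring each run against the observations by RMSE and keeping the best five; the arrays here are placeholders, not real CMIP5 output:

```python
# Rank hypothetical model runs against observations and keep the top five.
import numpy as np

rng = np.random.default_rng(3)
obs = rng.normal(0.0, 0.1, 400)            # stand-in for observed anomalies
models = {f"model_{i}": obs + rng.normal(0.0, 0.1 * i, 400)
          for i in range(1, 11)}           # model_1 is best by construction

rmse = {name: np.sqrt(np.mean((run - obs) ** 2)) for name, run in models.items()}
keep = sorted(rmse, key=rmse.get)[:5]      # the five closest to reality
print(keep)
```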

Lars P.
June 15, 2013 10:43 am

Tom Jones says:
June 15, 2013 at 8:54 am
It is worth reading Robert G Brown’s treatise over at Judith Curry’s blog. He sometimes writes here as rgbatduke. He is of the opinion that taking the average of the CMIP ensemble is silly, that it has no meaning. The average of garbage is just average garbage.
Tom, he has posted several posts here on this subject, you may want to read through those:
http://wattsupwiththat.com/2013/06/13/no-significant-warming-for-17-years-4-months/#comment-1334821
“Saying that we need to wait for a certain interval in order to conclude that “the models are wrong” is dangerous and incorrect for two reasons. First — and this is a point that is stunningly ignored — there are a lot of different models out there, all supposedly built on top of physics, and yet no two of them give anywhere near the same results!”
I think the post at Judith Curry’s blog is a continuation of the discussion as one reader asked Judith to publish it (see comment posted by RayG)

Greg Goodman
June 15, 2013 11:01 am

Bob, at OT this but you may find it relevant.
You may have seen my extension of Willis’ volcano stack idea:
http://climategrog.wordpress.com/?attachment_id=278
I then did a cumulative integral to estimate the volcanic effect once the natural cycles have been removed. (Much of what is usually attributed to volcanic cooling is false attribution of natural cycles).
http://climategrog.wordpress.com/?attachment_id=310
Then I realised that the pattern that comes out after the eruptions is the same as before I flattened it with the integral. That means that after the volcanoes the natural cycle is exaggerated. The climate response is to increase the natural variability.
It then hit me that this is proof of your idea that increased swings (ie the non ENSO neutral years) can inject energy into the climate system.
In the case of volcanoes we see tropical regions are not linear but controlled “like the body temperature of a mammal” as SteveF said recently 😉
I’ve posted a fuller account of all this at the Blackboard but moderation is slow over there. http://rankexploits.com/musings/2013/estimating-the-underlying-trend-in-recent-warming/
regards.

Lars P.
June 15, 2013 11:04 am

Model-Data Comparison: Hemispheric Sea Ice Area
Posted on June 15, 2013 by Bob Tisdale
Thanks Bob for yet another model-data comparison, putting the finger on another spot where it hurts!
The models need a general overhaul before using any of the data even for estimations, and indeed as rgbatduke said a great deal of those should find themselves in the history archive as erroneous tries failing to model the climate.
A question to the modellers:
To my limited understanding of the models, I think GCMs apply the CO2 forcing the same way they apply the cloud forcing, as backradiation from the cloud atmospheric level, where a fixed parameter (+3.7 W/m2 per doubling of CO2) is used.
Is it done this way?

David L. Hagen
June 15, 2013 11:08 am

See the excellent post by rgbatduke where he shreds current GCMs and “climate scientists” for their abject failures to model the real world

He observes that “a semi-empirical method” turns out best for modeling the physics of carbon, despite valiant efforts by physicists to model using supercomputers. That is still “trivially simple (in computational terms)” compared to the “set of open, nonlinear, coupled, damped, driven chaotic Navier-Stokes equations in a non-inertial reference frame that represent the climate system.” . . . First of all, we could stop pretending that “ensemble” mean and variance have any meaning whatsoever by not computing them. Why compute a number that has no meaning? Second, we could take the actual climate record from some “epoch starting point” — one that does not matter in the long run, and we’ll have to continue the comparison for the long run because in any short run from any starting point noise of a variety of sorts will obscure systematic errors — and we can just compare reality to the models. We can then sort out the models by putting (say) all but the top five or so into a “failed” bin and stop including them in any sort of analysis or policy decisioning whatsoever unless or until they start to actually agree with reality.
Then real scientists might contemplate sitting down with those five winners and meditate upon what makes them winners — what makes them come out the closest to reality — and see if they could figure out ways of making them work even better. . . .
one cannot easily use statistics to determine when or if one’s predictions are failing, because damn, climate is nonlinear, non-Markovian, chaotic, and is apparently influenced in nontrivial ways by a world-sized bucket of competing, occasionally cancelling, poorly understood factors. Soot. Aerosols. GHGs. Clouds. Ice. Decadal oscillations. Defects spun off from the chaotic process that cause global, persistent changes in atmospheric circulation on a local basis (e.g. blocking highs that sit out on the Atlantic for half a year) that have a huge impact on annual or monthly temperatures and rainfall and so on. Orbital factors. Solar factors. Changes in the composition of the troposphere, the stratosphere, the thermosphere. Volcanoes. Land use changes. Algae blooms.
And somewhere, that damn butterfly. Somebody needs to squash the damn thing, because trying to ensemble average a small sample from a chaotic system is so stupid that I cannot begin to describe it. . . .
It would take me, in my comparative ignorance, around five minutes to throw out all but the best 10% of the GCMs (which are still diverging from the empirical data, but arguably are well within the expected fluctuation range on the DATA side), sort the remainder into top-half models that should probably be kept around and possibly improved, and bottom half models whose continued use I would defund as a waste of time.

An eloquent, devastating critique of current climate models, which are so far biased hot as to give meaningless results.
Tisdale has clearly shown a major failure of the climate models to predict Antarctic ice.
Back to the drawing boards.
Nicola Scafetta’s empirical, natural-oscillation-dominated projections appear to be predicting temperatures since 2000 far better than the current menagerie of IPCC-endorsed “Global Climate Models”.
Let’s use what works fairly well while the GCM mess is sifted, weighed and found wanting.
Restore “climate science” to the world of “hard science” based on physics, with grants to the best to improve performance measured against hard reality, not lemming-driven, politically motivated prognostications fed by billion-dollar troughs.

Doug Proctor
June 15, 2013 11:48 am

This might be a good time to address the following point:
When we look at an ensemble of outcomes, i.e. Scenarios, we see the variability dependent on specific situations that arise, the various situations representing either the noise or the potential variation in important parameters. The observations we receive represent one, specific situation, which involves both fundamental, unchanging aspects, i.e. radiative forcings of various kinds, and specific instances of the variables. What we see may not be the mean, though, but one of the recognized low potential Scenarios.
In other words, when we see the observations from 1979 to 2013 match the lowest IPCC Scenario, close to “C”, we see that observations come closer to the 5% chance, but that does not mean that the mean is incorrect. What happened is 100% by occurrence, but was recognized as 5% by procedure. We could also have had the top 5%, i.e. Scenario A+, without the mean being incorrect. Each 5% would simply indicate that the variables, not the fundamentals, conspired to produce what they did. Again, the results do not invalidate the mean.
The question we must answer is, what caused the situational outcome as observed, the variables, incorrect fundamentals or a combination of both? Going forward, moreover, we need to understand the basis of the prior Scenarios: is there enough variability in noise and variables to take us from where we are in 2013 to the endpoint of Scenario A in 2100?
This is something I have spoken to a number of times: why do we continue to show the history from 1979 or so on the projections from the AR series from the same date? Should not the AR projections always restart at the end of the current observational data?
The only way forward I can see is that the Scenarios have the variability to go from the present 2013 to the endpoint of Scenario A or C in 2100. This must be the position of the climatologists with the IPCC. I don’t believe it is true, but it is the only way I see to justify Scenario A at this time: somehow we must be able, within the IPCC mathematics, to jump 3C in the next 87 years. As well, the melting of continental glaciers must be able to increase within the IPCC math to crank sea level rise to about 20 mm/yr towards the end of the century. If the variability of the IPCC processes of climate change does not have that ability – as the science is said to be much more deterministic, much less probabilistic (which is why they can claim that CO2 is the primary driver of heating) than such a rapid end-result change indicates – then nobody can place observations on the AR4 or 5 Scenarios graph. Some of the Scenarios are simply impossible to occur by 2100.
Your opinion would be appreciated.

Jimbo
June 15, 2013 12:17 pm

See the IPCC’s graphic on Arctic sea ice extent anomaly going back to 1974 up to 1990. I wonder whether it will be in the latest IPCC report due out later?
http://stevengoddard.files.wordpress.com/2013/06/screenhunter_170-jun-15-11-10.jpg
http://stevengoddard.wordpress.com/2013/06/15/ignoring-inconvenient-arctic-data/

Berényi Péter
June 15, 2013 12:29 pm

“In summary, we are definitely not interested in the models’ internally created noise, and we are not interested in the results of individual responses of ensemble members to initial conditions. So, in the graphs, we exclude the visual noise of the individual ensemble members and present only the model mean, because the model mean is the best representation of how the models are programmed and tuned to respond to manmade greenhouse gases.”
This comparison is unfair, you are comparing an average response against an individual run of a unique physical instance, apples to oranges.
The correct way to do it is to spawn scores of replica Earth systems and measure their responses to the very same carbon dioxide emission scenario, then take the ensemble average and compare that to ensemble average of computational climate model runs. We are not interested, after all, in the climate system’s internally created noise, and we are not interested in the results of individual responses of ensemble members to initial conditions, are we?
There, you have it. Admittedly, it may take some time to realize this program and it costs money to build many identical specimens of Earth, but hey, this is the way science is done.
Until it is done this way, properly, computational climate models can’t be falsified. And, if they are not falsified, they must be true, right?
Wrong. In traditional scientific practice the logical status of a theory is indeterminate until it has survived several actual falsification attempts. If the theory is designed in a way that no such procedure can ever be carried out, it lies outside the realm of science and belongs to metaphysics, not physics. That’s the current epistemological status of computational climate models.

Hoser
June 15, 2013 1:00 pm

Pamela Gray says:
June 15, 2013 at 8:19 am

“Very busy and non-random” brain activity.
Experimenter bias.

Eliza
June 15, 2013 1:01 pm

Keith Gordon: The trouble is that whenever the NH seems to be going the wrong way for the team, they tend to “freeze” the graphs… take a look at DMI. They are usually very timely when the scenario goes the team’s way (down). Polar ice melt seems to have stalled, in fact tending up again, and guess what: it’s now 4 days since the graph was last updated. They are probably wondering what to do LOL

Janice Moore
June 15, 2013 1:03 pm

[ab initio — Bob Tisdale did not ask me to (likely would prefer I not!) do this]
SUPPORT OUR BOB!
While being kicked by a donkey is no insult, it still hurts.
If you have time, read the following and let Mr. Tisdale know what YOU think of his competence and integrity.
http://bobtisdale.wordpress.com/2013/06/15/i-dont-like-being-called-a-liar-fabricator-or-data-manipulator/#comment-11773

Janice Moore
June 15, 2013 1:06 pm

AAack! I messed up the above link! (it lands on my comment, oh, brother)
Please, just SCROLL BACK UP THE PAGE on Bob’s site (linked above) and read HIS post.

Janice Moore
June 15, 2013 1:21 pm

Well done (again!), Mr. Tisdale. Your graphs (esp. figure 1, here — superCOOL!) are “worth a thousand words.”

Greg Goodman
June 15, 2013 1:51 pm

This would be an interesting thing to check in the models: relationship of atmospheric CO2 to SST:
http://climategrog.wordpress.com/?attachment_id=233
http://climategrog.wordpress.com/?attachment_id=223
Now accepting that the individual models’ “internal noise” variations look different, we’d need to look at individual models.
My guess is that they’ve hardwired the CO2 level to the human emission “scenarios”, so their “random” internal variations will be uncorrelated to SST.
If they have not got the relationship between the two primary variables they are shouting about at least a little bit right, all bets are off.

Espen
June 15, 2013 2:57 pm

Bob, while SSTs in the Southern Ocean seem to be heading downwards, what about heat content? I can’t find any newer update on Southern Ocean OHC on your site than this one which is only up to December 2011. It would be great if you could prepare a chart which includes the newest data!
http://bobtisdale.wordpress.com/2012/01/26/october-to-december-2011-nodc-ocean-heat-content-anomalies-0-700meters-update-and-comments/

goldminor
June 15, 2013 3:01 pm

With the increased sea ice extent in Antarctica, how long should it take for that extra cooling to circulate and cool larger sections of the global oceans?

Crispin in Waterloo
June 15, 2013 3:19 pm

Bob, I keep referring people to your excellent work here at WUWT. It is really the best analysis around. The quality of the comments today is also top notch.
Go team!

Jimbo
June 15, 2013 4:16 pm

When is the Antarctic terror melt going to slow down? It’s worse than I thought!

15 June 2013
“New Study Shows Antarctica Ice Is Melting 70% More Slowly Than Thought”
http://notrickszone.com/2013/06/15/new-study-shows-antarctica-ice-is-melting-70-more-slowly-than-thought-another-scare-bites-the-dust/

Brian H
June 15, 2013 4:53 pm

Tom Jones says:
June 15, 2013 at 8:54 am
It is worth reading Robert G Brown’s treatise over at Judith Curry’s blog. He sometimes writes here as rgbatduke. He is of the opinion that taking the average of the CMIP ensemble is silly, that it has no meaning. The average of garbage is just average garbage.

Indeed. Averaging multiple runs of multiple models necessarily averages the choice of parameters (selection and settings), which is inherently meaningless. Especially since the point of having different models is partly (mostly?) to distinguish the value of making various assumptions (preferably as few as possible). One can understand rgb’s agony contemplating the conceptual mess which averaging them creates (but is ignored).

June 15, 2013 4:53 pm

Bob, I recently tried to find the satellite era Antarctic Sea Ice minimum extent/area data, in order to compute a trend as we are endlessly bombarded with the trend in the Arctic sea ice minimum.
I was unable to locate the data or any source that had computed or graphed the trend.
The closest I could find was this page that shows the 1979 to 2000 average and individual years thereafter. Although note 1999/2000 should be included already in the pre-2000 average.
http://earthobservatory.nasa.gov/Features/WorldOfChange/sea_ice_south.php
Post-2000 minimums are roughly 350,000 sq km higher than the pre-2000 average. So there is a significant trend, but it would be nice to see it graphed up, if you have the data.
regards
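
For what it is worth, the trend computation being asked for is a one-liner once the series is in hand; here is a minimal sketch with invented minima (in millions of km^2) standing in for the real NSIDC Antarctic series:

```python
# Fit a linear trend to a (made-up) series of annual sea ice minima.
import numpy as np

years = np.arange(1979, 2013)
rng = np.random.default_rng(4)
minima = 2.2 + 0.01 * (years - 1979) + rng.normal(0.0, 0.15, years.size)

slope, intercept = np.polyfit(years, minima, 1)
print(f"trend: {slope * 1e6:,.0f} km^2 per year")  # million km^2 -> km^2
```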

Janice Moore
June 15, 2013 4:54 pm

Thanks for the link to P. Gosselin’s excellent site, Jimbo. Great article.
******************************************************************************
Well, looks like A-th-y is taking a well-deserved Father’s Day break. Please forgive this being TOTALLY OFF TOPIC, but I wanted to pay tribute to the dads of WUWT somewhere! You are appreciated!
!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!*!
Whether you’re a biological or adoptive or step or foster or father-figure dad, if you have ever been a dad to someone in some way, this is for you. [Note: When I use the terms “dad” and “mom” below I mean genuine, loving, parents, not emotionally or physically abusive or absent parents, regardless of their biological contributions. Some biological parents do not deserve to be called, not ever, “Mom” or “Dad.” Forgiving does not mean pretending.]
From: Your kid(s), whoever they are
To: You

A Dad Is There When It Matters
A mom is “always there” (sometimes, moms are “there” waaay too much!),
but a dad is there
when it matters.
Some dads are gone a lot. They may be in the military or have to travel for their job or just have to work long hours. They may miss the school play or the ball game or the birthday party. But you had all the glitter or fabric (or duct tape) you needed for that costume or the money for that mitt or those shoes or that bicycle because of all those long hours.
Dads don’t usually talk a lot. They may have a lot on their mind, or they may be worried about money or work, or they may just be tired. Some dads just have a hard time saying what’s on their mind, even more, what’s in their heart.
But, when something needs to be said, they say it. In the June/July “Reminisce” magazine, one long-ago bride remembered that just before she walked down the aisle on her dad’s arm on her wedding day, he whispered, “It’s not too late. You can still change your mind.” Another bride in that same magazine remembered her dad calling a halt to the nonsense of a very long “you may kiss the bride” interlude: “Is that really necessary!?” Dads don’t put up with a lot of nonsense – and that’s good. Think how many of us would be maimed or dead if dad hadn’t said: “Quit horsing around!” or “Get down from there, NOW!” Dads are good at saying important things like: “Stand on your own two feet and look them in the eye;” or “When you’re kicked by a jack-ass, consider the source;” or “Keep pedaling! … Keep pedaling! Look up! Keep pedaling!” (which of course applies to much more than just learning how to ride a bike).
Some of them never say, “I love you,” but you could see it … if you took the time to watch them.
Of course, most kids don’t. When we have all the time in the world to learn more about our dads, we are too busy living our own lives. We may think they didn’t care much, but, really, I think we just didn’t notice much of what they did that proved they cared, cared very much, indeed.
In the movie “Fiddler on the Roof,” when his second daughter, Hodl, is sitting in a tiny station, waiting for the train to Siberia, Tevye is there. His customers, for once, will have to wait. He doesn’t talk a lot, but he is there. As the train pulls away for the distant East, Tevye asks God simply to, “See that she dresses warmly.” When your dad told you to: “Wear a helmet. WEAR a HELMET!” or “Get the oil changed in your car for once!” or “Eat something. Do you eat? What do you eat? You look like you don’t eat anything;” or “Stop hanging around with those jerks. You’ll end up in jail or worse;” or “Do your homework – now!” or “So help me, if I catch you doing that again, I’ll pound the living daylights out of you;” he was really saying, “I love you. I care.”
Dads are fun! Most of them know how to play. They don’t take things too seriously (unless it really is serious). Moms tend to get way too serious. We may have nearly drowned in the middle of it, but we had fun! Dads know how to live. What is life without adventure?
If you know where to look, it’s easy to see a dad’s love. All a kid has to do is think about all the nice things dad could have had if he hadn’t been spending his money on cotton candy and tickets to the carnival and team uniforms and summer camp. And you thought doing that made him happy. Well, it did. Because you mattered more than anything in the world to him. He just wanted you to be happy. His new shoes could wait.
Because he loved you.
Real dads care and, if it is humanly possible,
real dads are there –
when it matters.
HAPPY FATHER’S DAY!

mobihci
June 15, 2013 5:16 pm

How much should we trust models of any type?
this comes down to the level of understanding of the subject matter by the creators of the models.
looking at the difference between the various peer reviewed model outputs (tropospheric temps), it is obvious that the level of understanding is nil. the range is greater than the overall change that is supposed to eventuate. this is a negative understanding ie proof of creator bias as the determining factor, not the input variables. the bias is an easy one to determine too, each model heads up.
the mean shows no understanding, and should be treated as such.
the world leading scientific body on the matter willing to use such crap as their ‘selling’ point is an indication of how poor the level of education and critical thinking has become.

Eugene WR Gallun
June 15, 2013 5:29 pm

Am I getting these things right?
1) A paper says an ice shelf melts mainly at the bottom due to the water beneath it, not at the top due to air temperature?
2) Another paper says that the Antarctic ice sheets that are melting are melting due to the arrival of warmer ocean currents beneath them?
3) Switching to Arctic ice am I remembering correctly about reading something here about a dramatic Arctic ice loss of many years ago that then researchers were blaming on a shift in the Gulf Stream sending measurably warmer water into the Arctic regions?
If I got the things above correct, then what is most probably causing today’s loss of sea ice in the Arctic? Certainly not warmer air temperatures? A shift in currents? If the currents have shifted, would another place be getting colder?
At the age of 65 I must say — expiring minds want to know!
Eugene WR Gallun.

June 15, 2013 7:05 pm

We are supposed to be interested in the ice because of its ability to reflect the warming sunlight, correct? “Albedo” is all but a magic word for many Alarmists. However, ice has no ability to reflect sunlight at night, and therefore things get more complex than simply saying, “There is less ice at the poles, so less sun is reflected, so it will be warmer.”
The way the word “albedo” is flung around makes it unclear whether it refers to the potential an object has to reflect light, or the actual light reflected. For example, a paper will say that freshly fallen snow has a high albedo of .9 (90%). Does that mean it is only .9 when the sun is shining, or does it have the same albedo when the sun has set and it is reflecting zip-zero?
For albedo to be a meaningful word, it seems that it should sink to near zero for all objects, whether white or black, after the sun goes down. However perhaps the albedo of freshly fallen snow remains at .9, because it is still reflecting 90%, even if it is only 90% of incoming starlight.
If the latter is the case, then we need a new word. (If it already exists, I don’t think laymen use it.) I propose the word be “calbedo,” named after me, so I can become famous, and rich enough to hire Bob to do some graphs I’m curious about.
I’d like to see the globe divided into stripes of latitude, such as the area between the pole and 80 degrees, and the area between 80 degrees and 70 degrees, and so forth. The farther you got from the pole and closer you got to the equator, the higher the sun would be in the sky, and the greater the power the ice would have. The ice would have more power because the sunbeams were more intense, and so would be the reflections. In the same way the ice in June would have more power than the ice in September.
The word “calbedo” would emphasize the power of the Antarctic ice, which extends quite far from the south pole. They have ice that, if you flipped the globe, would be like having ice clear south to the north coast of Scotland on the first day of summer.
The word “calbedo” would also emphasize how little power the ice at the north pole has by the time the records are set and Alarmists are freaking out, in September. By then the sun is about to set on the pole and getting quite low at the arctic circle. (Also another weird factor is coming into play up north in September: once the sun is sitting near the horizon, open water starts to have a higher albedo than ice does.)
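
A rough numerical sketch of the “calbedo” idea: reflected power is albedo times the actual insolation, which collapses when the sun is low. Noon only, no atmosphere, purely illustrative numbers:

```python
# Reflected shortwave at local noon for a surface of given albedo.
import numpy as np

def noon_reflected_wm2(albedo, lat_deg, solar_decl_deg, s0=1361.0):
    cos_zenith = np.cos(np.deg2rad(lat_deg - solar_decl_deg))
    return albedo * s0 * max(cos_zenith, 0.0)  # sun below horizon: nothing to reflect

print(noon_reflected_wm2(0.9, -60.0, -23.4))   # Antarctic ice edge, December: ~980 W/m^2
print(noon_reflected_wm2(0.9, 85.0, 23.4))     # high Arctic, June: ~580 W/m^2
print(noon_reflected_wm2(0.9, 85.0, 2.2))      # high Arctic, mid-September: ~150 W/m^2
```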

Greg Goodman
June 16, 2013 1:17 am

Philip Bradley says: “Bob, I recently tried to find the satellite era Antarctic Sea Ice minimum extent/area data, in order to compute a trend as we are endlessly bombarded with the trend in the Arctic sea ice minimum.”
Rather than producing an equally meaningless “trend” for the other end of the planet from one day per year series ignoring 364/365th of the available data, maybe we should be using ALL the daily data to show the complementarity of the poles.
http://climategrog.wordpress.com/?attachment_id=206
or if we are interested in change we should be looking at the change directly not trying to guess it by looking at area time series:
http://climategrog.files.wordpress.com/2013/03/ddt_arctic_ice.png
Fighting fire with fire can sometimes be effective. Fighting stupidity with more stupidity tends to legitimise the former.
There’s much enlightenment to be found in ice cover data if we use ALL of it. Using just one day per year is egregious cherry picking at its worst. This plot has some links you may find useful.
http://climategrog.wordpress.com/?attachment_id=226
spectral analysis reveals some interesting patterns too
http://climategrog.wordpress.com/?attachment_id=216
There’s lots of interesting information in that; if only we could get away from the obsession with unrepresentative hype like the annual minima, we might actually learn something about what is happening.

June 16, 2013 5:29 am

Gavin Schmidt replied with a general discussion of models:
Any single realisation can be thought of as being made up of two components – a forced signal and a random realisation of the internal variability (‘noise’). By definition the random component will [be] uncorrelated across different realisations and when you average together many examples you get the forced component (i.e. the ensemble mean).
================
chaos cannot be averaged in this fashion. noise can be averaged because it is randomly distributed positive and negative. the law of large numbers tells us that as the sample size increases, the noise will average out to zero.
however, it is well recognized that weather is chaotic. when you average weather to get climate you are not averaging noise, you are averaging chaos. while chaos looks like noise, it is not. chaos is not subject to the law of large numbers. it lacks the constant mean and deviation required for the law of large numbers to hold.
thus, when you try and average chaos over time it does not average out to zero. rather it wanders in an unpredictable fashion. this leads to spurious (false and misleading) trends when you try and do regression analysis (fit a trend line) on the data. What looks like a real trend (warming or cooling) is simply the orbit of the system around its attractors. not real trends at all as we typically think of them. rather cycles that never repeat identically. snowflakes that all look similar, yet no two are the same.

June 16, 2013 5:56 am

Pamela Gray says:
June 15, 2013 at 8:19 am
The runs are completed to demonstrate this noise and its average anomaly. Which should eventually cancel to 0 if enough runs are completed.
==============
however, in a chaotic system, the “noise” only averages out to zero at infinity.
the problem in the climate models is they are based on a faulty mathematical assumption. they assume that weather (chaotic) will average out over 30 years into something that is not chaotic and thus might be predictable.
however, you cannot average chaos in this fashion except over very short time periods when the system is locally orbiting an attractor. the longer the system runs, the more likely it is to wander off towards another attractor, rendering your carefully calculated average meaningless.
For example, when you look at the earth’s average temperature over the past 80 years you will get one number, but if you increase the time period to 8 thousand years to include all of the Holocene, you will get a higher number; if you further increase the time period to 80 thousand years to include the previous ice age, you will get a lower number. If you increase the time period to 80 million years, you will get a higher number.
which of these four numbers is the correct average temperature of the earth? they are all different. which one is correct? if they are all correct, how can we calculate a meaningful average? That is the problem with chaotic systems. As you increase the scale you get a different result. And it keeps changing all the way out to infinity.
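
here is a toy version of this scale problem (illustrative numbers only): give a series one slow “ice age” cycle plus a fast wiggle, and its “average” depends entirely on the window you choose.

```python
# The mean of a series with a slow cycle depends on the averaging window.
import numpy as np

t = np.arange(100_000.0)                                  # years
series = (10.0 * np.sin(2 * np.pi * t / 80_000.0 + 0.5)   # slow cycle, phase-offset
          + 1.0 * np.sin(2 * np.pi * t / 60.0))           # fast wiggle

for window in (80, 8_000, 80_000):
    print(window, round(series[:window].mean(), 2))       # three different "means"
```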

Lars P.
June 16, 2013 6:15 am

Doug Proctor says:
June 15, 2013 at 11:48 am
When we look at an ensemble of outcomes, i.e. Scenarios, we see the variability dependent on specific situations that arise, the various situations representing either the noise or the potential variation in important parameters. The observations we receive represent one, specific situation, which involves both fundamental, unchanging aspects, i.e. radiative forcings of various kinds, and specific instances of the variables. What we see may not be the mean, though, but one of the recognized low potential Scenarios.
In other words, when we see the observations from 1979 to 2013 match the lowest IPCC Scenario, close to “C”, we see that observations come closer to the 5% chance, but that does not mean that the mean is incorrect. What happened is 100% by occurrence, but was recognized as 5% by procedure. We could also have had the top 5%, i.e. Scenario A+, without the mean being incorrect. Each 5% would simply indicate that the variables, not the fundamentals, conspired to produce what they did. Again, the results do not invalidate the mean.

Not sure what you are calling the “C” IPCC scenario? It would help if you pointed to it.
“that does not mean that the mean is incorrect”
How do you come to that conclusion?
” We could also have had the top 5%, i.e. Scenario A+, without the mean being incorrect.”
I doubt very much it could be that way.
To my understanding the scenarios are not different runs with the same inputs with different outcomes, but different inputs in the parameters.
How much CO2 emission really occurred? If the scenarios vary with the CO2 output, then your logic is very wrong.
To my understanding the human CO2 emissions have exceeded all scenarios, while the temperature has underperformed all of them.
This shows a total disconnect between the scenarios and nature. Remember the scenarios do not have the physics inside and run. The scenarios have functions that according to the scenario programmers best emulate the result of a combination of different processes, some of which are not yet understood and not tested in practice.
One can run a model many times. If none of the runs emulates the current temperatures, the model is to be scrapped and should not participate in computing the mean.

Lars P.
June 16, 2013 6:17 am

sorry messed up “models” with “scenarios” above, but you get it…

goldminor
June 16, 2013 9:45 am

Bob Tisdale says:
June 15, 2013 at 4:09 pm
You really should learn how to use the KNMI Climate Explorer:
——————————————————————————–
That looks like a great tool. There is so much reading and info to assimilate though. I would like to restart my math skills, also. That will be a major endeavour. Math was my strong suit back in my school days, mostly A’s. I could use that level of mental exercise right about now. On my SAT in the 60s, the math side was about 30 points higher than my reading comprehension.

Douglas Proctor
Reply to  goldminor
June 16, 2013 10:16 am

Lars P.
The model mean is simply the result of looking at all possible outcomes and finding the most common, average-not-extreme path. If any actual event could be an extreme, the 5% event, the model mean is still real in a mathematical sense. Going forward, however, the model mean only has future meaning if all Scenarios going forward still have the same probability of happening as they did in the beginning. That means that we could still go to +3C in 2100 from today.
I’m arguing that PROCEDURALLY what AR5 Figure 4 as we have seen is correct, and the mean is just one of the possible outcomes, though statistically closer to what is likely to happen than the outside: Scenario C being the low outside. REPRESENTATIONALLY, however, global temperatures tracking at the low end may say that it is incorrect to consider it just a low-probability event that actually occurred (the 1 in 20 poll that was bizarre).
Now that we have seen the northern and southern ice behave as they have, although this situation may have been one of the outcomes in the IPCC story, to get to a world-wide flood we have to change this current situation significantly. What Scenario does this? Do ANY of the IPCC models take us along the path we have taken AND get us to the Deluge?
I do not argue about the math. That argument seems futile, as the PROCEDURE is neither correct nor incorrect in itself; it just is. The procedure may be, and is, inappropriate to tell us what will happen next, however.
The big question is, how do we get there from here? Can we, and if we cannot, then why are we still looking at Scenario A with its loss of polar ice and the drowning of continental margins?

phlogiston
June 16, 2013 11:11 am

Thanks Bob – figure 1 is as good a signature of the bipolar seesaw as I have seen. The bipolar seesaw could according to some (e.g. Tzedakis) signify the approaching end of the current interglacial.

phlogiston
June 16, 2013 11:14 am

See also this Tzedakis paper: http://www.clim-past.net/8/1473/2012/cp-8-1473-2012.pdf
which was the subject of a thread here by William McClenney:
http://wattsupwiththat.com/2012/10/02/can-we-predict-the-duration-of-an-interglacial/

Pamela Gray
June 16, 2013 2:06 pm

Fred, agreed. But for the purposes of 200 years out, we don’t need to go back 20 zillion years. Maybe 400. Maybe less. But certainly not just 60 years, or even 80 years. That would not be enough to take into account all the possible natural intrinsic drivers of weather pattern variations.

June 16, 2013 4:38 pm

ferd berple says:
“the problem in the climate models is they are based on a faulty mathematical assumption. they assume that weather (chaotic) will average out over 30 years into something that is not chaotic and thus might be predictable.”
The problem is that weather is not merely internal variation, but is largely externally forced by short term solar factors. That is exactly why it is meaningless to average out 30yrs worth to look for a CO2 signal. With extensive hind-casting and 5+yrs of producing solar based weather forecasts, I can guarantee that weather is far from chaotic, and is highly predictable.

June 16, 2013 5:18 pm

Greg Goodman says:
June 16, 2013 at 1:17 am
Rather than producing an equally meaningless “trend” for the other end of the planet from one day per year series ignoring 364/365th of the available data, maybe we should be using ALL the daily data to show the complementarity of the poles.

I disagree. With sea ice the same cause, say cloud changes, can have effects with the opposite sign between summer/winter and day/night. And looking at minimum and maximum effects helps differentiate between summer and winter effects.
For example, comparing Arctic sea ice min and max area/extent changes shows the loss of ice is wholly a summer effect. No amount of manipulating 365 days data would show you that.
————————————————–
Thanks Bob. I think it gave me the data I wanted.

phlogiston
June 17, 2013 2:12 am

Ulric Lyons says:
June 16, 2013 at 4:38 pm
ferd berple says:
“the problem in the climate models is they are based on a faulty mathematical assumption. they assume that weather (chaotic) will average out over 30 years into something that is not chaotic and thus might be predictable.”
The problem is that weather is not merely internal variation, but is largely externally forced by short term solar factors.
It’s a mixture of the two – the system is likely to be a weakly forced nonlinear oscillator, or a set of oscillators. There is external forcing, and also internal nonlinear dynamics. Due to the weakness of the forcing, it might be hard to impossible to resolve the forcing signal from the emergent wavetrain – at least using traditional methods. (Strong forcing means you have a regular monotonic signal, like summer-winter, spring and neap tides. We don’t see this, thus the forcing is weak and complex.)

June 17, 2013 6:25 am

@phlogiston
The external forcing is an event series, not oscillatory, it is very strong, and is directly responsible for most short term land temperature deviations and teleconnection statuses including ENSO.

goldminor
June 17, 2013 12:14 pm

@ Bob Tisdale…I notice that once again the daily sst information did not come out. Do they close on Sundays now? The last pic I have is from the 15th. There is no 16th and today is the 17th. This also happened last week.
Also, that Arctic sea ice line is sure staying high as compared to last year. Maybe I should have stuck with 6.0+ as my prediction.

goldminor
June 19, 2013 4:08 pm

Interesting, the Unisys sst chart for the 17th was skipped and the 18th shows quite a change around southern Greenland. That is the second missing day of data this month.