Lakes For Sale, Partially Thawed, N=20

Guest Post By Willis Eschenbach

Anthony pointed out the selling of overhyped claims of the “dramatic thinning” of Arctic ice here. The title of the underlying scientific study is much more prosaic, Response of ice cover on shallow lakes of the North Slope of Alaska to contemporary climate conditions (1950–2011): radar remote-sensing and numerical modeling data analysis.  (PDF). To their credit, the authors make no such claims of drama in their text, which is generally quite appropriately restrained.

Here is their complete “dramatic” dataset of the lakes around Barrow, Alaska, the northernmost point in the US:

Figure 1. Percentage of lakes in the low-lying tundra around Barrow, Alaska, that are partially thawed in late April, 1992-2011. Photo Source.

It’s an interesting study. They noted that partially thawed lakes look very different on radar than when the same lakes are frozen solid. As a result, they’ve collected solid data that is not affected by urban warming. So … what’s not to like in the study? Let me start with what is to like in the study.

I do like the accuracy of the measurements. It’s an interesting metric, with very objective criteria. I like that they listed the data in their paper, and showed photos for each of the years. I like that they didn’t try to project the results out to 2080.

What I didn’t like is where their study went from there. After collecting all that great data, they immediately sent out for that perennial favorite, a global climate model … not my style at all.

So rather than pointing out that their study is models all the way down, I figured I’d just show the kind of analysis that I would do if I were handed the lake thawing data.

First thing I’d need for the analysis? MORE DATA. Piles and piles of data. So I went out and dug up two datasets: Barrow temperature, and Barrow snow depths. I started with just the temperature, but it turns out that the correlation between temperature and the lake thawing isn’t all that good. It doesn’t explain much; the best correlation is with December temperatures, four months prior to the thawing, at a correlation of 0.68. However, at least it gives a good idea of what’s been going on, because we have good records clear back to 1920.

Figure 2. Winter temperatures in Point Barrow (pale blue line) and the 17-year Gaussian average of the data. Photo source http://www.panoramio.com/photo/63484316

I note in passing that Barrow has a well-documented Urban Heat Island that is at its strongest in winter … and despite that, the 1930s and 1940s both had warmer winters than the last decade. I also note in this context of winter-business-as-usual that the study says:

Climate-driven changes have significantly impacted high-latitude environments over recent decades, changes that are predicted to continue or even accelerate in the near future as projected by global climate models …

… but I digress.

So the next obvious suspect for a correlation with the lake thawing is the snow depth. It’s an odd fact of nature that snow is a good insulator. It both slows heat transfer by insulating the surface, and keeps the wind from contacting the ice.

So I looked at the average snow depth data (scroll down to “Custom Monthly Listing” in sidebar) … but it’s not all that good at emulating the ice thawing either; in fact it’s worse. The best correlation with average snow depth is only 0.51, again with December coming out on top. So, having investigated single variables to try to emulate the lake thawing, I turned to the combination of snow depth and temperature … not much luck there either. In fact, the only way I could get a good correlation was to use the combination of the Nov-Dec-Jan average temperature and the December snow depth. This gave me a correlation of 0.81, and a p-value of 0.001 … which turns out to be just barely significant. Here’s the emulation:
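For those who want to play along at home, the two-predictor fit described above is nothing exotic: an ordinary least-squares fit of the thaw percentage on the Nov-Dec-Jan mean temperature and the December snow depth, scored by the correlation between the fitted series and the observations. Here’s a minimal sketch; note that the numbers below are made-up placeholder values, NOT the Barrow records.

```python
# Sketch of a two-predictor emulation: regress percent-thawed on
# NDJ mean temperature and December snow depth, report correlation.
import numpy as np

def emulate(ndj_temp, dec_snow, pct_thawed):
    """Least-squares fit pct_thawed ~ a*temp + b*snow + c; return fit and r."""
    X = np.column_stack([ndj_temp, dec_snow, np.ones_like(ndj_temp)])
    coeffs, *_ = np.linalg.lstsq(X, pct_thawed, rcond=None)
    fitted = X @ coeffs
    r = np.corrcoef(fitted, pct_thawed)[0, 1]
    return fitted, r

# Hypothetical 20-year inputs, standing in for the real data:
rng = np.random.default_rng(0)
temp = rng.normal(-25, 3, 20)   # NDJ mean temperature (deg C), made up
snow = rng.normal(20, 5, 20)    # December snow depth (cm), made up
thaw = 30 + 1.5 * temp - 0.8 * snow + rng.normal(0, 5, 20)
fitted, r = emulate(temp, snow, thaw)
print(round(r, 2))
```

With real data you’d of course feed in the actual Barrow records rather than synthetic values.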

Figure 3. Emulation of Barrow lake thawing. Observations (thick red line) compare well with the emulation (thin green line). Correlation is 0.81, p-value is 0.0010.

Now … why did I say that a p-value of 0.001 is “barely significant”, when the usual level is a p-value of 0.05? Well … because I looked at so many possibilities before finding what I sought. All up, I looked at maybe 40 possibilities before finding this one. If you want to establish significance at the level of a p-value of 0.05, and you look at N datasets before finding it, you need to find something with a p-value less than 1 - 0.95^(1/N), or equivalently 1 - 10^(LOG(0.95)/N). For N = 40, that gives a required p-value of better than 0.0013 … so with a p-value of 0.0010, my emulation just made it under the wire.
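This look-elsewhere adjustment is the standard Šidák correction for multiple comparisons, and it’s a one-liner to verify:

```python
# Sidak correction: if N candidate models are examined, the per-test
# p-value threshold that keeps the family-wise error rate at alpha
# is 1 - (1 - alpha)**(1/N).

def sidak_threshold(alpha, n_tests):
    """Per-test p-value needed for family-wise significance at alpha."""
    return 1 - (1 - alpha) ** (1 / n_tests)

p_required = sidak_threshold(0.05, 40)
print(round(p_required, 4))  # about 0.0013, so p = 0.0010 squeaks under
```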

Next, I looked at what that same emulation would look like over the whole period 1950-2013 for which we have records, and not just the period 1992-2011 of the study (the “N=20” of the title). Figure 4 shows that result.

Figure 4. Exactly as in Figure 3, but covering the entire period of record.

OK … not a lot going on there. Now, those who follow my work know that I’m quite skeptical of this kind of modeling, particularly with such a short record. What I do to test that is first to find a model with an acceptable p-value. Then I take a look at both the emulation shown above, along with the same emulation using just the first half of the data to fit the parameters, and then the same thing using just the second half of the data. Figure 5 shows that result:
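The split-half stability check described above is easy to sketch: fit the same model three times (full record, first half only, second half only) and see whether the three emulations track each other. Again, the data below are synthetic placeholders, not the Barrow records.

```python
# Split-half stability check: refit the same linear model on the full
# record and on each half, then compare the resulting emulations.
import numpy as np

def fit(X, y):
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

rng = np.random.default_rng(1)
n = 64
X = np.column_stack([rng.normal(size=n), rng.normal(size=n), np.ones(n)])
y = X @ np.array([1.5, -0.8, 30.0]) + rng.normal(0, 1, n)

half = n // 2
full = X @ fit(X, y)                  # emulation fit on all years
first = X @ fit(X[:half], y[:half])   # parameters from first half only
second = X @ fit(X[half:], y[half:])  # parameters from second half only

# If the model is stable, the three curves should track each other;
# if the half-data fits diverge wildly, the model is overfit.
print(round(float(np.corrcoef(full, first)[0, 1]), 2))
print(round(float(np.corrcoef(full, second)[0, 1]), 2))
```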

Figure 5. As in Figure 4, but showing the emulation based solely on the first half of the data (light blue), and that based solely on the second half (dark blue).

As emulations go, in my experience that’s not bad. The general shape of the emulation is well maintained, and neither of the two half-data emulations goes far off the rails, as is all too common with this type of analysis.

So that’s how I’d analyze the data, at least to begin with. My conclusions?

Well, my first conclusion has nothing to do with the lakes. It has to do with Figure 2, which shows that there is nothing out of the ordinary happening to Barrow winter temperatures. So whatever you might want to blame the lake thawing on, it’s not the local temperature. It hasn’t changed much over almost a century; it just goes up for a while and then down for a while.

The second conclusion is that the changes in the lake thawing dates over the period of study are not “dramatic”. In fact, they are boringly mundane. The only thing “dramatic” is the press release, which is no surprise.

The third conclusion is that I wouldn’t trust my emulation of lake thawing all that far … the problem is that with N=20, we have so little data that any conclusions and any emulations will be fraught with uncertainty. Heck, look at Figure 1 … up until a few years before the end of the data there was not even much trend. It’s just too short to conclude much of anything.

Next, I wouldn’t trust their “CLIMo Lake Ice Model” much further than I’d trust my emulation above. Again, the underlying problem is lack of data … but to that you have to add the unknown performance of the CLIMo model.

Finally, while the authors were restrained in their study, they cut loose in their quotes for the press release, viz:

“We’ve found that the thickness of the ice has decreased tremendously in response to climate warming in the region,” said lead author Cristina Surdu, a PhD student of Professor Claude Duguay in Waterloo’s Department of Geography and Environmental Management. “When we saw the actual numbers we were shocked at how dramatic the change has been. It’s basically more than a foot of ice by the end of winter.”

and

“Prior to starting our analysis, we were expecting to find a decline in ice thickness and grounded ice based on our examination of temperature and precipitation records of the past five decades from the Barrow meteorological station,” said Surdu, “At the end of the analysis, when looking at trend analysis results, we were stunned to observe such a dramatic ice decline during a period of only 20 years.”

I see nothing “stunning” or “dramatic” in their results at all. Overall, it’s quite ho-hum.

My warmest regards to all, it’s bucketing down rain here after a long period of drought, life is good.

w.

AS USUAL … if you disagree with me or anyone, please quote the exact words you disagree with, and give us your objection to those words. That way, we can all be clear exactly what it is you are objecting to.

DATA AND CODE: Primary sources given above, plus it’s all in my Excel spreadsheet, Barrow Lake Thawing …

64 Comments
Clay Marley
February 6, 2014 7:32 pm

Willis Says

That’s a “user function” that I wrote a long while back to do a gaussian average. It takes as input a single data point and a value for the width of the gaussian filter (actually (filter width - 1)/2). It assumes that the list of data is in a single column, with some text above the data to prevent averaging before the start of the data.
About the only interesting thing about the function is the treatment of the end-points, which uses a method I invented and I discussed in Michael Mann, Smooth Operator.
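For readers without the spreadsheet, a minimal sketch of a centered Gaussian average of this general kind might look like the following. Willis’s particular endpoint method (from “Michael Mann, Smooth Operator”) is not reproduced here; this sketch simply truncates and renormalizes the kernel near the ends, and the sigma-from-half-width convention is an assumption.

```python
# Centered Gaussian-weighted moving average with truncated,
# renormalized kernel at the endpoints (a common simple choice;
# NOT the endpoint treatment Willis describes).
import math

def gaussian_smooth(data, half_width):
    """Gaussian average; half_width = (filter width - 1) / 2, >= 1."""
    sigma = half_width / 2.0  # assumed convention, not from the post
    out = []
    for i in range(len(data)):
        lo = max(0, i - half_width)
        hi = min(len(data), i + half_width + 1)
        idx = range(lo, hi)
        weights = [math.exp(-((j - i) ** 2) / (2 * sigma ** 2)) for j in idx]
        total = sum(weights)  # renormalize the truncated kernel
        out.append(sum(w * data[j] for w, j in zip(weights, idx)) / total)
    return out

smoothed = gaussian_smooth([1.0, 2.0, 1.5, 3.0, 2.5, 4.0], 2)
```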

Thanks much Willis, I’ll take a look. I remember that article well. It’s the one that got me turned onto LOESS curves.

Rick
February 6, 2014 7:48 pm

Les H, I’ve looked at 1st and last date of a given year for ice bridge operation on this chart and have been unable to see any pattern or correlation one way or the other. Haven’t looked at the other links.
http://www.dot.gov.nt.ca/_live/pages/wpPages/Open_Close_Dates_Ice_Bridges.aspx

LesH
February 6, 2014 8:58 pm

The idea is that the total number of days that the ice bridge is open for traffic gives a measure of ice extent/integrity for any given year (the crossings are opened and closed based on a range of factors including ice thickness, density, strength, etc.). Each crossing would be indexed to itself (its own average) and then the anomaly would give some indication of the extent of “unusual” ice melt or extension in any given year. Or perhaps graph the raw numbers of days for each year and do the statistical analysis for trends and anomalies. Such analysis is beyond me, but the initial graph of one crossing looks really intriguing.

LesH
February 6, 2014 9:25 pm

Just think of the potential title “Ice road Trucks weigh in on Climate change!”

Toto
February 6, 2014 11:24 pm

ferdberple says:

This is the problem behind the hockey stick and so many other studies in climate science. The researchers search and search for a statistical method that shows correlation, while ignoring the larger set of methods that say there is no correlation. From this they make the faulty conclusion that the correlation they find is real, when in fact the correlation is simply due to chance.

This is a classic case of “torturing the data” — keep repeating the question until it gives the answer you want to hear and only report that one. It’s not just a figure of speech.

Jim Happ
February 7, 2014 5:41 am

If N=20 is too low a number, just convert to dog years. Or imagine how much less or more time a year might take if our planet were in a lower or higher orbit.

Rick
February 7, 2014 7:07 am

LesH Here’s some fun with cherry picking.
With that in mind and looking at your 1983/84 to 2010/11 ice bridge chart take a look at Tsiighetchic, the last Mackenzie River crossing before the Mackenzie Delta town of Inuvik.
During the 1997/98 season, at the height of the global warming scare, the ice bridge stayed open until May 17th, a date so late in the season that it has not been exceeded before or since. The chart doesn’t show spring 2013 numbers, when a new record may have been set.

LesH
February 7, 2014 5:35 pm

Hi again
Rick and Willis, thanks for taking a look at that info for me. The reason I thought it might be a good metric is that the closing date of those crossings has to do with the spring break-up, which is not a measure of the temperature at the site of the bridge, but a factor of the water flow out of the drainage basin. In this case, the drainage basin is a little bit bigger than the entire state of Alaska.
A metric that would reflect melting on that scale would be useful in establishing general trends.
Again, thank you for your help. I thoroughly enjoy your comments, humor and your articles. Press on.
lh

Rick
February 7, 2014 9:49 pm

Good one Willis.
That was my chuckle of the day.

LesH
February 8, 2014 3:38 pm

At the risk of flogging a dead horse…
If I remember right the average opening on those crossings ranged about 130-140 days, which means a lot of the points are on the edge of, or over, a 2 sigma deviation from the norm. Two things come to mind.
First, this kind of behavior is not strictly random; no bell curves would come out of this. My mind wanders as to the cause of such accentuated swings in the data. I wonder if there is something in the nature of flushing in the watershed, with cold winters accentuating the ice dam effect upstream and warm winters minimizing it? If so, is there a different statistical process for such data? Of course, if this Arctic data has such accentuated swings, what other Arctic data exhibits the same tendency? But the mind wanders…
The second thing that comes to mind is that while a statistically accurate line or curve may not be able to be drawn through the data, it would seem obvious (eyeballing it) that the average of that data indicates only a short decline (perhaps 5 days?) in the overall length of the ‘open road’ season over the 28 years of the data. So while a very weak warming trend may be inferred, it is certainly NOT the kind of dramatic change indicated by the original paper. While this data may not positively affirm a specific trend, it does lend weight to doubting the assertion of the original paper.
regards to your tireless crew!
lh

Rick
February 9, 2014 7:46 am

LesH
The study of climate and weather has taught us that torturing signals from disparate data points requires an open mind. I think that you and I should jointly apply for a grant to study this ice road thing.

LesH
February 10, 2014 12:27 am

A grant you say?? They would want us to get published!
To get published we would:
1) need to lack relevant degrees
2) have a predilection for speculation
3) have a dramatic media presence with no sense of humor or irony.
Hmmm.. speaking for myself, 1=2 out of three ain’t bad but given the feeding frenzy, I don’t think we would make it.
You will have to find another dancing partner!

ps. sarc intended (just in case you fared better than me on the check list!!)