GISS Arctic -vs- DMI Arctic: differences in method

We’ve all seen this graph below of Arctic Temperature above 80°N from DMI. But, there’s something surprising about how it is created.

In this guest post, Harold Ambler finds that DMI actually goes to the trouble of applying as many data sources as it can to its numerical weather prediction model input, rather than simply extrapolating from the nearest ground-based stations as GISS does. – Anthony

Danish Meteorological Institute scientists measure temperature. GISS scientists are seldom pictured performing such menial tasks.

Guest post by Harold Ambler

As has been well covered by Steve Goddard on WUWT, the “interpretation” of Arctic conditions by NASA/GISS is based on astonishingly little data north of 80 degrees latitude, which is to say no data at all.

As the Danish Meteorological Institute (DMI) has been offered as a source of actual data and information, rather than imaginary data and imaginary information, and as the word “model” has been bandied around on WUWT as a problematic aspect of DMI’s temperature product, I thought now might be a good time to share an e-mail exchange I had several months ago with the DMI’s Gorm Dybkjær. Below is a lightly edited version of our exchange. Many of Dybkjær’s statements are very interesting.

Dear DMI:

I am an American journalist completing a book about climate change and have been studying your Arctic temperature graph for some time. The graph says that the data are obtained by the use of a model.

I wonder if you can tell me how many temperature stations the average represents, and why the word model is used. (I would expect the word “model” to refer to a predictive computer analysis, as opposed to a descriptive one.)

Would it be possible to clear this up?

Thank you in advance.

Sincerely yours,

Harold Ambler

To which Dybkjær responded:

Dear Harold

Concerning your question about the number of in situ temperature observations (direct measurements) available in the Arctic – the brief answer is – there are not many! My guess is that the number of buoys in the Arctic Ocean that provide near-real-time temperature observations for e.g. numerical weather prediction (NWP) models is around 50. The number of land-based weather stations on the rim of the Arctic Ocean is probably even smaller. You must contact the WMO (World Meteorological Organization) for more accurate numbers. So by dividing the area that the ‘mean temperature’ graph represents by 100 temperature observations, you will of course find that each observation must represent an enormous area. That is exactly why you want to use NWP models to estimate distributed temperatures in the Arctic.
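The arithmetic Dybkjær alludes to is easy to sketch. The numbers below are strictly back-of-envelope, not DMI's: they assume a spherical Earth of radius 6371 km and his round figure of roughly 100 observations.

```python
import math

def cap_area_km2(lat_deg, radius_km=6371.0):
    """Area of the spherical cap poleward of lat_deg, in km^2."""
    return 2 * math.pi * radius_km**2 * (1 - math.sin(math.radians(lat_deg)))

area = cap_area_km2(80.0)    # region covered by the DMI '+80N' graph
per_obs = area / 100         # Dybkjaer's round figure of ~100 observations

print(f"area north of 80N:    {area:,.0f} km^2")   # ~3.9 million km^2
print(f"area per observation: {per_obs:,.0f} km^2")
```

On these assumptions, each observation stands in for nearly 40,000 square kilometres, which is why the in situ network alone cannot produce a distributed temperature field.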

The NWP models used for the ‘mean plus 80N temperature’ graph are, as you mention, predictive numerical models. However, before you let the model ‘go’ to do the weather forecast calculations, you must estimate the initial state of the atmosphere. The initial state of the atmosphere is the best guess, based on all observations you have available and the coupled physical constraints of the model. The approximately 100 in situ surface temperature observations are only a very limited part of ‘all available observations’ you feed into the model. You have measurements from airplanes, atmospheric profiling instruments mounted on balloons and then, of course, by far the most valuable input to NWP models today – a huge amount of observations from satellites.

From these data sources all kinds of atmospheric variables are measured/estimated and assimilated into the NWP models. From a ‘bargain’ between the coupled model physics and all the applied observations, the model calculates the best initial state of the entire atmosphere. That initial state – the model analysis – is the best guess you get of, e.g., distributed surface temperatures in the Arctic.

Hope you can use this clarification.

Best Regards

Gorm /Center for Ocean and Ice, DMI

I found Dybkjær’s response helpful and also confusing. Below are follow-up questions I sent him paired with his responses:

Hi Gorm,

Thank you for your response.

I think I am understanding you somewhat and have a few follow-up questions:

1. Does DMI’s ‘mean plus 80N temperature’-graph use measurements from airplanes?

All available observations, including measurements from airplanes, are used by the models to calculate the best guess of the atmospheric condition. This ‘best guess’ (or ‘analysis’) is calculated 4 times per day, of which the 00z and 12z analyses are the basis for the ‘plus 80North’ temperature graph. I would recommend you contact the European Centre for Medium-Range Weather Forecasts (ECMWF) for details on the number of observations they use for any of their model analyses.

2. Do you use measurements from satellites?

Dybkjær: A huge amount of satellite data are also used to produce the ‘best guess’… (see above)

3. Do you use measurements from balloons?

see above

4. Does the number of data-sources change on a daily basis?

Yes – but I do not believe this has a significant effect on the day to day quality. Contact the ECMWF!

5. Do you adjust for this?


6. If you do use the sources listed in 1-3, who provides you with the data?

At DMI we get most of our ground-based measurements through the WMO – satellite data we either retrieve ourselves or get through various data networks. I guess the same is the case at ECMWF, who run the models used for the temperature graph we are talking about here – so for more details on this please contact ECMWF.

7. Some of the spikes in the record look extraordinarily sharp, and I had previously understood such moments to be cases where sub-polar air overran the Arctic basin. But I wonder if, to some extent, they represent the model over-reacting to a single spike in data from just a few sources? For instance, when I eyeball the temperatures around the Arctic basin, they don’t in every case appear to correspond to the spikes on your graph?

I believe – in general terms – that the spikes of the graph are realistic, but to discuss this further we would have to look at specific cases. As I mentioned in an earlier mail, the ‘plus 80 North mean temperature’ values are the mean of all model grid points in a regular 0.5 degree grid – meaning that along each half-degree parallel north of 82N, you have 720 temperature values! That means that the ‘plus 80 North mean temperature’ is strongly biased towards the temperatures in the very central Arctic and therefore less affected by temperatures along the rim of the Arctic Ocean. Therefore, you can use the plotted ‘plus 80 North mean temperature’ graphs to compare one year to another or to the climate line, but you should NOT compare the mean temperatures to a specific temperature measurement.
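Dybkjær's point about the grid can be illustrated with a toy calculation. On a regular latitude-longitude grid every parallel carries the same 720 points regardless of how little area it encloses, so an unweighted mean over-counts the pole. The temperature profile below is invented purely for illustration, not taken from any DMI product:

```python
import math

# Invented temperature profile: +3 C at the 80N rim, -2 C at the pole.
def toy_temp(lat):
    return -2.0 + 0.5 * (90.0 - lat)

lats = [80.0 + 0.5 * i for i in range(20)]   # parallels 80.0 .. 89.5

# Grid-point mean: each parallel contributes 720 equal-weight points,
# so every parallel counts the same and the pole is over-represented.
unweighted = sum(toy_temp(lat) for lat in lats) / len(lats)

# Area-weighted mean: weight each parallel by cos(latitude),
# proportional to the area it actually encloses.
weights = [math.cos(math.radians(lat)) for lat in lats]
weighted = sum(w * toy_temp(lat) for w, lat in zip(weights, lats)) / sum(weights)

print(f"unweighted (pole-biased) mean: {unweighted:+.2f} C")
print(f"area-weighted mean:            {weighted:+.2f} C")
```

With this made-up profile the unweighted mean sits noticeably colder than the area-weighted one, which is the bias toward the central Arctic that Dybkjær describes.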

8. The word model is still confounding here: Basically, the graph represents initial conditions for you to run the model predictions. But the initial conditions are not generated by the model. They are generated by you and the staff at DMI, correct?

The initial conditions are generated by the model using state-of-the-art atmosphere physical knowledge.

Although our interchange left some questions unanswered, I had learned what I wanted to by this point: DMI’s data for the topmost portion of the globe, north of 80 degrees latitude, while a hodge-podge plagued with its own set of issues, was far, far more reality-based than the Arctic data published by NASA/GISS and thus the lesser of two evils.

I will be in the Sierras and away from my computer for the next 10 days.


About Harold Ambler

I was obsessed with weather and climate as a young boy and have studied both ever since. I have English degrees from Dartmouth and Columbia and began my career in journalism at The New Yorker magazine, where I worked from 1993 to 1999. My work has appeared in The Wall Street Journal, The Huffington Post, The Atlantic Monthly online, Watts Up With That?, The Providence Journal, Rhode Island Monthly, Brown Alumni Monthly, and other publications.

Visit Harold’s website :Talking About the Weather

And hit the tip jar if so inclined. -Anthony

75 thoughts on “GISS Arctic -vs- DMI Arctic: differences in method”

  1. Name of book, please.
    And I’d like to salute your courage for the post at the Puffington Host.

  2. 1200 km smoothing vs. actual Arctic temperature measurements. It is unfortunate that our tax dollars fund GISS and NOAA when DMI’s published metric for the Arctic is superior.

  3. Huh? Using real observed temps in the Arctic to determine the temps? Unheard of!! Do you think if we asked pretty please, Mr. Dybkjær could share this new found methodology with Hansen? I think he’s on to something!

  4. Since Big Oil pumps mega millions into the sceptics’ activities, the real warmists are stuck with no funding. They must resort to armchair science. It is warm inside and cold up there. Nobody would want to actually go to the Arctic. It is cold and dangerous. We have models that work very well.

  5. Has anyone else noticed that, while there is great variability throughout the data range, the ‘actual’ measurement almost always seems to intersect the average curve on the day where the average crosses 273.15? It seems to do this in 6 out of 7 years.

  6. Why would NASA/GISS want real data that removes their justification for claiming that this is the warmest day/month/year/decade since the last ice age? As a US taxpayer, I wonder if anyone from Congress or the Executive branch will ever call for open, accurate collection and reporting of weather and climate information.

  7. This is fantastic and so nice to finally know how things are actually recorded. So why do we pay tax dollars to NOAA to give us estimated biased results, when the DMI can give us more accurate information??

  8. A significant question remains: if a “missing M” (the METAR minus prefix) error occurs and a -20.0 C is read as +20.0 C, who can catch the error – if anyone at all – and is that the cause of the rougher winter and fall temp curves?

  9. MattN says:
    July 28, 2010 at 6:27 pm
    And THIS is why this is the #1 science blog….
    I will second that. The education you get at this site is amazing.

  10. Can we compare how well the GISS interpolation method agrees with the DMI data?
    It seems to me that it could possibly falsify the claims that they can accurately interpolate over 1200 km.

  11. Excellent sleuthwork, Harold.
    Note the DMI official’s comments, more than once, about DMI’s specific reliance on the ECMWF, which many meteorologists hint is the best general circulation model in the world.
    Figures DMI is so realistic. They rely on a GCM whose head is usually not in the clouds…plus more direct observations.
    They seem to be more on track than our own GISS-based “GI**.”
    But that’s not saying much. Everybody is more on track than GI**.
    Norfolk, VA, USA

  12. Sounds like a good process, despite the lack of data. Of course, with 100 measurements for the region, it’s probably a higher density than most areas covered by GISS worldwide.
    I wonder if an observation-coupled analysis has ever been tried world-wide? I’d be curious how it would differ from the major temperature indices…

  13. kim says:
    July 28, 2010 at 6:07 pm
    Name of book, please.

    Book is called Don’t Sell Your Coat and should be out on Kindle by October.

  14. Some of the spikes in the record look extraordinarily sharp…..
    The spikes are normal. They are from eddies. Richard Lindzen talks a little about that in this video

  15. It is a relief to know they are trying to use as much data as they can unlike GISS and some others that are trying to use as little data as they can.

  16. Ed Caryl says:
    July 28, 2010 at 7:59 pm
    Better yet, can we fire Hansen’s GISS and hire DMI?

  17. The Cheshire Sun grins;
    The oceans oscillate cool.
    Don’t sell your coat, please.

  18. It’s important to remember that DMI and GISS are doing two different things.
    The DMI is trying to establish current meteorological conditions, using a range of data sources irregularly distributed in space and time, as a starting point for weather forecasts. The GISS is trying to establish the difference in temperature between current conditions and those 100 or more years ago. There is no doubt that the DMI current estimates are more accurate than those of GISS, but how do you compare these estimates with temperature values from a time when there were no planes or satellites and no person had set foot on the North Pole?
    Given the vast cost to the world of climate change, either in useless mitigation measures or climate induced disasters depending on who you believe, we spend remarkably little on trying to estimate the changes. Two things are needed:
    1. All climate stations currently used should be visited, photographed, and accurately located by GPS. All metadata (station history etc) should be retrieved and taken account of in the analysis.
    2. Data not currently used in the analysis should also be collected. This would enable the relationship between key stations and grid square values to be more accurately estimated than at present. Another example might be to use a combination of nearby rural stations with, say, 5 to 10 years of record to estimate the urban heat island effect for a long-term station.
    It would not be easy and it would take time, but the cost could be justified. In the UK it was announced that the average household energy bill is expected to increase by £300/year ($500/year) to combat climate change. This is equivalent to £6 billion ($9 billion) a year. To spend a few tens of millions to understand how the world’s climate has really been changing would seem to make good sense.

  19. Why can’t there be a grid of automatic winter stations set up each year and a flotilla of ship-based stations in summer if it’s so darn end-of-the-world important? Canada had been completely photographed one 80 sq mile snapshot at a time with 60% overlap to get stereo, magnetically and electromagnetically surveyed by airplane, and geologically mapped on the ground largely by foot and canoe before there were satellites (I mapped several thousand sq miles of geology in Manitoba and Nigeria in the 50s and 60s). Now that was data gathering! No armchair computer games.

  20. Is the Arctic finally warming back up to the temperature it was 150 years ago?
    After 157 years the ice finally melted enough for explorers to find the remains of the HMS Investigator, a British ship that “almost” made it through the North West Passage in 1852 and 1853 but got stuck in the ice in Mercy Bay and eventually sank there, while looking to see if they would run across the remains of the Franklin expedition.
    So, does that mean it has finally gotten as warm as it was 10 years ago when they sailed in this area and got trapped in the ice? Or?

  21. Correction — as warm as it was 150 years ago (not 10 years ago, add the 5 in)

  22. GISS reports thermodynamically impossible temperatures at the North Pole. There is no way that the temperatures over the ice can average more than about 1 or 2C, because the heat required to melt the ice buffers the temperature.
    GISS fails basic physics when it comes to Arctic temperatures. There is not much reason to believe they do better anywhere else.

  23. Are you sure this Gorm Dybkjær is a real climate scientist? He seems keen to be helpful, and tries to be as precise as he can, while admitting that there are some things that he just doesn’t know.

  24. We don’t get much sea ice around New Zealand, so I may be off track here.
    It is my understanding that sea ice melts at a temperature of approximately -1.8C. Looking at the picture of the North Pole buoy there is obviously a layer of snow sitting on top of the sea ice. I assume that this snow is entirely made up of fresh water and will melt at 0.0C. Now, one thing I do know is that snow doesn’t melt until the entire snow column from surface to hard ground is at 0.0C.
    However, because this snow is sitting on sea ice, it can never reach that temperature until the sea ice beneath it is already above its own melt point. It therefore seems logical that a mixing layer will form at the junction of the snow and sea ice that will consist of ice that is less saline than sea ice and more saline than pure snow, and will have a melting point somewhere between -1.8C and 0.0C. In turn this will prevent the surface of the snow from ever going above 0.0C until the ice below it has melted.
    Similarly, any air at a higher temperature than 0.0C passing over such a cold surface will pass its heat content to that surface. Unless that air can somehow attain a laminar flow while passing over the surface (extremely unlikely), the turbulent nature of air movement will also ensure that the air temperature immediately above the ice will be at 0.0C or lower.
    Although slightly warmer temperatures will be possible at higher altitudes, or right at the edge of the ice, the layer of air close to the surface (say at the standard height of a Stevenson screen) is limited to 0.0C or lower. Hence any modelled, interpolated, extrapolated, guessed, estimated or wished for temperature over the ice is also limited to 0.0C or lower and any claims that show a warmer temperature must be dismissed.

  25. Aren’t these Scandinavians narrowminded little people, gathering and collecting like ants all the information there is just to be as precise as possible? Isn’t the easygoing American way of life, broadminded and generous, using no information at all to get the desired, completely wrong result, so much better?

  26. …” My guess is that the number of buoys in the Arctic Ocean that provide near-real-time temperature observations for e.g. numerical weather prediction (NWP) models are around 50. ”
    And my guess is that the data from these buoys is less than accurate. Is there a list?

  27. Binny says:
    July 28, 2010 at 10:48 pm
    Are you sure this Gorm Dybkjær is a real climate scientist? He seems keen to be helpful, and tries to be as precise as he can, while admitting that there are some things that he just doesn’t know.
    You mean Real Climate scientist and NO they will never be that good 🙂

  28. Sera says:
    July 28, 2010 at 11:37 pm
    …” My guess is that the number of buoys in the Arctic Ocean that provide near-real-time temperature observations for e.g. numerical weather prediction (NWP) models are around 50. ”
    And my guess is that the data from these buoys is less than accurate. Is there a list?
    Less accurate than what? Hansen’s crystal ball? They have got to be more real than virtual thermometers and more accurate than 1200km grids.

  29. I only found 27 buoys inside 80°N:
    25593, 25594, 25595, 25624, 25626, 25629, 26558, 26559, 47532, 47533, 47613, 48533, 48534, 48548, 48555, 48558, 48595, 48596, 48621, 48647, 48672, 48673, 48683, 48684, 48691, 65901, 65902.
    All can be pulled at this site; just enter the buoy number. You can then export the data to Excel. This is raw data, for entertainment purposes only.

  30. Good to see that the DMI model is at least partly reality based, although because temperature is the result of many climate mechanisms driven by deterministic chaos, even their measure will be subject to much inaccuracy. The data granularity is far too coarse to set realistic initial conditions and the algorithms they use are entrained to find the variation they expect. Models are a poor method to use for understanding our weather/climate. Money needs to be invested in improving the number/quality of measuring instruments and the focus shifted from temperature change to system energy movement.

  31. Well, the coverage of DMI’s numerical weather prediction models can be seen here:
    They do not seem to provide public information about observations. NWP models still have a lot of problems over snow and ice, but they are probably better than simple interpolation anyway, and DMI definitely has experience in this field. I would suggest looking at ECMWF’s analysis data over the Arctic; anyone from a European met research centre should have access to this.

  32. @stevengoddard
    “GISS reports thermodynamically impossible temperatures at the North Pole. There is no way that the temperatures over the ice can average more than about 1 or 2C, because the heat required to melt the ice buffers the temperature.”
    Surely the conclusion from that is that *surface* temperature in sea ice areas is a useless number? It will always be within spitting distance of zero degrees C during the melt season, precisely because a mixture of ice and water will hold at zero degrees until all the ice is gone, whether you put it in the fridge or in the oven. This is the reason for the flat section in the middle of the DMI seasonal graph.
    If you want to find *out* whether it’s in the fridge or the oven (i.e. how quickly it’s going to melt), you need to take the temperature some distance away from the ice surface. Short of somehow putting sensors on tall stilts all over the Arctic basin, the most reasonable way to do this is to extrapolate from the nearest land station…
    In other words: is it simply that GISS is not trying to measure the *surface* temperature in sea ice regions, because surface temperature is a meaningless measure in those regions?

  33. Ron Manley says:
    July 28, 2010 at 8:37 pm
    … The GISS is trying to establish the difference in temperature between current conditions and those 100 or more years ago.

    Which is surely one example of the futility of climate science (as currently practised). We don’t know what we don’t know. It appears that DMI is giving us an accurate reading of the state of the Arctic, as best they can given the lack of fixed reporting points. GISS is not making any attempt to do anything of the sort; they are simply taking the output of the reporting points and then “guessing” what this means for the rest of the Arctic.
    And since Hansen and his pals have already bought into the idea of AGW the opportunity for bias, even unconscious and even giving them all the benefit of the doubt that we can muster, is too great for the outputs to be taken on trust.

  34. Followup: A quick Google shows I’m on the right lines here.
    “Areas covered occasionally by sea ice are masked using a time-independent mask.” <- presumably this also applies to areas covered permanently with sea ice(?)
    So why do they mask surface data from areas covered with sea ice? That seems to be covered by
    “ice was present, […] making water temperatures a bad proxy for air temperatures”
    I guess you could quibble that GISS should be more explicit about why they extrapolate from land data to fill in over sea ice areas, but if I can find it with 5 minutes’ Googling, I imagine someone more knowledgeable in the field would be aware of it as a matter of course.

  35. Sera says:
    July 29, 2010 at 1:05 am
    This buoy has current air temps at -26°C. Four hours ago it was +26°C.
    Ouch. I wonder if there is some screening of this data for missing Ms before it is entered into the model? Hard to do that automatically when temperatures are closer to zero, though…

  36. Sera says:
    July 29, 2010 at 12:55 am
    “This is raw data, for entertainment purposes only.”
    You forgot to say “Sarc off” 😉 (I hope!)
    Made me laugh anyway…
    If only the “Gang” could use the expression “the best guess of”!

  37. Sera says: July 29, 2010 at 1:05 am

    This buoy has current air temps at -26°C. Four hours ago it was +26°C.

    Looking at the hourly data for the last 10 days, about 19 out of 226 air temperature readings have a positive number. Ignoring the signs, the values range between 26.0 and 26.5. So it looks as if the old thread GISS & METAR – dial “M” for missing minus signs: it’s worse than we thought applies here too. You would hope DMI would catch such things.
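A crude screening for this failure mode is easy to sketch. The function below is purely hypothetical (nothing here suggests DMI or ECMWF use anything like it): it flags positive readings whose negation sits close to the series median, the signature of a dropped minus sign.

```python
def flag_missing_minus(temps, tol_c=2.0):
    """Flag indices of positive readings whose negation lies within tol_c
    of the series median, suggesting a dropped minus sign (the METAR
    'missing M' problem). Illustrative only, not an operational QC check."""
    ref = sorted(temps)[len(temps) // 2]   # median as a robust reference
    return [i for i, t in enumerate(temps)
            if t > 0
            and abs(-t - ref) < tol_c       # negated value fits the series
            and abs(t - ref) > 2 * tol_c]   # raw value clearly does not

# Hourly buoy series in which two readings lost their minus sign:
series = [-26.2, -26.0, 26.1, -26.3, 26.4, -26.1]
flagged = flag_missing_minus(series)
print(flagged)   # indices of the suspect readings: [2, 4]
```

As the commenter notes, this kind of check only works when the series sits well away from zero; near the freezing point a dropped sign is indistinguishable from a real fluctuation.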

  38. Regarding:
    “Sera says:
    July 29, 2010 at 1:05 am
    @stephen richards says:
    July 29, 2010 at 12:54 am
    This buoy has current air temps at -26°C. Four hours ago it was +26°C.
    Hansens cryball beats the buoy.”
    Oh, come on! Obviously that thermometer is broken. It has had the same reading for days, and only occasionally drops the minus sign.
    Of course, Hansen would exclude all the data with the minus sign, and pounce on the readings without the minus sign.
    Did you mean to say, “Hansen’s cryball?” I sort of like that.

  39. Thanks Binny, Alexej Buergin & others for showing me the difference between a scientist and a High Priest!

  40. Head shakers!
    I’m getting to think that climate should be done by the square foot and not by the global standard.

  41. But…But…the polar anomalies are HIGHLY correlated! They HAVE to be right – we said so!! Pay NO attention to the DMI…
    [sigh] meanwhile in Vostok, Antarctica…
    Vostok, Antarctica (Airport)
    Low Drifting Snow
    -100 F
    They are anticipating wind chills in the -130 F range this weekend, but I would imagine that wind chill loses its meaning at those mind-numbingly low temperatures…

  42. It is refreshing to see a scientist actually use the data that is available, and acknowledge that sometimes there is not a lot of data. I will put a lot more faith in anything that comes out of the Danish office in the future.

  43. This is extremely important information. It shows the cavalier attitude of GISS towards science and the cavalier attitude of most of our journalists towards news.

  44. Frank K – ‘They are anticipating wind chills in the -130 F range this weekend, but I would imagine that wind chill loses its meaning at those mind-numbingly low temperatures…’
    I’m guessing +1C probably will not make much difference then? Sorry, I forget – is the message this week that global warming is global or local?

  45. These warmists wonder why we don’t believe their “It’s warming fastest at the poles” nonsense. Remember – never believe your lying eyes!

    “The CERES data published in the August BAMS 09 supplement on 2008 shows there should be even more warming: but the data are surely wrong. Our observing system is inadequate.” – Kevin Trenberth (“…it is a travesty…”)

    Have you ever wondered why Warmists are NOT delighted with signs of cooling? The answer is – agenda which nature sticks its big finger at time and again.

  46. Thanks very much for all the detail about the DMI’s information gathering and analysis methods.
    I’m curious about the statement that the DMI index is the “mean of all model grid points in a regular 0.5 degree grid.” As Dr. Dybkjaer indicates, because of the convergence of longitude lines near the pole, the density of gridpoints increases as one nears the pole, and this biases the index towards the temperature near the pole, as opposed to representing a true regional average.
    Perhaps you could ask a followup question concerning the reason why DMI prefers to form its index using this method, as opposed to an area-weighted average.

  47. I get the “trick” of GISS possibly claiming that surface measurements would be useless because they will always be around 0 over ice, and that they need measurements at some elevation above the ice. Of course, taking a temperature from hundreds of miles away is a meaningless proxy that only a climate scientist would think is workable. Every high school student taking a science class would fail any experiment based on such a shoddy manipulation of data. Above all else, this proves without a shadow of a doubt that GISS is not a scientific organization.

  48. “It seems to me that it could possibly falsify the claims that they can accurately interpolate over 1200 km.”
    Not really. There is no claim to falsify. There is no claim of accuracy. What there is, is the following: an observation that in northern latitudes the correlation between stations falls below 50% (on average) when you exceed 1200km.
    So: if you have site A located 1000km from site B, you will find this. Looking at the temperatures over time, when site A goes up, site B tends to go up. When it goes down, site B goes down. They are correlated. Now comes the question:
    Can I draw a box (a grid cell on the globe) around site A and site B, and average A and B to come up with an estimate for that box? Well, in GISS the answer is yes.
    There is no claim as to the accuracy of this decision (the error due to gridding). There is no reason to believe that this biases the estimate. Some of us quibble with the figure of cutting the correlation metric off at 50%, but folks can test the effect of varying that parameter.
    The biggest issues with assimilating the DMI information into a global assessment are:
    1. Length of the record.
    2. Homogeneity (changing data sources).
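The procedure described in this comment (correlate two station records and, if the correlation clears the cutoff, average them into a grid-cell estimate) can be sketched in a few lines. The station anomaly series here are invented, and the 50% cutoff is the figure mentioned above; this is an illustration of the idea, not GISS's actual gridding code.

```python
from statistics import mean, stdev

def correlation(a, b):
    """Pearson correlation of two equal-length series."""
    ma, mb = mean(a), mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)
    return cov / (stdev(a) * stdev(b))

# Invented monthly anomalies for two stations well inside 1200 km:
site_a = [0.3, -0.1, 0.8, 1.2, -0.4, 0.6, 0.1, -0.7]
site_b = [0.2, 0.0, 0.6, 0.9, -0.2, 0.7, -0.1, -0.5]

r = correlation(site_a, site_b)
if r > 0.5:   # the 50% cutoff mentioned in the comment
    # Both stations inform the grid cell: average them point by point.
    grid_cell = [(x + y) / 2 for x, y in zip(site_a, site_b)]
```

Note that the cutoff governs only whether stations are combined at all; as the commenter says, it carries no claim about the accuracy of the resulting grid-cell value.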

  49. Shevva says:
    July 29, 2010 at 6:41 am
    “I’m guessing +1C probably will not make much difference then? Sorry, I forget – is the message this week that global warming is global or local?”
    There is no message here…just observing. Besides, as Steve Goddard pointed out, apparently some scientists believe Vostok is melting…
    By the way, Vostok has now warmed to -99F…

  50. Steven (July 29, 2010 at 9:40 am)
    “What there is is the following. An observation that in northern latitudes the correlation between stations falls below 50% ( on average) when you exceed 1200km.”
    But I’m guessing that the measurement values which produced the above-cited correlation-vs.-distance rule, were only from land-based stations. It seems to me that the temperature over ice-covered areas will tend to go not far above 0 deg C, so a correlation between a land-based station and an icy sea area would likely not hold during summer months. As to the sea surface temperature, I don’t think that would necessarily be well approximated by extending the land anomaly either; I’d expect sea temperatures to change seasonally at a slower rate than land, and for its anomalies to be lower as well.

  51. The identifying mark of real data is the warts. When you see data with jumps and other flaws, that means it’s real. It may take work, even genius, to deal with the flaws but it’s worth it. The DMI data has the kind of warts I look for.
    Phony (or overly “massaged”) data always looks clean (no warts — nothing to arouse suspicion). It’s easy on the eyes and easy on the brain. It tells a simple story because it is a simple story.
    All scientists struggle with how much to simplify/clarify their data. There is almost no limit to the data massaging methods available. Discarding outliers is the classic unsettling method. Scientists lose real sleep whenever they do this. Many of the errors produced by the AGW alarmists are subconscious.
    GISS uses 1200 km smoothing because without it, their data shows large voids and they are unwilling to let those warts show. They could fill in with the DMI data but that would mean mixing data sets and that has its own issues. Their choice of 1200km smoothing is weak but understandable.
    Thank God for the internet and for Anthony. This is where the real peer review is occurring. Too bad the AGW alarmists don’t “play well with others.”

  52. Harold Ambler,
    Just to clarify a few points here. The data that DMI plot are the initial conditions from the ECMWF (European Centre for Medium-Range Weather Forecasts) weather forecasting model, i.e. they are the t+0h data used to start the ECMWF weather forecast (widely acknowledged to be the most accurate in the world). DMI have area-averaged all of the grid points north of 80N and plotted that; i.e. the data they use come from ECMWF, but DMI produce the graph.
    The initial conditions of the ECMWF forecast model (and all weather forecast models) are arrived at via a process of ‘data assimilation’.
    This is a complicated mathematical process whereby observations from many sources (surface stations, buoys, ships, radiosondes, satellites etc) are combined (using estimates of observation error for each instrument) with a previous very short range weather forecast (e.g. a 3 hour forecast made 3 hours ago) using estimates of forecast error, to arrive at a best estimate as to the current state of the atmosphere.
    This is done for temperature, pressure, humidity, winds, clouds etc and the result must be physically consistent, i.e. the winds should be consistent with the pressure field (approx geostrophic winds etc). The technique currently used is “4-Dimensional Variational Assimilation” (“4D VAR”). Essentially it is a very intricate least squares fit.
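    The “least squares fit” idea can be caricatured in one variable. Real 4D-Var minimizes a cost function over millions of variables subject to the model dynamics, but the core weighting principle is the same as in this scalar sketch (an illustration only, not ECMWF’s implementation):

```python
def blend(background, sigma_b, obs, sigma_o):
    """Least-squares blend of a short-range forecast value with an observation.

    Each value is weighted by the inverse of its error variance; the result
    minimizes J(x) = (x - background)^2/sigma_b^2 + (x - obs)^2/sigma_o^2.
    """
    wb = 1.0 / sigma_b ** 2  # confidence in the prior forecast
    wo = 1.0 / sigma_o ** 2  # confidence in the observation
    return (wb * background + wo * obs) / (wb + wo)

# Forecast says -1.0 C (error ~1 C); a buoy reports -3.0 C (error ~2 C).
# The analysis leans toward the more trusted forecast value.
analysis = blend(-1.0, 1.0, -3.0, 2.0)
```

    The analysis always lands between the forecast and the observation, closer to whichever has the smaller assumed error; that is the sense in which assimilation combines sources “using estimates of observation error for each instrument.”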
    For those interested, there are some advanced tutorials here:
    And the full system currently used is documented here (but it is fairly tough reading for all but mathematicians):
    Why combine real observations with a model? Observations are patchy, but the model needs values globally and through the whole depth of the atmosphere in order to run at all. Data holes can be filled in using data from a short-range forecast (which has values everywhere). The errors in a 3-hour-old weather forecast are generally very small, but if the forecast was initialized from poor starting conditions it will inevitably carry errors forward. The model can act to transport information from data-rich regions to data-sparse regions. e.g. if air mass characteristics are well observed when over a continent, then when the air moves over a data-sparse region, the model still has a pretty good grasp of the air characteristics and how they will evolve in the new region.
    Analyses reached via data assimilation are generally regarded as the best estimate we have for the state of the atmosphere as they combine data from many sources in an intelligent way. However, they are not perfect, and while large scale upper air features (jet streams etc) are very well captured, details in the lowest 2m layer adjacent to the surface may have larger errors.
    The other line plotted, labelled ‘ERA-40’, is the climatology derived from the ECMWF ReAnalysis project. A reanalysis is when you go back and perform data assimilation and short-range weather forecasts for archived observation data going back many years. e.g. the old observations from June 23rd, 1960 are still archived, and ECMWF go back and use their latest state-of-the-art assimilation and forecasting system to produce analyses for that day. ERA-40 did this for an entire 40-year period, building up daily weather charts (produced by combining all available observations using state-of-the-art techniques). You can then calculate averages for this period and use it as a reference climatology (as was done in the graph at the top of the page).
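    The climatology step itself is just averaging each calendar day across the reanalysis years. A minimal sketch (hypothetical record format, not the actual ERA-40 processing chain):

```python
from collections import defaultdict

def daily_climatology(records):
    """Average each calendar day across years to build a reference climatology.

    records: iterable of ((month, day), value) pairs spanning many years,
    e.g. daily polar-cap mean temperatures from a reanalysis.
    Returns a dict mapping (month, day) -> multi-year mean for that day.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for day, value in records:
        sums[day] += value
        counts[day] += 1
    return {day: sums[day] / counts[day] for day in sums}

# Three years' worth of June 23rd values collapse into one climatological point.
clim = daily_climatology([((6, 23), -0.5), ((6, 23), 0.3), ((6, 23), 0.2)])
```

    Repeating this for all 366 calendar days yields the smooth green reference curve against which each year’s daily trace is compared.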
    Hope that helps.

  53. I was obsessed with weather and climate as a young boy and have studied both ever since. I have English degrees
    I was obsessed with English literature and poetry as a young boy, but I now hold degrees in atmospheric science. 😀
    So what does DMI say about AGW and polar amplification?

    Bemused, thanks for this statement: “details in the lowest 2m layer adjacent to the surface may have larger errors.”
    Indeed, reanalysis datasets have many problems with surface variables, and the surface temperature from ERA-40 (ECMWF), as well as the NCEP/NCAR and JRA-25 reanalysis products, suffers from large errors. A large number of papers have been written on the accuracy of these reanalysis datasets. The folks at DMI understand the limitations of the data. It would be good, when data from institutions such as DMI are shown on WUWT, for a caveat about data accuracy to be given as well, so that readers can wisely interpret the significance of the results.

  55. Julienne Stroeve says:
    July 29, 2010 at 12:42 pm
    Are you saying that short term re-analyses are full of errors but it’s OK to rely on 100 year projections from GCM’s? How wisely should we interpret the results of the latter?

    I believe climate models are useful for trying to understand how different processes impact the climate system, trying to model feedbacks, etc., but I wouldn’t expect their surface temperature record over the last 100 years to be entirely accurate. I look at them as qualitative rather than quantitative estimates.
    For example, my comparison between GCM modeled Arctic sea ice extent and the actual observations shows that while the models qualitatively get the decline correct, none of them are able to reproduce how quickly the ice has declined during the last 50 years (

    The data before 2002 that go into calculating the green mean line come from the ERA-40 dataset. When you look at the lines from years before 2002, summer temperatures are remarkably close to that green line – there is much less variability.
    This suggests to me that the ERA-40 dataset is perhaps less precise than the current data, and the green mean line (which just reflects slight divergences from 0°C) is perhaps not realistic. I assume the ERA-40 data were based just on WMO data, which may not have been as broadly sourced as current data. Bemused seems to know more than I do (though I would add that I understand the ECMWF model is only “better” because it runs later, and therefore has access to more observations – something you can’t afford to do if you need to provide short-range forecasts).

  58. Julienne Stroeve says:
    July 29, 2010 at 7:48 pm
    Thanks for the link to your paper. I have read the paper (not very thoroughly) and hope that your conclusions will be taken to heart by the modellers. It is sad to see just how far off (and, except for HadCM3, how ludicrously similar) the models are.
    I’m no expert, but in my view two critical factors appear to be missing from the models. The first is a physical model of ice drift caused by current/wind, which leads to expulsion of the ice at the periphery of the Arctic. The second relates to the relative importance of melting from below. This appears not to be a factor in the models, whereas in places like the Barents Sea (and possibly the Laptev Sea, from relatively warm Siberian river water input) it may be the dominant factor in ice melting. I suspect that relatively warm water flowing under ice at low speeds removes much more ice than a faster warm wind above the ice.

Comments are closed.