GISS Polar Interpolation

By Steve Goddard

There has been an active discussion going on about the validity of GISS interpolations. This post compares GISS Arctic interpolation vs. DMI measured/modeled data.

All data uses a baseline of 1958-2002.

The first map shows GISS June 2010 anomalies smoothed to 1200 km. The green line marks 80N latitude. Note that GISS shows essentially the entire region north of 80N up to four degrees above normal.

The next map is the same, but with 250 km smoothing. As you can see, GISS has little or no data north of 80N.
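For readers unfamiliar with how the smoothing works: the Hansen and Lebedeff (1987) scheme behind the GISS maps weights each station's anomaly linearly down to zero at the smoothing radius, so a 1200 km map can paint a grid cell from stations hundreds of kilometers away, while a 250 km map leaves the same cell grey. A minimal sketch in Python, with invented station distances and anomalies:

```python
# Sketch of GISS-style distance-weighted smoothing. The station distances and
# anomalies below are invented for illustration; per Hansen & Lebedeff (1987),
# a station's weight falls linearly from 1 at the grid point to 0 at the
# smoothing radius (1200 km or 250 km).

def station_weight(dist_km, radius_km=1200.0):
    """Linear taper: 1 at the grid point, 0 at the radius."""
    return max(0.0, 1.0 - dist_km / radius_km)

def smoothed_anomaly(stations, radius_km=1200.0):
    """Weighted mean anomaly at a grid point from (distance_km, anomaly) pairs."""
    pairs = [(station_weight(d, radius_km), a) for d, a in stations]
    total_w = sum(w for w, _ in pairs)
    if total_w == 0.0:
        return None  # no station within the radius -> grey (missing) cell
    return sum(w * a for w, a in pairs) / total_w

# A grid point north of 80N whose nearest stations sit ~900-1100 km away:
stations = [(900.0, 3.5), (1100.0, 2.0)]
print(smoothed_anomaly(stations, 1200.0))  # painted from distant stations
print(smoothed_anomaly(stations, 250.0))   # None: grey at 250 km smoothing
```

This is why the same grid cell can show a +3C anomaly at 1200 km smoothing and no data at all at 250 km: the value is carried in entirely from stations outside the cell.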

Now let’s compare the GISS 1200 km interpolation with the DMI data for June 2010.

Daily mean temperatures for the Arctic area north of the 80th northern parallel, plotted with daily climate values calculated from the period 1958-2002.

http://ocean.dmi.dk/arctic/meant80n.uk.php

DMI shows essentially the entire month of June below the 1958-2002 mean. GISS shows it far above the 1958-2002 mean. Yet GISS has no data north of 80N.

Conclusion: GISS Arctic interpolations are way off the mark. If they report a record global temperature by 0.01 degrees this year, this ↑↑↑↑↑↑↑ is why.

——————————————————————

Straight from the horse’s mouth.

the 12-month running mean global temperature in the GISS analysis has reached a new record in 2010…. GISS analysis yields 2005 as the warmest calendar year, while
the HadCRUT analysis has 1998 as the warmest year. The main factor is our inclusion of estimated temperature change for the Arctic region.

- James Hansen

In other words, the GISS record high is based on incorrect, fabricated data. Why did Hansen apparently choose to ignore the DMI data when “estimating” Arctic temperatures? GISS Arctic anomalies are high by as much as 4 degrees, and yet he claims a global record measured in hundredths of a degree. As Penn and Teller would say …. well I guess I can’t say that here.

155 thoughts on “GISS Polar Interpolation”

  1. Hansen does it because those he feeds this information to and who want it to say warm, aren’t going to don their parkas and mittens and check it out for themselves. It is my opinion that he ignores conflicting evidence because his groupies don’t want conflicting evidence. He is the consummate yes man, which is why he is so attracted to protests. Hansen thinks he is leading, when actually he is the ultimate follower.

  2. Sorry this is off topic but I thought that some people here might appreciate an article that a gentleman, Brendan O’Neill, has written about the Afghan Wikileaks, and a comment he made in it…

    “Truth becomes, not something we find out through critical study and investigation, but something we are handed by external forces … this is Truth as a religious-style revelation rather than Truth as the endpoint of thought, interrogation, question-asking, analysis. In reality, it is only through actively engaging with the world and its problems, through gathering facts and objectively analysing and organising them, that we can arrive at any Truth worth its name,”

    Original article here:- http://www.spiked-online.com/index.php/site/article/9348/

    Just thought the quote was quite near the bullseye in more ways than one.

  3. Lack of data is not a problem. Extrapolate favorable data.
    When people get caught with dishonesty, they usually attack, call names and try to punish the accuser.
    The scientific method calls for observation. The warmist science calls for wishful thinking.

  4. As Elmer Fudd aims his shotgun, “oh wooky there is anover wabbit!”. Sure glad my thermostat at home works better than the imaginary ones GISS has above 80N.
    No insult intended for Elmer Fudd. Just a tad of fun poked at Mr. Hansen.

    Bill Derryberry

  5. Very interesting. I don’t trust GISS. However I expect DMI will now come under attack. How do they come up with their numbers?

    On their site they explain:

    “Calculation of the Arctic Mean Temperature
    The daily mean temperature of the Arctic area north of the 80th northern parallel is estimated from the average of the 00z and 12z analysis for all model grid points inside that area. The ERA40 reanalysis data set from ECMWF, has been applied to calculate daily mean temperatures for the period from 1958 to 2002, from 2002 to 2006 data from the global NWP model T511 is used and from 2006 to present the T799 model data are used.

    The ERA40 reanalysis data, has been applied to calculation of daily climate values that are plotted along with the daily analysis values in all plots. The data used to determine climate values is the full ERA40 data set, from 1958 to 2002.”

    It would be interesting to know how many “grid points” they have and use.
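DMI does not say on that page how many grid points enter the average, but the calculation they describe (averaging the 00z and 12z analyses over all model grid points north of 80N) would, on a regular latitude-longitude grid, need cosine-of-latitude weighting so the crowded longitude circles near the pole don't dominate. A sketch of such a mean, with an invented three-point grid and invented temperatures:

```python
import math

# Hypothetical sketch of the area-weighted mean DMI describes: average the
# 00z and 12z analyses at each grid point north of 80N, weighting by
# cos(latitude) so grid points near the pole (where longitude circles shrink)
# don't dominate. The grid and temperatures below are invented.

def mean80n(points):
    """points: iterable of (lat_deg, t00z_kelvin, t12z_kelvin), lat >= 80."""
    num = den = 0.0
    for lat, t00, t12 in points:
        w = math.cos(math.radians(lat))  # area weight on a lat-lon grid
        num += w * 0.5 * (t00 + t12)     # daily mean from the two analyses
        den += w
    return num / den

grid = [(80.5, 272.0, 273.0), (85.0, 270.0, 271.0), (89.5, 268.0, 269.0)]
print(round(mean80n(grid), 2))
```

Note how the 80.5N point carries roughly nineteen times the weight of the 89.5N point: most of the "north of 80N" area lies near 80N, not near the pole.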

  6. About a week ago I wrote an article for our local paper about the inconsistencies in the June temperature data and I was besieged with letters saying I was wrong or a quack. I sent the arctic temperature set, plus other data quirks, as an example of the heavily “massaged” data used to come up with the “record warmth”. That in turn silenced most of my critics, so Steve, glad to read your post and analysis.

  7. What is with the pie slice of blue above 80 N? Am I colorblind or does that not match anything on the color bar? If that is not a plotting issue, it’s nice to see they have no gradient between a positive 2-4F anomaly and a large negative anomaly. Nope, no interpolation artifacts….evar.

  8. Why don’t they just ignore all actual temperature measurements and model the entire planet? Are all these guys Art History majors?

  9. Steve Goddard: “In other words, the GISS record high is based on incorrect, fabricated data. . .”

    Is it not time for a Congressional investigation into such revelations of fraudulent, agenda-driven science at taxpayer-funded agencies?

    Maybe if the Republicans can win back at least one of the two houses of Congress, we can find a champion willing to hire expert, impartial staff and hold public hearings on television. It’s time the American people were given the opportunity to see how the wool has been pulled over their eyes by their own employees.

    /Mr Lynn

  10. In a sense you don’t need the DMI data, it’s just a distraction. Your 2 images of real data sites say it all. Lies, more lies, and more lies. The Obama administration is so totally desperate to kill the US economy that it will do almost anything to maintain the current AGW understanding and that is really stupid and unnecessary.

  11. I was actually wondering last night how NOAA can get away with making up data for the Arctic regions, when other sources have it below normal. This article just explained why, since NOAA estimates what they don’t know, in order to create a headline favoring warming.
    NOAA has been claiming this is the warmest June, May, April, and I think March on record, but looking at objective satellites 1998 was warmer than this year in those months.
    NOAA seems to be desperate to keep everything warming, creating headlines that instill fear in the public instead of reporting the truth and what’s actually happening.

  12. Steve,
    The link to GISS says June 2012 instead of 2010.
    Unless they are making these maps 2 years ahead of time.

  13. quote: The first map shows GISS June 2012 anomalies smoothed to 1200 km.

    Should change the link name to June 2010. At first I thought – OH NO, now they are extrapolating time as well as temperature!
    But of course they are, with their projections that at this rate, the ice caps will melt by 2150 or whenever, etc, etc…

    Keep after them, Steve

  14. There was a minor typo in the text “GISS June 2012 anomalies smoothed to 1200 km”. It should be 2010 I guess :)

    Kevin G: The grey area color is for missing data, i.e., there are no matching measurements within 1200 km for that pie slice.

  15. Thank you Steve Goddard. I gather that if you published these results and continued your study in much more detail in a paper to be peer reviewed, you may be on the way to falsifying Hansen and Lebedeff’s surely-to-be-infamous 1987 paper, “Global Trends of Measured Surface Air Temperature”, which allowed “lies, damned lies and statistics” to make its way into the public policy decisions that spend taxpayers’ hard-earned money.

    Now what to do about Hansen et al. at NASA GISS who continue to make use of fabricated data in their allegedly scientific profession? I suppose it is one step at a time. I wonder if Hansen et al. will now have to adapt to a new method of fabricating their data, now that their methods have been falsified by being shown to be biased towards their pet alarmist hypothesis by as much as 4 degrees! That’s a LOT of bias. How long do you give Hansen et al. at NASA GISS to come clean and attempt to save their scientific careers from social and legal sanctions? Hmmm..

  16. We need to get this to Wikileaks (sp?), which apparently is the only website the MSM looks at these days.

  17. MODERATOR:

    Please change “GISS June 2012 anomalies smoothed to 1200 km” to ” GISS June 2010 anomalies smoothed to 1200 km”

    Thx much

    [reply] Done. RT-mod

  18. Er – you do realize that the DMI numbers are also based on interpolated model data, right? They’re not raw numbers from a single measurement – that’s why it’s called a reanalysis. Read this to understand where the ERA40 values come from – http://www.mad.zmaw.de/uploads/media/e40Overview.pdf.

    REPLY:
    Oh yes we know that. But the reanalysis data doesn’t seem to have the same sort of problems that GISS has. How can two different techniques show significantly different results? That’s the point. – Anthony

  19. Steven, did you lose all of your “extrapolation is OK” friends? I was rather looking forward to reading another great discussion on how numbers pulled from one’s posterior are preferable to actually reading a thermometer.

  20. Anthony: Why did Hansen apparently choose to ignore the DMI data when “estimating” Arctic temperatures?
    .
    From the DMI website:

    The daily mean temperature of the Arctic area north of the 80th northern parallel is estimated from the average of the 00z and 12z analysis for all model grid points inside that area. The ERA40 reanalysis data set from ECMWF, has been applied to calculate daily mean temperatures for the period from 1958 to 2002, from 2002 to 2006 data from the global NWP model T511 is used and from 2006 to present the T799 model data are used.

    (emphasis mine)

    http://ocean.dmi.dk/arctic/meant80n.uk.php

    So are you, Anthony, arguing for the inclusion of modeled data – as opposed to instrumental data – in global gridded anomaly products?

  21. So, in the absence of data, we just make $#!t up. And then base policy off of it?

    Awesome…

  22. So, what you are saying is that GISS is bad in comparison to DMI because GISS interpolates. Mmmm, because, from your own link:

    “Calculation of the Arctic Mean Temperature
    The daily mean temperature of the Arctic area north of the 80th northern parallel is estimated from the average of the 00z and 12z analysis for all model grid points inside that area. ”

    In other words, you jump all over GISS for using modeling to fill in the blanks, when in fact, the one you favor is also using modeling to fill in the blanks. So, what is your criterion – whichever model shows lower temperatures is right in your book?

    OK, smart guy, if the warming is only a result of the filling in of unknowns above 80 degrees, knock off both poles above 80, do the calculations yourself, and then tell us if it is getting warmer or not. If you are going to criticize a result, please post your own numbers as an alternate and explain why it is more accurate. Please show the difference and explain why it is significant.

  23. What would be even funnier is if GISS lit the Arctic on fire for July when, according to DMI, it appears to be one of the coldest Julys there! :)

  24. Mr Lynn says:
    July 28, 2010 at 7:03 am

    I would think so. America is in desperate need of science that can be depended on.

  25. Steve,

    Is it possible to show their global depiction with all the interpolated/extrapolated areas removed (shown in grey)? Say, using a radius of 20 km for the real data? I used both words because NCDC uses “interpolation” while GISS uses “extrapolation” in their product descriptions.

    This might have been done before but I’d love to see it.

    Keep up the good work!

  26. The GISS interpolations (as they call it) are misrepresentations.
    I call this type of data artificial. i.e. – it may as well be considered noise.
    GISS has no Arctic signal.

  27. BTW, regarding:
    “If they report a record global temperature by 0.01 degrees this year, this ↑↑↑↑↑↑↑ is why.”

    What makes you think they use 1200 km smoothed data when they calculate the temperature?

    What makes you think any one year matters a lot when calculating the trend? It’s the media hyping up the 0.03 or whatever record; the scientists are just reporting the latest numbers.

    What matters is that the earth is still hot, and hasn’t cooled any, despite the fact that we’ve been getting less energy from the sun for some years.

  28. Are we all still keeping track of when Al Gore predicted that the Summer ice at the North pole would all be gone?

    He predicted that it would be in 5 years. That prediction was made back in 2008. So keep that prediction on ice until late summer 2013.

    Fingers crossed for record levels of ice in summer 2013.

  29. Hansen is a buffoon, his once-exemplary GISS/NASA has become [as Feynman put it] a pseudo-scientific Cargo Cult immune to fact, willfully polluting everything it touches. As astro-atmospheric forces hasten towards a 70-year “dead sun’ Maunder Minimum similar to that of 1645 – 1715, Big Jim’s Green Gang of Briffa, Jones, Mann, Trenberth et al. with pilot-fish Romm and Schmidt looks ever more profoundly stupid. Just who do these coprophagic slugs think they are kidding?

  30. Should we be using the word “interpolate” here (meaning determining a value between two data points), or are the values they derive actually “extrapolated” (meaning extending a trend beyond data points into open-ended space)? Because the two are very different. I don’t have that much problem with interpolated data, but please, extrapolated values are generally used for exaggerating a trend to support an agenda. As an engineer and scientist, I’ve seldom seen a situation where extrapolation has any value whatsoever; it is a big red flag!
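The commenter's distinction can be made concrete with a toy fit: a straight line fitted to samples of a curve stays reasonably close to the truth when evaluated inside the sampled range (interpolation), but the error grows without bound when the line is pushed far beyond the data (extrapolation) — the analogue of carrying station readings 1000+ km toward the pole. All numbers here are synthetic:

```python
import math

# Toy illustration of interpolation vs extrapolation: fit a straight line to
# samples of a curve, then compare the error inside the sampled range with the
# error far beyond it. The curve and sample points are entirely synthetic.

def truth(x):
    return math.sin(x / 2.0)          # the "real" field being sampled

def linfit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = cov / var
    return slope, my - slope * mx

xs = [i * 0.25 for i in range(21)]    # samples only on 0..5
slope, icpt = linfit(xs, [truth(x) for x in xs])

err_in = abs((slope * 2.5 + icpt) - truth(2.5))     # inside the data
err_out = abs((slope * 15.0 + icpt) - truth(15.0))  # far outside the data
print(err_in < err_out)               # extrapolation error dominates
```

Nothing in the fit itself warns you that the extrapolated value is wrong; only independent data outside the sampled range (here, the known curve; in the Arctic, buoys or webcams) can reveal it.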

  31. I notice that there are in fact a handful of sites in GISTEMP, when you take the most complete list of stations that they have, that are above 80 degrees North. Those in Canada (e.g. Eureka NWT) do show 2 Celsius per century warming. The 2-4 Celsius warming division (i.e. the red blob) rather flatters those sites of course, since they probably only just sneak into that division. Then they have extrapolated these sites some 1000 km north of course. So, they manage to give the impression that the Arctic is hotter than a furnace based on a handful of sites with slightly higher readings today than 50 years ago (and forgetting that other sites in the same climate region, and some sites quite near to Eureka NWT like Alert NWT, don’t show similar warming).

  32. “How can two different techniques show significantly different results?” – AW

    Get a clue. Different instruments, different algorithms, and you expect identical results? In what way are they significantly different, other than one is a global average and the other only covers the arctic?

    It is the trends that matter, and trends across all the data sets are toward higher temps.

    And why does Goddard pick June?
    Here

    http://data.giss.nasa.gov/gistemp/maps/

    Pick any time period you want and see for yourself if Goddard’s conclusions hold for other months.

  33. Ron Broberg,

    Any calculation of average surface temperature over a geographic region involves some sort of modeling. Your argument is a straw man.

    The point is that DMI data uses actual buoy data north of 80N.

  34. Steve,

    Please point me to where DMI indicates they have continuous buoy measurements above 80N. The point still stands – you’re just comparing two different interpolations in an area with sparse to no data – one from GISS, and one from DMI using model reanalysis to interpolate global temperatures.

  35. @Ron Broberg

    So, you have a problem with what DMI does but not a problem with replacing unknown high-latitude temps with known low-latitude temps?

    Hopefully not, because of intellectual honesty and such…

  36. Ron Broberg says:
    July 28, 2010 at 7:57 am
    Anthony: Why did Hansen apparently choose to ignore the DMI data when “estimating” Arctic temperatures?
    .
    From the DMI website:

    The daily mean temperature of the Arctic area north of the 80th northern parallel is estimated from the average of the 00z and 12z analysis for all model grid points inside that area. The ERA40 reanalysis data set from ECMWF, has been applied to calculate daily mean temperatures for the period from 1958 to 2002, from 2002 to 2006 data from the global NWP model T511 is used and from 2006 to present the T799 model data are used.

    (emphasis mine)

    http://ocean.dmi.dk/arctic/meant80n.uk.php

    So are you, Anthony, arguing for the inclusion of modeled data – as opposed to instrumental data – in global gridded anomaly products?
    —————————————————————–
    I haven’t seen anywhere where that argument has been made. The argument being made is that GISS produces a product that shows dramatic warming in areas where they have little or no data, while a competing product that has more data points shows a very different trend. Even this would be minor, except that GISS explicitly claims that its superior modelling of the Arctic is the driver behind their product showing much more temperature increase than others… when the only other product in competition has more data and different conclusions.

    You’ve created a strawman to argue against.

  37. Chris G says:
    July 28, 2010 at 8:13 am
    BTW, regarding:
    “If they report a record global temperature by 0.01 degrees this year, this ↑↑↑↑↑↑↑ is why.”

    What makes you think they use 1200 km smoothed data when they calculate the temperature?

    What makes you think any one year matters a lot when calculating the trend? It’s the media hyping up the 0.03 or whatever record; the scientists are just reporting the latest numbers.

    What matters is that the earth is still hot, and hasn’t cooled any, despite the fact that we’ve been getting less energy from the sun for some years.
    —————————————————————
    You have a number of strawmen in there. Just to address the last: where did anyone claim that the globe hasn’t warmed? I do believe that the discussion is centered around ‘how much has the globe warmed.’

  38. One metric (DMI) uses air temperature over the ice, while the other metric (GISS) uses SST under the ice.

    Or so I’ve been told.

    Does DMI produce an observational global temperature product from their weather MODEL?

  39. If you don’t like the Arctic interpolation in GISS, then use CRU. This is the reason for the slight difference between the two. CRU just leaves the Arctic blank, along with any other empty grid cells. Given that the stations that ring the Arctic indeed are warming faster than the rest of the world, that probably leaves CRU trending too low.

    Don’t the satellites also show warming over the Arctic that’s faster than the rest of the world, to the extent the satellites can cover it?

    As for the reanalysis product: the ability of those to accurately describe long-term trends in the polar regions has also been questioned. After all, they also are working from sparse data, which is then fed into a model. I don’t know anything about this particular reanalysis though. But it’s a topic about which there has been some discussion in the literature. There was this recent comment and reply, for example, about polar tropospheric trends

    http://www.nature.com/nature/journal/v455/n7210/full/nature07256.html
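The CRU-versus-GISS contrast in the comment above reduces to how empty grid cells enter a global mean: leaving a cell blank implicitly assigns it the average anomaly of the covered cells, while infilling it from the warmer-trending Arctic ring pulls the global mean up. A toy sketch with invented anomalies and equal-area cells:

```python
# Toy comparison (invented numbers) of two ways to handle empty Arctic cells
# in a gridded global mean: drop them (CRU-style) or infill them from the
# nearest covered neighbors (GISS-style). All cells are taken as equal-area.

covered = [0.2, 0.3, 0.1, 0.4]   # anomalies in mid-latitude cells with data
ring = [0.8, 1.0]                # warmer-trending cells ringing the Arctic
n_empty = 2                      # Arctic cells with no stations at all

# CRU-style: empty cells simply don't enter the average -- equivalent to
# assuming they warm at the mean rate of all covered cells.
cru_mean = sum(covered + ring) / len(covered + ring)

# GISS-style: infill each empty cell from the Arctic-ring cells only.
infill = sum(ring) / len(ring)
giss_mean = (sum(covered + ring) + n_empty * infill) / (len(covered + ring) + n_empty)

print(round(cru_mean, 3), round(giss_mean, 3))  # infilling raises the mean
```

Whether the infilled value is closer to reality than the implicit blank-cell assumption is exactly the question the post raises; the arithmetic only shows why the two products must diverge when the ring cells trend warmer than the rest.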

  40. My mistaken attribution above. I should have recognized this as another Goddard post.

    Steve: Any calculation of average surface temperature over a geographic region involves some sort of modeling. Your argument is a straw man.

    So is that a “yes,” you favor using the DMI modeled data (“from 2002 to 2006 data from the global NWP model T511 is used and from 2006 to present the T799 model data are used”), or a “no,” you do not favor using DMI modeled data?

  41. Chris G says:

    “Get a clue. Different instruments, different algorithms, and you expect identical results?”

    Who said “identical”? Except for you?

    Ron Broberg:

    Models vs reality.

  42. Steven Goddard: What program do you use to “wrap” the GISS maps around the globe?

    Additionally, part of the overestimation of Arctic temperature anomalies can be attributed to GISS deleting SST data in areas of seasonal sea ice.

    http://bobtisdale.blogspot.com/2010/05/giss-deletes-arctic-and-southern-ocean.html

    One would also think that there are other major effects, such as: the permanent sea ice has an albedo of “x” but the data that GISS is extending out over the seasonally ice-free portions of the Arctic Ocean onto the permanent sea ice comes from surface station locations with albedos of “y” and “z” and that those areas warm more during the summer than the permanent sea ice. Do you know of any papers discussing this?

  43. Chris G says:
    July 28, 2010 at 8:29 am
    “How can two different techniques show significantly different results?” – AW

    Get a clue. Different instruments, different algorithms, and you expect identical results? In what way are they significantly different, other than one is a global average and the other only covers the arctic?

    It is the trends that matter, and trends across all the data sets are toward higher temps.

    And why does Goddard pick June?
    Here

    http://data.giss.nasa.gov/gistemp/maps/

    Pick any time period you want and see for yourself if Goddard’s conclusions hold for other months.
    —————————
    The question wasn’t ‘why are there differences,’ because, as you note, of course there will be differences. The question is ‘why is the difference so significant?’ GISS shows the June temperature anomaly as +4C, while DMI shows a slightly negative anomaly over the same time period. And, if you look at other months, as you suggest, the discrepancy increases. May: GISS shows a much hotter Arctic, DMI shows no anomaly. Same for April and March. It’s not that they are different, it’s that they are VASTLY different in magnitude and sign.

    And, yes, all trends show things getting warmer. The discussion is on the rate of warming, not that warming is occurring. GISS bases their elevated rate of warming on their Arctic reconstruction, which is at odds with DMI’s.

  44. Goddard

    “Any calculation of average surface temperature over a geographic region involves some sort of modeling.”

    That’s an extremely weak statement. Reanalysis products like ERA40 use numerical weather models.

    Those bear absolutely zero resemblance to the simple weighted averaging done by GISS to calculate temperature anomalies.

  45. Chris G says:
    July 28, 2010 at 8:29 am

    Your arguments are a prime example of why the public does NOT trust NOAA and GISS any more, to be truthful and unbiased agencies. The Gulf isn’t the only place in need of cleanup.

  46. Chris G says:
    July 28, 2010 at 8:13 am

    “What makes you think they use 1200 km smoothed data when they calculate the temperature?”——–Answer: because they said they do. http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2010&month_last=6&sat=4&sst=3&type=anoms&mean_gen=06&year1=2010&year2=2010&base1=1958&base2=2002&radius=1200&pol=reg Sources and parameters: GHCN_GISS_ERSST_1200km_Anom06_2010_2010_1958_2002

    “…. It’s the media hyping up the 0.03 or whatever record; the scientists are just reporting the latest numbers.”

    Yes, and the media hyping is done with the full knowledge and approval of the “scientists.” All the while, we rarely hear from GISS that the hyping needs to be toned down.

    “What matters is that the earth is still hot, and hasn’t cooled any, despite the fact that we’ve been getting less energy from the sun for some years.”

    Or what matters is that the earth has finally warmed up to a more normal temperature. (If you insist on the subjective nature of your statement.) I believe a little warmer would be optimal, but again, that is a subjective statement by nature.

  47. Spellbound says:

    You have a number of strawmen in there. Just to address the last, but where did anyone claim that the globe hasn’t warmed? I do believe that the discussion is centered around ‘how much has the globe warmed.’
    ——————————————————————————————————-
    Steve,
    This seems like a reasonable request. What is your best estimate of how the global temperature has changed over the last 100 years? Or do you think the temperature records are inadequate to say anything about global temperature trends?

  48. Matt says:
    July 28, 2010 at 7:39 am
    Er – you do realize that the DMI numbers are also based on interpolated model data, right? They’re not raw numbers from a single measurement – that’s why it’s called a reanalysis. Read this to understand where the ERA40 values come from – http://www.mad.zmaw.de/uploads/media/e40Overview.pdf.

    REPLY: Oh yes we know that. But the reanalysis data doesn’t seem to have the same sort of problems that GISS has. How can two different techniques show significantly different results? That’s the point. – Anthony

    The problem, Anthony, is that Steve has had to be dragged kicking and screaming to finally admit that DMI is an interpolated measure just like GISS is. He has always hammered GISS for interpolating but never said a word about DMI, which he lets readers believe is actual data. The two techniques show differences because, I think anyway, GISS attempts to eliminate the influence of sea ice, which as you know causes a local cold layer below an inversion. DMI, if they use buoy measurements, are in fact emphasizing the surface temperature. As long as there is surface ice, the temperature will be near 0ºC just above the surface in the summer, regardless of the air temperature higher up. As such it’s not very meaningful: as long as you have ice in your drink it stays cold; once the ice is gone, it warms up fast.

    DMI is not without problems as Julienne has pointed out:

    Julienne Stroeve says:
    July 27, 2010 at 6:17 pm
    Günther,
    I haven’t looked into detail what DMI is doing, but I do know that there is a bias in the ERA-40 time-series of air temperature as shown by responses to the Graverson paper published in Nature a couple of years ago.

    In addition Grant et al. (2008) state:
    However, the ERA-40 reanalysis may not be suitable for trend analysis as it incorporates information from different observing systems such as satellite and radiosonde, which might be inconsistent, in particular with respect to trends. Radiosonde measurements provide vertically resolved temperature profiles in the troposphere, whereas satellites provide information on a weighted average over a thick layer. Furthermore, the ERA-40 assimilation system extrapolates information from data-rich to data-sparse areas, which is less reliable than observations. The ERA-40 reanalysis in the polar region has not been sufficiently validated by in situ observations and documented problems with satellite radiance assimilations over the Arctic Ocean could lead to spurious trends.

    Further reading of their paper shows that trends poleward of 75N suffer from unrealistic values compared to observations.

  49. It is trivial to prove that the GISS data is wrong and the DMI data is correct. If temperatures at the North Pole really were 4C above normal in June, the ice would have been melting like crazy. Webcams showed that it wasn’t.

  50. A suggestion – regarding a data-logging swarm:

    Today, electronic weather measurement equipment comes at a bargain price from any homebuilder store or electronics store on the Internet, as does electronic, automated data transmission via mobile phone.

    What about developing a kind of cheap, standardized, solar-/wind-powered weather measurement box, then having it built by hundreds of interested laymen / amateurs in a kind of “Active Donorship”, then having those boxes shipped to one point for validation / certification, and finally having these boxes spread at 250 km intervals all over the globe, in uninhabited as well as inhabited areas, to build a kind of independent reference grid for any future weather measurements?

    Make the data collected by that grid traceable on the internet and accessible in computable forms for scientists’ as well as laymen’s use, and you have a winner.

    This independent weather grid could become for climatology what SETI is for astronomy: at least a thorn in the side – but if it really works, a complete game changer.

  51. DL says:
    July 28, 2010 at 7:13 am
    “Steve,
    The link to GISS say’s June 2012 instead of 2010.
    Unless they are making these maps 2 years ahead of time.”

    If you’re making it up, does it matter which year it’s for?

  52. Why is it so hard for people to say they simply don’t know? We can theorize and postulate until we’re worm food, but until we put some reliable thermometers in the remote places of this earth (where GISS insists most of the warming is occurring), we simply won’t know. I don’t know if it’s getting warmer in the Arctic, just like the rest of the population of this world doesn’t know. Neither do we know about South America, Africa, and central Asia. We don’t know because no one has bothered to make an effort to know. That might impact the policy decisions being made. Instead, many, it seems, would prefer we make policy decisions on virtual reality and extrapolated numbers. Is it that many believe it’s just too difficult to be able to make informed decisions? How is it so easy for people to disregard Hansen’s obvious bias and unquestioningly believe his assertions that have obviously yet to be proven?

  53. What’s up with all the nervousness about the use of the word, “model”? Apparently, that word is commonly used for any averaging or smoothing technique. Sometimes common usage of English is imperfect but still useful for communication.

    If DMI uses buoys in the Arctic to measure temperature while GISS only uses land based stations, the DMI data (and their “modeling” of Arctic temps) should be superior, since the Arctic is mostly ocean.

    You can’t refute this by nit-picking someone’s use of the word, “model.”

  54. It is interesting to note the difference in the contouring between the +/- 80 degree north area and the southern latitudes. Nice and rounded where there is at least some data, and rectangles where there is obviously no data. If one is contouring geochemical data from a soil survey, it is the squares and rectangles that are trimmed off, since anyone with half a brain knows they are pure fabrications of the method being used in the contour generation. That is, at least in my opinion as a geoscientist, the way it works.

    PS
    There isn’t a geologist left where I work (even the young ones) who believes in anthropogenic warming …… so yes there is a consensus(sp?), just not what the AGW folks believe.

  55. Frederick Michael,

    The ongoing problem with models is that a model is a tool, nothing more. It can be a good tool or bad tool. But the alarmist crowd very often treats models as evidence.

    Models are not evidence, any more than a baseball bat is evidence of a home run.

  56. I often look at the DMI Arctic temperature site and have wondered why it differs from the GISS estimates. I had always thought the differences were due to different algorithms and different baselines. Given the large difference in June, it would be nice to see an explanation of the differences between DMI and GISS.

  57. Philip Finck

    I used to be a geologist for many years, and agree with your assessment. I don’t know any geologists who still take CAGW seriously.

    I was fooled for decades, until I started looking at the data for myself.

  58. Steve: It is trivial to prove that the GISS data is wrong and the DMI data is correct. If temperatures at the North Pole really were 4C above normal in June, the ice would have been melting like crazy. Webcams showed that it wasn’t.

    So is that a ‘yes’ you favor using modeled data in globally gridded anomaly products?

    Why are you being so evasive in answering this question?

  59. AWL,

    I have no idea how the globe has warmed over the last 100 years. The more I probe, the less I trust the published data.

    On the one hand, GISS believes that they can model the whole world based on a few points. Schmidt said “a dozen,” I believe.

    On the other hand, they want us to believe that the 1930s warmth was limited to the US and Greenland.

  60. The common theme being presented here is that the high arctic is much colder than the GISS data would lead you to believe. The statement given in a prior thread that the area is “frozen solid” and, by implication in the graph above, that little or no melting occurs below 273.15 K, should be met with some skepticism. Watch the movie in the link below, from the North Pole PAWS buoy, and judge for yourself.

    http://www.arctic.noaa.gov/np2010/cam2-2010.mov

    The Northern Cryosphere continues to lose ice at an alarming rate and will likely reach a minimum this year that is well below the 1979-2000 average. The vagaries of the different global warming temperature analyses, although all are in very close agreement, will not alter this eventual outcome.

  61. Just take temperature measurements where they are. Don’t adjust them; just look at how they change over 160 years. Make no pretence that you can construct a global average temperature. What you measure is what is happening. Why it’s happening and what the consequences will be is a guessing game. I’ll admit that CO2, the sun, aerosols, Earth’s magnetic field, urban heat islands, CH4, etc., have an influence on climate, but whether it will prove dangerous is a guess. I’m now totally fed up with pundits pontificating on what will happen… they are like snake oil salesmen. And when it comes to using these pontifications to determine global energy policy, I despair.

  62. H.R. says:
    July 28, 2010 at 9:32 am
    DL says:
    July 28, 2010 at 7:13 am
    “Steve,
    The link to GISS says June 2012 instead of 2010.
    Unless they are making these maps 2 years ahead of time.”

    If you’re making it up, does it matter which year it’s for?
    ===============

    How lucky we are that AGW “scientists” practice pure science.
    They did not make it up ;-)

    http://whatreallyhappened.com/WRHARTICLES/globalwarming2.html

  63. Steve:

    You keep saying that DMI has buoys north of 80º. Could you point me to where you’re seeing this fact? I haven’t seen it indicated anywhere.

  64. Frederick Michael says:
    July 28, 2010 at 9:37 am
    What’s up with all the nervousness about the use of the word, “model”? Apparently, that word is commonly used for any averaging or smoothing technique. Sometimes common usage of English is imperfect but still useful for communication.

    If DMI uses buoys in the Arctic to measure temperature while GISS only uses land based stations, the DMI data (and their “modeling” of Arctic temps) should be superior, since the Arctic is mostly ocean.

    You can’t refute this by nit-picking someone’s use of the word, “model.”
    ————Reply:
    My intent here is not to be insulting, but I’m certain you know nothing about “modeling.” As a geologist, I’ve used modeling on everything from geochem samples to grade-control blastholes to deposit reserves (and I did this for more than 20 years). The latter results I fed into stochastic valuation programs (another form of modeling) that measure parameter sensitivities, among other things. And I could make these models say practically anything I wanted (even if the data were factual), especially if I had an agenda.

    So models are grand and models are tools but models are not necessarily a smaller or virtual version of reality. Quite often models are someone’s grandiose projection and are worse than erroneous data because the methodology easily allows hidden deception.

  65. “The question is ‘why is the difference so significant?’ GISS shows the June temperature anomaly as +4C higher,…”

    I don’t know; maybe it’s because DMI is estimating the temperature of the sea and GISS is estimating the temperature of the air.

    James Sexton says:
    July 28, 2010 at 9:12 am

    “What makes you think they use 1200 km smoothed data when they calculate the temperature?”——–Answer: because they said they do.

    You linked back to a graph of 1200km smoothing. Goddard makes the argument that the global record, if a new record is set, will be the result of said smoothing and interpolation over 80 north, (and I suppose 80 south as well). What I’m asking for is some indication that the calculations used to arrive at the global mean are using data from the 1200km smoothing rather than the original data used to produce that smoothing. You haven’t provided that any more than Goddard has given us a calculation showing what the result would be without the model-generated data.

  66. I mentioned this on another posting, but I’ll mention it again. DMI is based on a numerical model that assimilates whatever real observations are available into its forecasting model, and it will include rawinsonde and satellite observations when possible. It is not the same thing as the interpolated surface temperature observations from surface stations in GISS. The forecasting model is the ERA-40 (you can find this out by going to DMI’s website).

    From Peter Thorn at the Met Office:
    Reanalyses are numerical weather-prediction systems run in hindcast mode considering all globally available observations. Strenuous efforts are made to take account of both time-varying biases in the data and the impacts of the very substantially changing mix and coverage of observations. However, many aspects of the long-term behaviour of reanalyses remain unreliable and their suitability for use in monitoring atmospheric temperature trends has been questioned by a recent expert panel.

    In addition, studies have shown that:
    (1) data from satellites and weather balloons indicate that the ERA-40 trends
    are increasingly unrealistic polewards of 62 N;
    (2) that the other reanalyses datasets exhibit very different polar trends; and
    (3) that the vertical profile of polar trends in ERA-40 is unrealistic, particularly
    above the troposphere.

  67. EFS_Junior says:
    July 28, 2010 at 8:47 am
    One metric (DMI) uses air temperature over the ice, while the other metric (GISS) uses SST under the ice.

    Reading through all the comments, DMI uses buoys in the Arctic Ocean (over the ice) and GISS uses land-based stations stretching out 1200 km into the ocean.

  68. stevengoddard says:
    July 28, 2010 at 9:31 am

    “It is trivial to prove that the GISS data is wrong and the DMI data is correct. If temperatures at the North Pole really were 4C above normal in June, the ice would have been melting like crazy. ”

    Interesting: the average temperature for the North Pole is around 0 C in July. Recently, it is +11 (well, that’s in the camera, which could be above ambient air, but then, it is cloudy), and I do see meltwater in the view.

  69. My, don’t we have a lot of trolls in today. Must be the hockey stick thread over at RC that’s got them all lashing out.

    DMI says it’s cold; GISS says it’s four degrees warmer than it “should” be. Is it, is it really?

    Of course not or the arctic would be slush.

  70. I forgot to add that poleward of 80N there is very little in situ data available to constrain the model. You don’t have many rawinsondes that far north, and satellite angles are off-nadir. There have always been problems with the surface data in reanalysis because of the very limited surface data available, so if you’re going to look at temperature trends from reanalysis datasets, it is best to look at air temperature at 925 mbar, since studies have shown those values to be more accurate than the surface level.

  71. Steve: You use the term interpolate for GISS Arctic data. Yet it appears that GISS is extrapolating. Which is it, in your opinion?
    Can you explain your use of terms in light of the New Oxford Dictionary definitions below:

    “interpolate |inˈtərpəˌlāt|
    verb [ trans. ]
    insert (something) between fixed points : illustrations were interpolated in the text. See note at insert .
    • insert (words) in a book or other text, esp. in order to give a false impression as to its date.
    • make such insertions in (a book or text).
    • interject (a remark) in a conversation : [with direct speech ] “I dare say,” interpolated her employer.
    Mathematics insert (an intermediate value or term) into a series by estimating or calculating it from surrounding known values.”

    “extrapolate |ikˈstrapəˌlāt|
    verb [ trans. ]
    extend the application of (a method or conclusion, esp. one based on statistics) to an unknown situation by assuming that existing trends will continue or similar methods will be applicable : the results cannot be extrapolated to other patient groups. | [ intrans. ] it is always dangerous to extrapolate from a sample.
    • estimate or conclude (something) in this way : attempts to extrapolate likely human cancers from laboratory studies.
    • Mathematics extend (a graph, curve, or range of values) by inferring unknown values from trends in the known data : [as adj. ] ( extrapolated) a set of extrapolated values.”

    Thanks.
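    The mathematical distinction can be shown numerically. A minimal sketch (the station latitudes and anomaly values below are invented for illustration): interpolation estimates a value between known points, while extrapolation extends a fitted trend beyond the last one.

```python
import numpy as np

# Invented station anomalies (deg C) at latitudes 60N-78N.
lats = np.array([60.0, 65.0, 70.0, 75.0, 78.0])
anoms = np.array([0.8, 1.1, 1.6, 2.1, 2.4])

# Interpolation: an estimate BETWEEN fixed points, e.g. at 72N.
interp_72 = np.interp(72.0, lats, anoms)

# Extrapolation: extending the fitted linear trend BEYOND the data, to 85N.
slope, intercept = np.polyfit(lats, anoms, 1)
extrap_85 = slope * 85.0 + intercept

print(interp_72)   # bounded by the neighboring 70N and 75N values
print(extrap_85)   # valid only if the 60-78N trend really continues poleward
```

    The interpolated value is pinned between its neighbors; the extrapolated one (about 3 C here) rests entirely on the assumption that the sub-80N trend continues to the pole, which is exactly the assumption being disputed in this thread.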

  72. Any use of reanalysis products, or of products such as GISS, needs to take into account the limitations of each data set. And given the differences in the availability of data sources over the ERA-40 record, it would be best to use the data from 1979 onwards, when the satellite data record was assimilated into the ERA-40 model, so that at least there is some consistency in processing. Biases will be introduced by inconsistent availability of in situ data throughout the data record.

  73. Comparing the two globes, you can see a nice blockish structure in the 250 km version.

    One would expect to see even more obvious blocks in the 1200 km version, but you don’t. There’s some kind of smoothing that creates gentle curves, which also serves to mask the sparseness of the data, imo.
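    The masking effect is easy to reproduce. A toy sketch (a made-up 1-D field; this is not GISS’s actual smoother): with a narrow averaging radius a data gap remains visibly empty, while a wide radius spreads neighboring values across it so every cell gets a number.

```python
import numpy as np

# Invented 1-D field with a run of missing values in the middle.
field = np.array([1.0, 1.2, 0.9, np.nan, np.nan, np.nan, np.nan, 1.1, 1.3, 1.0])

def smooth(field, radius):
    """Moving average that ignores NaNs; returns NaN only if the window is empty."""
    out = []
    for i in range(len(field)):
        window = field[max(0, i - radius): i + radius + 1]
        vals = window[~np.isnan(window)]
        out.append(vals.mean() if vals.size else np.nan)
    return np.array(out)

narrow = smooth(field, 1)   # 250-km-style: part of the gap stays visibly empty
wide = smooth(field, 4)     # 1200-km-style: every cell receives a value
print(np.isnan(narrow).sum(), np.isnan(wide).sum())
```

    The wide-radius output has no gaps at all, even though no new information was added; the smoothness itself hides where the data ran out.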

  74. Julienne,

    If temperatures at the North Pole had been between 3-5C during June, as GISS seems to be implying, what would that have done to the ice there?

  75. GeoFlynx says:
    July 28, 2010 at 10:15 am
    The common theme being presented here is that the high arctic is much colder than the GISS data would lead you to believe. The statement given in a prior thread that the area is “frozen solid” and, by implication in the graph above, that little or no melting occurs below 273.15 K, should be met with some skepticism. Watch the movie in the link below, from the North Pole PAWS buoy, and judge for yourself.

    http://www.arctic.noaa.gov/np2010/cam2-2010.mov

    The Northern Cryosphere continues to lose ice at an alarming rate and will likely reach a minimum this year that is well below the 1979-2000 average. The vagaries of the different global warming temperature analyses, although all are in very close agreement, will not alter this eventual outcome.
    ——-
    Sorry, but looking at this movie I see nothing special about the melting.

  76. How right Pamela Gray is. Hansen is a slave and an addict in one. Climate-Exaggeration-Operator would be a proper description of his daily routine, and this is exactly the way he feels: like a CEO nurturing his uncritical ego. Basically he is a civil servant, with taxpayers’ money enabling him to gather climate data and to publish it. He should be accountable to the taxpayers and show gratitude that he is allowed such a position in our society. It was all given to him, and what did he do in return? Fool the public? Is the wish of superiors an excuse? Or is he trembling because Mr. BO already fired some of his colleague civil servants who were straightforward and stood for the truth?

  77. stevengoddard says:
    July 28, 2010 at 12:07 pm

    Julienne,

    If temperatures at the North Pole had been between 3-5C during June, as GISS seems to be implying, what would that have done to the ice there?

    Steve, the NCEP reanalysis shows 925 mbar monthly mean temperatures for June at the North Pole around 1C, with anomalies of 3-5C. It makes sense that temperatures in June were anomalously warm given the SLP pattern in June. I haven’t looked into detail at the buoy data in that region, but I can talk with Don Perovich when I see him and see if he has any new information about melt rates near the pole from his mass balance buoys.

  78. Michael Schaefer’s suggestion is an excellent one; I’ve had the same idea too.

    It would be an excellent and natural extension of the Surface Stations project to start putting in its own temperature stations.

    What say you all? What say you, Anthony Watts?

    Also, this one is for Steve and Anthony: what is an appropriate “grid resolution” for temperature stations? 10 km? 20 km? 25 km? 50 km? 100 km? 200 km? 250 km? R km? What radius R is appropriate for ONE temperature station to obtain accurate results when “interpolating” temperature data? Why would 10 km be better than 250 km? Why not 1 km if we want real accuracy?

    Wouldn’t geography matter? Near a river would be cooler? Up a mountain? In a valley?

    In visiting the island of Hawaii, I learned that it has something like 21 of 22 climate zones, everything except Arctic. You can even see them as you drive around: one side of the road is desert-like while the other side is lush, jungle-like vegetation. Wouldn’t all these climate zones need their own (at least one) temperature stations?

    Would all the climate zones across the planet need to be mapped out and temperature stations put there, at the same scale as needed for an island such as Hawaii? Would this actually be an irregular grid depending on these climate zones?

    What are these climate zones anyhow? An article about them and how they can be quite small and next to each other with varying temperatures would be interesting and we might learn something about how to measure temps.

    By the way, Hawaii’s best climate zone is molten rock! ]:)]

    So for a Surface Stations project you’d pick the GPS coordinates where a station needs to be put near and someone would obtain the equipment Node for that location and put it there. It’s kind of like that game GeoCaching. In this case it’s a Temperature Station being cached at the location. Someone or someones will need to attend the location from time to time to pick up the data and / or replace the equipment or perform maintenance on it or make sure it’s still there and hasn’t been poached. So the first step is to draw up a GPS grid tartan map for where stations are to be located near and scout them out with Google Earth for their suitability and legal access and make GPS adjustments as needed. Then obtain the temperature station equipment in bulk and get the volunteers start getting them. Of course it’s best to bill the local governments for the equipment, after all we’re doing their job!

    What say you all?

    Oh, another question, what about all the TV and radio stations that broadcast weather reports? Can’t those be aggregated and used for temperature data? Would that fill in some of the global grid?

  79. Turboblocke

    So you believe that you can accurately gauge the earth’s temperature within 0.01 degrees using 1200 km extrapolations?

    That idea is scientifically farcical beyond comprehension.

  80. In response to:
    “GeoFlynx says:
    July 28, 2010 at 10:15 am”

    That was a neat movie. However it ended before the recent re-freezing spell.

    One thing I wish I could see more clearly is the piling up of that mountain-range pressure-ridge in the far distance. The thing to remember is that for every foot those things rise, a root grows nine feet downwards. They were something subs had to avoid (though I suppose Tom Clancy would have a sub hide behind one).

    The lead between the near buoy and the far buoy is also interesting, for it appeared quite early, but never grew very wide. Some grow wide enough for subs to surface in open water right at the pole, but this one didn’t. The far buoy drifts over to one side of the near buoy, and then back to the other side, which shows the lead didn’t act like the San Andreas fault and represent a grinding fracture between two major chunks of ice.

    The “internal temperature” of the camera is likely like the air temperature inside your car on a summer day; it tends to be higher, especially in the bright sunshine.

    There must be an external thermometer among all the gizmos they had set up. I wonder why the heck GISS doesn’t use it.

    I think DMI does use data from buoys. When you are looking at a model, you need to think whether actual data is being put in. I respect models that have actual data, without “adjustments.” Otherwise it is garbage you are putting in, and you know what you can expect to come out.

    In the end, the value of a model boils down to whether or not it verifies.

  81. When I hear the statement “AGW: it’s basic physics!”, this is what I think of. I don’t think physicists would be very happy with this sort of interpolated data. It is not a good foundation for a theory of how nature works. I find it depressing that this sort of data processing, not too dissimilar to what a novice GIS user might do without applying much thought, is the basis for statements like this.

    As an engineer who uses GIS extensively, I run into this all the time. We have a more or less sparse data set (pollution samples, bathymetry, rainfall, whatever) and someone needs a map or a chart or a picture to convey the information. Or they need to do calculations.

    Fair enough, but then the question always comes back, “Is this accurate?”

    Accurate enough for you, perhaps. After all, where there is no data, there is really no way to know. Go out and check. If we interpolate, we invent data. Nothing wrong with that, and it’s quite useful in many instances, but you have to be straightforward in your explanation and make sure the users know what they are looking at!

    Estimated data values are simply that, and nothing more. And what are they worth? No way to know until you get direct observations.
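    To make the point concrete, here is a minimal sketch of one common GIS interpolator, inverse-distance weighting, with invented sample points (this is not any agency’s actual method). The routine returns a confident-looking number 1200 km from the nearest sample; nothing in the output says how little data stands behind it.

```python
import math

def idw(points, x, y, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from (px, py, value) samples."""
    num = den = 0.0
    for px, py, v in points:
        d = math.hypot(x - px, y - py)
        if d == 0:
            return v  # query sits exactly on a sample point
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

# Sparse invented samples: (x km, y km, anomaly in deg C).
samples = [(0, 0, 0.5), (100, 0, 1.0), (0, 100, 1.5)]

near = idw(samples, 10, 10)     # surrounded by data: a plausible estimate
far = idw(samples, 1200, 1200)  # ~1200+ km away: an invention, yet still a number
print(near, far)
```

    Far from the data, all the weights become nearly equal and the result collapses toward a simple average of the samples, regardless of what is actually happening out there.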

  82. Thanks for the link to the Hansen paper. I found these two passages in it that seem of interest:
    (4) The cool weather anomalies in the United States in Jun-Jul-Aug 2009 and in both the United States and northern Eurasia in the following Dec-Jan-Feb are close to the cool extreme of the range of seasonal temperatures that are now expected (Figure 17) given the warming of the past few decades. Although comparably cool conditions could occur again sometime during the next several years, the likelihood of such event is low in any given year and it will continue to decrease as global warming continues to increase.

    I thought this was interesting because it’s nothing more than a polemical talking point.

    (5) we suggest a new procedure for use of satellite SST data …We adjust the satellite data by a small constant such that the monthly temperature anomalies of satellite and in situ data are equal over their common area.

    I don’t get this. It makes sense if you are trying to keep your satellite data in sync with your surface data, but it doesn’t tell you anything about which one is more accurate. Given the questions about surface stations that have been raised, this could be no more than the blind data set leading the blind…

  83. stevengoddard says:
    July 28, 2010 at 12:53 pm
    Jim G,

    I don’t know how many data points DMI uses, but there are quite a few buoys up there.

    Here’s a map from the IABP that shows their currently deployed buoys:

    http://iabp.apl.washington.edu/maps_daily_tracknsidc.html

    I don’t think it’s comprehensive, as there are nonparticipating organizations that also have buoys deployed.

  84. Chris G says:
    July 28, 2010 at 10:51 am

    “I don’t know; maybe it’s because DMI is estimating the temperature of the sea and GISS is estimating the temperature of the air.”

    The difference, of course, is that DMI uses real thermometers in close proximity to deduce the temperatures, while GISS makes up numbers based on thermometers hundreds of miles away.

    James Sexton says:
    July 28, 2010 at 9:12 am

    “You linked back to a graph of 1200km smoothing. Goddard makes the argument that the global record, if a new record is set, will be the result of said smoothing and interpolation over 80 north, (and I suppose 80 south as well). What I’m asking for is some indication that the calculations used to arrive at the global mean are using data from the 1200km smoothing rather than the original data used to produce that smoothing. You haven’t provided that any more than Goddard has given us a calculation showing what the result would be without the model-generated data.”

    Sorry, misunderstood the question. While I don’t have any first-hand knowledge about whether they use the extrapolated data in determining the global mean or not, Jim Hansen’s statement in Stephen’s post (scroll up to close to the top) seems to indicate that they do indeed use the manufactured data.

    “the 12-month running mean global temperature in the GISS analysis has reached a new record in 2010…. GISS analysis yields 2005 as the warmest calendar year, while
    the HadCRUT analysis has 1998 as the warmest year. The main factor is our inclusion of estimated temperature change for the Arctic region.

    So, from the head of GISS: he seems to think the reason for GISS’s declaration of 2005 as the warmest year is that they included the “estimated” temps from the Arctic. So not only are they using the manufactured data, it apparently carries significant weight in determining the global mean.

    Hope that clears things up for you,

    James

  85. Part of the problem is that Dr. James “coal fired power plants are factories of death” Hansen and his tribe of data corruptors keep getting away with cooking the temperature books in the MSM. Fortunately, the MSM keeps losing audience, and web sites like WUWT continually monitor and analyze the bilge coming out of NASA GISS.

    The other part of the problem is the intoxication created by computer presentations, which convert boring climate data into mesmerizing patches of color. It’s like CAD drawings: they look impressive, but are they correct?

  86. CE

    “carrot eater says:
    July 28, 2010 at 8:51 am
    If you don’t like the Arctic interpolation in GISS, then use CRU. This is the reason for the slight difference between the two. CRU just leaves the Arctic blank, along with any other empty grid cells. Given that the stations that ring the Arctic indeed are warming faster than the rest of the world, that probably leaves CRU trending too low.”

    As is more often than not the case, Carrot gets this one right.

    The simple facts are these. You have a data source (GHCN) that represents temperature in a given way: a monthly MEAN that is the result of (tmax+tmin)/2, recorded at a standard time of day or ADJUSTED via TOBS to a standard time of day, midnight.

    For the Arctic region there is a dearth of stations that report data in this fashion.

    You have various choices.

    1. Don’t extrapolate or interpolate over this region (CRU).
    2. Smooth data to fill in the hole (GISS).
    3. Use alternative data sources.

    If you want to use DMI then you have some work ahead of you: floating, moving buoys (I’ve looked at that data) or reanalysis data based on readings at 0Z and 12Z. I’m not sure you get anything better than just the observation that
    CRU says X.
    GISS says X+a bit.

    Anyways, I’ll take a look at DMI
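    For readers unfamiliar with the GHCN convention described above, a minimal sketch with invented daily values: the monthly mean is the average of the daily (tmax + tmin) / 2 midpoints, not an average of continuous readings.

```python
# Invented daily tmax/tmin (deg C) for one station over a short "month".
tmax = [2.0, 3.5, 1.0, 4.0]
tmin = [-6.0, -4.5, -7.0, -3.0]

# GHCN-style monthly mean: average the daily (tmax + tmin) / 2 midpoints.
daily_means = [(hi + lo) / 2 for hi, lo in zip(tmax, tmin)]
monthly_mean = sum(daily_means) / len(daily_means)
print(monthly_mean)
```

    Sources like drifting buoys or reanalysis fields sampled at 0Z and 12Z do not produce this quantity directly, which is part of the work involved in merging them with GHCN-style records.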

  87. Julienne

    I check the North Pole webcams almost every day, and there were only a few days in June when there were any visible signs of melting.

    As I am sure you are aware living in Colorado, snow/ice melts pretty fast on a sunny day at 5C.

  88. @GeoFlynx:

    What’s with this movie? Very cool, but a picture isn’t always worth a thousand words. Am I to simply assume that those puddles of water are extraordinary? What evidence does the film present that this is unusual in any way? It wasn’t exactly open water, from what I could see.

  89. Obviously, both methods of measuring the temperature have their own limitations. It is true that temperatures measured above ice will stay low in the summer months. However, is that a good reason to throw out the numbers?

    Clearly, GISS thinks so. And by throwing them out they can create a larger warming trend. But one has to go back and ask the basic question … what are we trying to measure? I thought it was surface temperature and its changes over time. Given that that is the goal, using actual surface temperatures above the ice is the correct choice. Anything else is not measuring the surface temperature. Period.

    Now, there may be questions about the accuracy of DMI, but there is no question that GISS is using the wrong approach.

  90. Chris G says:
    July 28, 2010 at 11:18 am

    Interesting: the average temperature for the north pole is around 0 C in July. Recently, it is +11 (well, that’s in the camera, which could be above ambient air, but then, it is cloudy), and I do see meltwater in the view.

    If you looked more carefully (or more frequently) at the pictures from that webcam, you would see that the snow on the left-hand side of the pond slopes gently and merges with the frozen surface of the pond. Whereas, in this picture:

    The pond is not frozen and the snow around the edge has been affected by the ripples caused by the wind on the pond.

    If you now examine all the webcam pictures for June and July you will see that not only does it snow quite frequently but also that sometimes the pond is frozen or half frozen and sometimes it is unfrozen.

    This leads me to think that DMI has a more accurate theory of what’s going on up there. If a huge pancake of ice is melting, how can the temperature in the boundary layer above the ice be anything other than zero degrees C, or close to that? Try it with your next gin and tonic: lots of crushed ice in the glass; suspend a thermocouple close to the surface and measure away as the ice melts.
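    The gin-and-tonic point can be put in rough numbers. A minimal sketch (approximate textbook constants; the glass contents are invented): while unmelted ice remains, incoming heat is absorbed as latent heat of fusion, so the mixture stays pinned at 0 C.

```python
# Heat bookkeeping for the gin-and-tonic experiment (approximate textbook
# constants; masses and starting temperature are invented for illustration).
LATENT_FUSION = 334.0   # J/g, latent heat of fusion of ice
C_WATER = 4.18          # J/(g*K), specific heat of liquid water

ice_g = 60.0            # crushed ice in the glass
water_g = 200.0         # liquid water, initially at 20 C
water_t0 = 20.0

heat_from_water = water_g * C_WATER * water_t0   # released cooling 20 C -> 0 C
heat_to_melt_all = ice_g * LATENT_FUSION         # needed to melt every gram of ice

# The water cannot supply enough heat to melt all the ice, so the mixture
# equilibrates at 0 C with ice left over; a thermocouple near the surface
# reads ~0 C until the last of the ice is gone.
final_temp_c = 0.0 if heat_from_water < heat_to_melt_all else None
print(heat_from_water, heat_to_melt_all, final_temp_c)
```

    The same reasoning is why air in the boundary layer over a melting ice pack cannot sit far above freezing: the ice is an enormous heat sink at 0 C.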

    Furthermore, you will note that the webcam takes 3 or 4 pictures each time it switches on; take a look at the camera temperature. It rapidly increases because, funnily enough, the camera uses power. The temperature sometimes rises 3C in the 30 seconds between shots.

    It’s amazing how much a webcam can tell you isn’t it?

  91. The first map shows GISS June 2010 anomalies smoothed to 1200 km. The green line marks 80N latitude. Note that GISS shows essentially the entire region north of 80N up to four degrees above normal.

    This is a bit misleading. Only about a quarter of the region (the red bit) is between 2 and 4 deg. At least half is only between 0.5 and 1 deg. I doubt the anomaly for the region (based on the map) is much above 1.5 deg. The UAH NoPol anomaly for June is 0.81 deg, so while it’s not exactly in close agreement there’s nothing to suggest that GISS is “way out”.

  92. James,

    “…it apparently carries significant weight in determining the global mean. ”

    Apparent to whom? I’m still waiting for what the difference is between the GISS global data with and without the inclusion of areas between 80-90 north and south.

    BTW, You guys claiming it can’t be 4 C above normal there, please follow my link and note the 11 C temperature and the meltwater.

  93. Richard M

    “Now, there may be questions about the accuracy of DMI, but there is no question that GISS is using the wrong approach.”

    That is not entirely correct. The approach will either OVERESTIMATE the warming or underestimate the warming, since no estimate is perfect. Methods have bias and error. It’s an open question as to whether they overestimate the warming or underestimate the warming. Personally, I view GISS as an overestimate of the warming in the Arctic and CRU as an underestimate.

    That level of uncertainty is not an issue. At some point CRU and GISS will learn to put error bars on all their charts, and they will stop making silly claims about “hottest” without explaining the uncertainty in that claim.

  94. Steve, more interesting than the North Pole webcam will be the actual mass balance buoy data. From that you can see how much surface versus basal melt is occurring.

    Once melt starts, the temperatures remain near 0.

    Some interesting numbers that I just looked at, the difference between the maximum ice extent in winter and the most recent ice extent (i.e. July 27th):

    1979-2000: 6.79 million sq-km
    2007: 7.91 million sq-km
    2008: 7.41 million sq-km
    2009: 7.97 million sq-km
    2010: 8.09 million sq-km

    These numbers illustrate that 2010 is continuing the trend of large seasonal ice loss, which is a result of thermodynamics (e.g. surface, lateral and basal melting) and ice dynamics (e.g. compaction, deformation and ice export). Melt onset fields derived from passive microwave do reveal early melt onset this year, which hints at warmer than normal air temperatures.

  95. Reference says:
    July 28, 2010 at 12:16 pm

    Recent 2010 Atmospheric Data near the North Pole from the North Pole Environmental Observatory

    http://psc.apl.washington.edu/northpole/PAWS_atmos_recent.html

    Thanks for the link! I plotted the data and, surprise, surprise, it bears an uncanny resemblance to the DMI plot. OK, it may suffer from a ‘variety of errors’ but it’s probably a lot better than GISS’s guesses.

  96. pwl –

    I’m happy you support my idea.

    Yes, I think it should be possible to build an independent grid, or swarm, of stations collecting and transferring temperature and humidity data from otherwise inaccessible areas to a website.

    These stations must be affordable, rugged, and maintenance-free over a long time, and should be able to transmit data automatically via cellphone or the like, so as to avoid extensive care-and-maintenance visits to fetch the data.

    One could even try to fund this independent global surface climate grid the same way the AGW proponents keep funding their expensive projects: sell it to donors as a project to monitor “Climate Change”, which, in fact, it does…

    Beat them with their own bats, I say!

  97. Steve, BTW… I don’t have a problem with you showing the DMI temperatures; we do a similar thing with NCEP data, and these data are useful, but they have their limitations, and users of the data need to understand that and use the data in the way they are intended.
    The only problem I have is the comparison you make between DMI and GISS, and using that to argue that the GISS data are invalid. They are not the same thing, and I’m not even clear that the same atmospheric level is being used in this comparison. More importantly, you are comparing two completely different methodologies for filling in missing pixels (extrapolation/interpolation versus modeling), each method having its own biases and accuracy problems. While such intercomparisons can be useful in helping to see whether similar data sets reveal similar seasonal and interannual variability, they are not going to tell you which data set is the most accurate without also comparing with actual in situ data. I work regularly with reanalysis temperature data (not with GISS data), but I tend not to use the surface fields because of accuracy problems.

  98. Julienne,

    I agree that temperatures over the ice can never get much above 0C because of thermodynamic limitations. That is why I do not find the GISS data showing 3-5C at the North Pole to be credible.

  99. Re: my earlier post

    John Finn says:
    July 28, 2010 at 2:54 pm

    I checked the 1958-2002 zonal anomalies above 80N. They are

    81.00000000 0.9818134904
    83.00000000 1.277962089
    85.00000000 1.277962089
    87.00000000 1.277962089
    89.00000000 1.277962089

    So my 1.5 eyeball estimate was a bit high – but not bad.

    I also checked the 1979-1998 anomalies to check against UAH. They are

    81.00000000 0.8285245299
    83.00000000 1.092934608
    85.00000000 1.092934608
    87.00000000 1.092934608
    89.00000000 1.092934608

    UAH has an anomaly of 0.81 so I reckon GISS is close enough at 1 deg or thereabouts.
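
    Those zonal values can be collapsed into a single area-weighted mean for 80-90N. The sketch below is illustrative only: it assumes each value represents a 2-degree latitude band centred on the listed latitude, weighted by the cosine of its latitude.

```python
# Sketch: area-weighted mean anomaly for 80-90N from the zonal values above.
# Assumption: each value stands for a 2-degree band centred on the listed
# latitude, with weight proportional to cos(latitude).
import math

bands = {81: 0.9818134904, 83: 1.277962089, 85: 1.277962089,
         87: 1.277962089, 89: 1.277962089}

weights = {lat: math.cos(math.radians(lat)) for lat in bands}
mean = sum(bands[lat] * weights[lat] for lat in bands) / sum(weights.values())
print(round(mean, 2))  # about 1.17, consistent with "1 deg or thereabouts"
```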

  100. Chris G says:
    July 28, 2010 at 2:54 pm

    James,

    “…it apparently carries significant weight in determining the global mean. ”

    Apparent to whom? I’m still waiting to hear what the difference is between the GISS global data with and without the inclusion of the areas between 80-90 north and south.

    BTW, You guys claiming it can’t be 4 C above normal there, please follow my link and note the 11 C temperature and the meltwater.

    Chris, look at HadCrut and then GISS; Hansen said that was the difference. I mean, if you really want me to do the leg work for you and find the un-extrapolated data, compile it, average it, etc., I’d be more than happy to, but my contract services aren’t very cheap. Let me know if/when I need to get started and I’ll send a contract right away! But even without the extra money, look at the maps here: http://wattsupwiththat.com/2010/07/26/giss-swiss-cheese/#more-22599 . They show extrapolated vs. non-extrapolated data. Look at the extrapolated areas and guess how much of the globe they cover in percentage terms (I’m guessing 15%, just eyeballing). Then look at the colors that represent a value. Oh, look! The extrapolated areas are almost exclusively showing a warming anomaly. This necessarily raises the global anomaly quoted by GISS. How much exactly? I don’t really know.

    I don’t think anyone is claiming it can’t be +4 in the Arctic, I’m asserting, declaring, avowing that there is absolutely no freaking possible way GISS can know it is +4 with the thermometers they use in their data. It is a number they pulled out of their posterior and used that soiled number in their formula to determine the global mean. PNS at its best.

  101. Julienne Stroeve says:
    July 28, 2010 at 3:08 pm

    Some interesting numbers that I just looked at, the difference between the maximum ice extent in winter and the most recent ice extent (i.e. July 27th):

    1979-2000: 6.79 million sq-km
    2007: 7.91 million sq-km
    2008: 7.41 million sq-km
    2009: 7.97 million sq-km
    2010: 8.09 million sq-km

    Ever thought it might be because the winter extent is increasing more than the summer extent is falling?

  102. stevengoddard says:
    July 28, 2010 at 12:15 pm
    GeoFlynx

    Ice loss in July has actually been the lowest in the JAXA record.

    GeoFlynx – In the eight years of the JAXA set I would agree that this is the lowest loss in ice extent for July. We may have some differences of opinion, but it is nice to know we both appreciate the PAWS movie. Wow, the ITP buoy really takes off. Not a place I’d like to camp.

  103. Julienne

    The last few winters have been very cold with lots of snow and ice. Last winter had the second-greatest snow extent on record in the Northern Hemisphere. Ice extent reached its peak at the end of March.

    When you start at the top of a high hill, it is a longer drop to the valley below.

  104. Chris G says:
    July 28, 2010 at 2:54 pm

    BTW, You guys claiming it can’t be 4 C above normal there, please follow my link and note the 11 C temperature and the meltwater.

    Still digging, Chris G?

  105. RockyRoad says:
    July 28, 2010 at 10:34 am
    ————Reply:
    My intent here is not to be insulting, but I’m certain you know nothing about “modeling”.

    Not to be insulting, but your use of the words “certain” and “nothing” was surprising. Actually, I do modeling for a living — though it’s military parts inventory, not anything to do with climate.

    People seem to be missing my point. Let me try again.

    Suppose someone uses a smoothing algorithm to apply point measurements of temperature to large areas (e.g., 250 km or 1200 km smoothing). Suppose further that they use the word “model” to describe this method. You might argue that the word “model” is inappropriate for a simple algorithm; I would too. But their use of the word “model” (especially if from a nation where English is not the native tongue) does not fundamentally change anything.
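
    A minimal sketch of such a radius-limited weighting scheme (illustrative only: the station distances and anomalies below are made up, and GISS’s actual algorithm differs in detail) shows how the choice of radius changes what a gridpoint reports:

```python
# Sketch of radius-limited distance weighting, in the spirit of the
# 250 km / 1200 km smoothing discussed above. Hypothetical station data:
# (distance from the gridpoint in km, anomaly in deg C).
stations = [(150.0, 0.4), (600.0, 1.1), (1100.0, 2.0), (1500.0, 3.5)]

def smoothed_anomaly(stations, radius_km):
    """Weight each station by (1 - d/radius), ignoring stations beyond it."""
    pairs = [(1.0 - d / radius_km, a) for d, a in stations if d < radius_km]
    if not pairs:
        return None  # no data within the radius -> gridpoint left blank
    return sum(w * a for w, a in pairs) / sum(w for w, _ in pairs)

print(smoothed_anomaly(stations, 1200.0))  # blends the three nearest stations
print(smoothed_anomaly(stations, 250.0))   # only the nearest station remains
```

    With the 250 km radius the gridpoint simply echoes the one nearby station; with 1200 km it becomes a blend dominated by closer stations. Neither result is a measurement at the gridpoint itself, which is the point of the discussion above.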

  106. Steve…I agree that if GISS shows surface temperatures on the ice of 3-5C that is too high.

    As for the winter maximum (million sq-km):

    1979-2000 = 15.75
    2007 = 14.74
    2008 = 15.26
    2009 = 15.19
    2010 = 15.34

    It would be hard to argue that 2010 is not continuing a pattern of more seasonal ice loss seen in recent years. In addition, having more winter ice extent should influence summer ice loss somewhat in the sense that you still need to first remove that most southerly ice before you can start losing the interior ice (i.e. taking the southerly ice away leads to more lateral and basal melting, warmer near-surface air temperatures, changes in ice motion, etc.).

  107. Billy, here’s a comparison of July 27th ice extents (million sq-km):

    1979-2000 = 8.96
    2007 = 6.83
    2008 = 7.85
    2009 = 7.26
    2010 = 7.25
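
    The seasonal-loss figures quoted earlier follow directly from these two tables (winter maximum minus July 27th extent, both in million sq-km); a quick check:

```python
# Winter maximum and July 27th ice extent (million sq-km), as quoted in
# the comments above; the difference is the seasonal loss to date.
winter_max = {"1979-2000": 15.75, "2007": 14.74, "2008": 15.26,
              "2009": 15.19, "2010": 15.34}
july27 = {"1979-2000": 8.96, "2007": 6.83, "2008": 7.85,
          "2009": 7.26, "2010": 7.25}

loss = {yr: round(winter_max[yr] - july27[yr], 2) for yr in winter_max}
print(loss)  # 2010 -> 8.09 and 2007 -> 7.91 match the earlier comment;
             # 2009 comes out 7.93 vs. the 7.97 quoted, presumably a
             # rounding difference in the upstream numbers
```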

  108. Chris G says:
    July 28, 2010 at 2:54 pm

    BTW, You guys claiming it can’t be 4 C above normal there, please follow my link and note the 11 C temperature and the meltwater.
    ==================
    WOW, this is AGW science of its finest, please follow my idea and do the math if you can.
    I’m on the North Pole in Floater Suit ( I use it for ice fishing). Every pocket is filled with “Big oil money”. The temperature in my pocket is waaaaaay above 11 C ( to make it easy for you it is 25C). The question is how much ice the pockets full of big oil money can melt?

    Sorry Anthony for this post , I promise it won’t happen again

  109. Julienne Stroeve says:
    July 28, 2010 at 4:20 pm

    Thank you. A slightly different picture than you were trying to paint.

  110. @ John Finn;

    The LT should be amplified quite a bit compared to SAT during an El Nino. Historically that is the case, so why would it be any different in 2010 in the Arctic during this El Nino?

  111. Steve,

    On the page you link to, GISS also shows a graph of the zonal mean for June 2010, showing the average anomaly by latitude. That graph puts the Arctic June 2010 anomaly at around 1 to 1.25 degrees C.

    (Hopefully the links below work)

    In fact, Steve, GISS is in some ways consistent with DMI. People can use this tool ( http://data.giss.nasa.gov/gistemp/maps/ ) to generate annual, seasonal and monthly trends from GISS if they want. Here (roughly) are the temperature changes from 1951 to 2009 on a seasonal basis.

    Summer (JJA) 0.7°C
    Autumn (SON) 2.0°C
    Winter (DJF) 1.6°C
    Spring (MAM) 1.6°C

    Which gives an overall annual change from 1951-2009 of about 1.5°C.

    June itself has only seen an ~0.8°C temperature increase from 1951-2009. Stating that GISS’s June anomaly is running up to 4°C is a little misleading and is worth a correction to the actual figure. There is no point exaggerating the GISS exaggeration.

    The trends outside of summer in the arctic are probably the main concern for the warmists and worth challenging.
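
    The arithmetic above is easy to verify (taking the annual change as the plain average of the four seasonal changes, which is an approximation since the seasons have slightly different lengths):

```python
# Seasonal temperature changes 1951-2009 (deg C) as read off the GISS
# mapping tool above; the annual change is their simple average.
seasonal = {"JJA": 0.7, "SON": 2.0, "DJF": 1.6, "MAM": 1.6}
annual = sum(seasonal.values()) / len(seasonal)
print(annual)  # 1.475, i.e. about the 1.5 deg C quoted
```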

  112. stevengoddard says:
    July 28, 2010 at 9:31 am

    It is trivial to prove that the GISS data is wrong and the DMI data is correct. If temperatures at the North Pole really were 4C above normal in June, the ice would have been melting like crazy. Webcams showed that it wasn’t.

    Those in the real world know the answer. Those in the virtual/global warming world can’t come up with the right answer. Fine with me. That just shows every day folk what global warming is all about. Thank you global warming for showing us your cards.

  113. Julienne Stroeve says:
    July 28, 2010 at 12:48 pm

    It makes sense that temperatures in June were anomalously warm

    I thought it was about July?

  114. Billy, I’m not trying to paint any picture but to show the actual data. There is a lot of discussion on this site trying to argue that the summer ice cover in the Arctic is recovering, and the data do not support this (note that currently 2010 is the 2nd lowest ice extent at this time of year since routine monitoring began by multichannel passive microwave sensors in October 1978). Seasonal ice loss remains higher than it was 10-20-30 years ago.

  115. It seems that comparing Arctic ice from 1979 to ~1984 with how much ice is there now is not a fair comparison. Arctic ice from 1979-1983 was coming out of a PDO (-) and ice there now is coming out of a PDO (+). Of course there will be a difference in numbers.

  116. For a fairer comparison – a fairer baseline – 1979 to ~1983 should be taken out.

  117. DR says:
    July 28, 2010 at 5:01 pm
    @ John Finn;

    The LT should be amplified quite a bit compared to SAT during an El Nino. Historically that is the case, so why would it be any different in 2010 in the Arctic during this El Nino?

    This is true for the tropics but not in the Arctic. Plot the UAH tropics anomalies and the NoPol anomalies. There’s a clear ENSO signal in the tropics – nothing in the Arctic. In any case, my main point is that this whole thing has been overblown. The red region represents a temp range of 2-4 deg – NOT 4 deg. What’s more, the red region is barely 25% of the region above 80 degN.

    The GISS June anomaly for the region above 80N is 1 to 1.3 deg depending on which base period you use.

  118. stevengoddard says:
    July 29, 2010 at 5:08 am
    John Finn

    It is thermodynamically impossible to have summer anomalies of +2-4C in the region above 80N.

    Which of course is not true; you can’t make such sweeping categorical statements, Steve!

  119. stevengoddard says:
    July 29, 2010 at 11:01 pm
    Phil,

    Which of course is true, as Julienne and Gavin Schmidt have both pointed out.

    What has actually been pointed out? You said “It is thermodynamically impossible to have summer anomalies of +2-4C in the region above 80N”. I would agree with Phil on this, though perhaps not for the same reason. An anomaly is dependent on the baseline average. I’ve looked at one or two of the few 80+ deg N stations in the GISS database and it’s clear that 2+ deg summer (JJA) anomalies are possible and do happen, so it’s quite likely that 2+ deg anomalies happen for a single month, as happened in this case. I’d also like to remind you that it was not the entire region that had a 2+ anomaly – only about a quarter of it.

    The 1958-2002 GISS anomaly for the region above 80N is ~1.3 deg. Do we know what the DMI anomaly is, and do we know how the 1958-2002 baseline averages compare?

  120. stevengoddard says:
    July 29, 2010 at 11:01 pm
    Phil,

    Which of course is true, as Julienne and Gavin Schmidt have both pointed out.

    I doubt that very much, for example there’s no reason why you can’t have such an anomaly over open water north of 80ºN, or equally above the inversion layer. As I said such a generalization is unjustifiable.

  121. Steve, you have sucked them in. They’ve taken the bait hook, line, and sinker (I think the hook is set deep in their lower intestine). The AGW fools are arguing what “could” be (based on their models and extrapolations), without making a single attempt at collecting verifiable data. I especially like the fact that you’ve got one of the bigger fish on the line, Julienne. Not a troll, but a true believer. They desperately need the Arctic ice to melt! Too bad the PDO had to reverse, and we face a solar grand minimum. They are completely distracted from what’s transpiring in the SH. Again, too bad!

  122. Steve ~

    For what it’s worth, I fired off an email to COI, asking if your comparison – and conclusion (“DMI shows essentially the entire month of June below the 1958-2002 mean. GISS shows it far above the the 1958-2002 mean. Yet GISS has no data north of 80N.
    Conclusion : GISS Arctic interpolations are way off the mark.
    “) is valid.

    The response, from Gorm Dybkjær at COI/DMI was as follows:

    “Based on the ‘GISS’ and the ‘+80north’ data only – there is absolutely no justification for that conclusion! As already stated in the blog – the values are not considering the same geographical area. Moreover – as you can read in the attached text – the ‘+80north’ data are biased towards the pole-area temperatures, hence even further away from the area of the ‘GISS’ data. Finally, a cold anomaly in the +80N area is not unlikely to occur even though a warm anomaly is present further south. ”

    “The GISS data are observations and the ‘+80north’ values are modeled values. However – the modeled values are based on all available observations (ground-, radiosonde-, airplane- and satellite- measurements) – so in one way or the other, both values are based on observations. “

    The attached text he refers to was:

    “The ‘plus 80 North mean temperature’ graphs are based on model ‘analysis’ data from a range of global models from the European Centre for Medium-Range Weather Forecasts (ECMWF). The model ‘analysis’ is the best guess of the initial state of the atmosphere and surface (prior to the actual forecast runs), based on all available observations and the physical constraints of the model.

    The ‘plus 80 North mean temperature’ plots are the mean temperature for ALL model grid-points from the 00z and 12z analyses of the applied model. Since the model grid-points are distributed in a regular 0.5 degree grid, the mean temperature values are strongly biased towards the temperature in the most northern part of the Arctic! Therefore, do NOT use this measure as an actual physical mean temperature of the Arctic. The ‘plus 80 North mean temperature’ graphs can be used for comparing one year to another and for comparison to the climate line – in the same plot.

    The data from 1958-2002 are ERA40 reanalysis data, and the data from 2002 till now are obtained from the operational model at the given times! The climate curve is calculated from the full ERA40 period, 195709-200208. You can find information on the models on the ECMWF web pages (www.ecmwf.int).”
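
    Dybkjær’s warning about the pole bias is easy to quantify (a sketch, assuming the regular 0.5-degree grid he describes): on such a grid every latitude row has the same number of points, so an unweighted mean over-represents the innermost bands relative to their true area.

```python
# Compare the fraction of grid points vs. the fraction of actual area
# north of 88N, within the 80-90N cap, for a regular 0.5-degree grid.
# On a regular grid each latitude row holds the same number of points,
# so the point fraction is just the fraction of rows.
import math

rows = [80.25 + 0.5 * i for i in range(20)]   # row-centre latitudes, 80-90N
point_fraction = len([lat for lat in rows if lat > 88.0]) / len(rows)

# The area of a spherical cap north of a latitude is proportional
# to (1 - sin(latitude)).
def cap(lat):
    return 1.0 - math.sin(math.radians(lat))

area_fraction = cap(88.0) / cap(80.0)
print(point_fraction, area_fraction)  # ~20% of the points, but only ~4%
                                      # of the area, lies north of 88N
```

    In other words, an unweighted mean over this grid counts the region within two degrees of the pole roughly five times more heavily than its area warrants, which is exactly why DMI says not to treat the plot as a physical mean temperature of the Arctic.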

Comments are closed.