Is the NULL default infinite hot?
January 31, 2010 by E.M.Smith
see his website “Musings from the Chiefio”
What to make of THIS bizarre anomaly map?
What Have I Done?
I was exploring another example of The Bolivia Effect, where an empty area became quite "hot" when the data were missing (Panama, posting soon), and that led me to try a couple of changed baselines, which produced more 'interesting red' (1980 vs 1951-1980 baseline). I'm doing these examinations with a 250 km 'spread', as that tells me more about where the thermometers are located. The above graph, if done instead with a 1200 km spread or smoothing, has the white spread out to sea 1200 km, with smaller infinite red blobs in the middles of the oceans.
I thought it would be 'interesting' to step through parts of the baseline bit by bit to find out where it was "hot" and "cold". (Thinking of breaking it into decades… still to be tried…) Then I thought:
Well, you always need a baseline benchmark, even if you are ‘benchmarking the baseline’, so why not start with the “NULL” case of baseline equal to report period? It ought to be a simple all white land area with grey oceans for missing data.
Well, I was “A bit surprised” when I got a blood red ocean everywhere on the planet.
You can try it yourself at the NASA / GISS web site map making page.
In all fairness, the land does stay white (no anomaly against itself) and that’s a very good thing. But that Ocean!
ALL the ocean area with no data goes blood red and the scale shows it to be up to "9999" degrees C of anomaly.
“Houston, I think you have a problem”…
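To make that suspicion concrete before going any further, here is a minimal Python sketch, entirely my own toy and not anything from GISS (whose mapper code, as discussed below, isn't published): it just shows how a 9999 'missing data' sentinel turns into an enormous 'anomaly' the moment the subtraction isn't masked. The names MISSING and anomaly() are mine.

```python
import numpy as np

MISSING = 9999.0  # out-of-band "no data" sentinel, as used in the GHCN data files

def anomaly(report, baseline, missing=MISSING):
    """Report-period value minus baseline value, masking cells with no data.

    Toy illustration only: the point is that the subtraction must be
    skipped wherever either input holds the sentinel, otherwise the
    sentinel itself leaks through as an "anomaly".
    """
    report = np.asarray(report, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    mask = (report == missing) | (baseline == missing)
    return np.where(mask, np.nan, report - baseline)  # NaN = grey "no data"

# Two ocean cells with no data, one land cell with data.
cells = np.array([MISSING, MISSING, 12.3])

print(anomaly(cells, cells))
# [nan nan  0.]   <- what the NULL case *should* look like

# If the masking is forgotten and the sentinel is treated as a temperature,
# the "anomaly" of a no-data cell comes out in the thousands of degrees:
print(cells - np.array([0.0, 12.3, 12.3]))
# [9999.   9986.7     0. ]   <- a blood red ocean
```

Whether anything like that is happening inside the real mapper is exactly what can't be checked, for the reason below.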
Why Don't I Look In The Code?
Well, the code NASA GISS publishes, and says is what they run, is not the code that is actually running here.
Yes, they are not publishing the real code. In the real code running on the GISS web page to make these anomaly maps, you can change the baseline and you can change the "spread" of each cell (hence the web page that lets you make these "what if" anomaly maps). In the code they publish, the "reach" of that spread is hard coded at 1200 km and the baseline period is hard coded at 1951-1980.
So I simply can not do any debugging on this issue, because the code that produces these maps is not available.
But what I can say is pretty simple:
If a map with no areas of unusual warmth (by definition, with the baseline = report period) has this happen, something is wrong.
I’d further speculate that that something could easily be what causes The Bolivia Effect where areas that are lacking in current data get rosy red blobs. Just done on a spectacular scale.
Further, I'd speculate that this might go a long way toward explaining the perpetual bright red in the Arctic (where there are no thermometers, so no thermometer data). This "anomaly map" includes the HadCRUT SST anomaly map for ocean temperatures. The striking thing about this one is that those two bands of red at each pole sure look a lot like the 'persistent polar warming' we've been told to be so worried about. One can only wonder if there is some "bleed through" of these hypothetical warm spots when the 'null data' cells are averaged in with the 'real data' cells in making non-edge-case maps. But without the code, it can only be a wonder:
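And to make the 'bleed through' worry concrete as well, another toy sketch, again my own and not the GISS interpolation (the equal weighting here is invented): if a cell carrying the sentinel is ever averaged in with its real neighbours inside the 250 km or 1200 km reach, instead of being excluded, every cell within reach gets dragged hot.

```python
import numpy as np

MISSING = 9999.0

def spread_average(values, weights, missing=MISSING):
    """Weighted average over neighbouring cells, done two ways.

    'right' drops sentinel cells before averaging; 'wrong' treats the
    sentinel as if it were a real anomaly. Toy code only.
    """
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    ok = values != missing
    right = np.average(values[ok], weights=weights[ok])
    wrong = np.average(values, weights=weights)
    return right, wrong

# Three real anomalies (deg C) plus one "no data" cell inside the radius.
neighbours = [0.4, -0.2, 0.1, MISSING]
weights    = [1.0,  1.0, 1.0, 1.0]   # invented equal weighting

print(spread_average(neighbours, weights))
# right ~ 0.1, wrong ~ 2499.8  <- one unmasked sentinel swamps the whole cell
```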
The default 1200 km present date map for comparison:
I’m surprised nobody ever tried this particular ‘limit case’ before. Then again, experienced software developers know to test the ‘limit cases’ even if they do seem bizarre, since that’s where the most bugs live. And this sure looks like a bug to me.
A very hot bug…
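For the record, here is roughly what such a limit-case test could look like, written against the toy anomaly() sketch from earlier rather than the real mapper (which is the whole problem):

```python
import numpy as np

MISSING = 9999.0

def anomaly(report, baseline, missing=MISSING):
    # Same toy masking rule as in the earlier sketch.
    report, baseline = np.asarray(report, float), np.asarray(baseline, float)
    mask = (report == missing) | (baseline == missing)
    return np.where(mask, np.nan, report - baseline)

def test_null_baseline():
    """Limit case: baseline == report period.

    Every cell with data must come out exactly 0; every cell without data
    must stay "no data" -- and never 9999.
    """
    cells = np.array([12.3, MISSING, -4.1, MISSING])
    result = anomaly(cells, cells)
    assert np.all((result == 0.0) | np.isnan(result))
    assert not np.any(result == MISSING)

test_null_baseline()
print("NULL-case limit test passed")
```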
“Angelina Jolie and Brad Pitt were even said to be thinking about buying Ethiopia.”
Anomanly? Amanoly? Manomanly? Anonaly?
Nasamoly? IPccpnamoly?
“Dubai’s development has long been criticized by environmental activists, who say the construction of artificial islands hurts coral reefs and even shifts water currents. They point to growing water and power consumption.”
…-
“Is it the end of the world? Nasa picture suggests Dubai globe is sinking back into the sea
By Claire Bates
This is how the world looks like according to ambitious engineers in Dubai, but it is starting to look rather ragged around the edges.
The stunning image of the man-made archipelago was taken by an astronaut far above our Earth on the International Space Station.”
http://www.dailymail.co.uk/sciencetech/article-1247651/World-Islands-Is-end-world-Nasa-picture-suggests-Dubai-globe-sinking-sea.html
Over on chiefio.wordpress.com I’ve added two graphs to the posting. While it already has a November graph in it that has a 4 C to 9.9 C anomaly key (see up top) relative to the default baseline, I’ve added two more specific graphs.
For each of these, the baseline is 1998. Since just about everyone on the planet agrees it was hot (and the warmer camp puts it at record hot), it would be hard for us to be 'warmer than then' after 12 years of dropping from what they like to assert is a 'cherry picked hot year' for comparison. So, OK, in this case it IS a cherry pick. I'm going out of my way to do the cherry pick of the hottest year with comparable thermometers reporting. We have two graphs: the first is the 250 km 'smoothing', the second is the 1200 km smoothing.
http://chiefio.files.wordpress.com/2010/01/ghcn12-3_giss_250km_anom12_2009_2009_1998_1998.gif
http://chiefio.files.wordpress.com/2010/01/ghcn8-5_giss_1200km_anom12_2009_2009_1998_1998.gif
I know that those will hang around because I’ve saved them on my blog site.
The first one has a 4-12.3 C 'hot zone', while the second mutes that down to "only" a 4-8.5 C 'hot zone' while spreading it out by nearly another 1000 km.
So I think this points to the ‘bug’ getting into the ‘non-NULL’ maps. Unless, of course, folks want to explain how it is 10 C or so “hotter” in Alaska, Greenland, and even Iran this year: what ought to be record setting hot compared to 1998…
I'll leave it for others to dig up the actual Dec 1998 vs 2009 thermometer readings and check the details. I've got other things taking my time right now. So this is just a "Dig Here" from me at this point.
But it does look to me like “This bug has legs”…
E.M.Smith (15:56:43) :
“So unless we’ve got a 10 C + heat wave compared to last year, well, I think it’s a bug …”
Wow, you’re fast to come to conclusions. Why don’t you download the numbers behind your graph, since the resolution of the color bar isn’t good enough to tell what the red anomalies are.
Then, since you’re basically comparing Dec 09 to Dec 08, it’ll be extremely easy to check against the station data. Pick a few stations in northern Canada and Greenland, and see how much warmer they were in Dec 09 than Dec 08. Some will be quite a bit warmer, as that is what comes with the extreme AO event this winter: weirdly cold at mid latitude NH, and weirdly warm at high latitude NH.
I bet you that it checks out just fine.
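The check being proposed is easy enough to sketch; the snippet below assumes (from memory, so treat it as an assumption) the GHCN v2.mean layout of a 12-character station id, a 4-digit year, and twelve 5-character monthly values in tenths of a degree C with -9999 for missing, and the file name and station id are placeholders, not real selections:

```python
def read_v2_mean(path):
    """Parse a GHCN v2.mean-style file into {(station_id, year): 12 monthly temps}.

    Missing months come back as None instead of -9999.
    """
    records = {}
    with open(path) as f:
        for line in f:
            stn, year = line[:12], int(line[12:16])
            vals = []
            for i in range(12):
                raw = int(line[16 + 5 * i : 21 + 5 * i])
                vals.append(None if raw == -9999 else raw / 10.0)
            records[(stn, year)] = vals
    return records

def december_diff(records, stn, y1, y2):
    """December-to-December difference in deg C, or None if either is missing."""
    a, b = records[(stn, y1)][11], records[(stn, y2)][11]
    return None if a is None or b is None else b - a

# recs = read_v2_mean("v2.mean")                          # placeholder file name
# print(december_diff(recs, "123456789012", 2008, 2009))  # placeholder station id
```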
Chiefio,
Thanks for your ideas on how to contact GISS.
John
E.M.Smith (16:48:35) :
You keep digging yourself this hole, because you keep using Dec 2009 without realising that it was weirdly warm in Greenland over this time, just as it was weirdly cold further south.
Repeat your 1998-2009 comparison using June instead of December. Please. You’ll see blue where you have been seeing red.
There is no bug here. Just the Arctic Oscillation event of Dec2009/Jan2010.
So I think this points to the 'bug' getting into the 'non-NULL' maps. Unless, of course, folks want to explain how it is 10 C or so "hotter" in Alaska, Greenland, and even Iran this year: what ought to be record setting hot compared to 1998…
The maps that you are generating in the comments are basically a diff between two years. The first one was between 2008 and 2009. The second set was between 1998 and 2009.
http://data.giss.nasa.gov/work/gistemp/NMAPS/tmp_GHCN_GISS_250km_Anom12_2009_2009_2008_2008/GHCN_GISS_250km_Anom12_2009_2009_2008_2008.gif
http://chiefio.files.wordpress.com/2010/01/ghcn12-3_giss_250km_anom12_2009_2009_1998_1998.gif
In high latitudes, it may come down to a single station representing an entire grid. And when you look at the difference for that ONE station for ONE month between just two different years, you can get a lot of variability. I notice that you keep mentioning the red side of the scale, but have you noticed that the left side of the scale reaches even further down than the red scale reaches up? The extreme colds are colder than the extreme hots are warmer?
So if just one grid is 10.3 C warmer in Dec 2009 than in Dec 2008, that covers the high end. Likewise, at the low end of the scale, if just one grid is 17.4 C cooler, that pegs the low end. And it probably is just one grid: one grid with one station that averaged 10 C higher in Dec 2009 than in Dec 2008, and one other grid with one other station that averaged 17 C lower in Dec 2009 than in Dec 2008.
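A toy illustration of that point (every value below is invented except the two extremes quoted above): if the colour bar ends are taken straight from the most extreme cells, a single station-month at each end fixes the whole scale.

```python
import numpy as np

# Invented anomaly grid: one cell 10.3 C warmer, one cell 17.4 C cooler,
# everything else unremarkable.
grid = np.array([
    [ 0.3, -0.8,   1.1, -0.4],
    [ 0.6, 10.3,  -0.2,  0.9],   # one station, one month, one warm cell
    [-0.5,  0.1, -17.4,  0.7],   # one station, one month, one cold cell
])

print("colour bar range:", grid.min(), "to", grid.max())
# colour bar range: -17.4 to 10.3  <- both ends set by a single cell each
```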
How do your selected Dec 2009 maps compare with other maps? Well, here is NOAA's [Dec 2009 minus baseline (1961-1990)]. Notice the similar anomalies. Extreme highs in AK, Newfoundland, and Greenland. Extreme lows in Central Russia. It becomes even more similar when you set the NASA baseline to be the same as the NOAA baseline:
http://tinyurl.com/ydp7zuj
(I hope the tinyurl works; just use the NASA mapper and set the baseline to 1961-1990.)
I hate not being able to edit posts. 🙁
Here is the NOAA 2009 anomaly graph
http://www.ncdc.noaa.gov/sotc/get-file.php?report=global&file=map-land-sfc-mntp&year=2009&month=12&ext=gif
carrot eater (16:53:11) :
E.M.Smith (15:56:43) : "So unless we've got a 10 C + heat wave compared to last year, well, I think it's a bug …"
Wow, you’re fast to come to conclusions.
Um “I think it’s a bug” is not a conclusion. “It is definitely a bug” is a conclusion. “I think it’s a bug” is a direction of investigation.
Why don’t you download the numbers …
Then, since you’re basically comparing Dec 09 to Dec 08, it’ll be extremely easy to check against the station data.
Since it is so easy, I look forward to your report after you have done the work. Me? I have dinner to make, a sick cat to tend, Open Office to install on 2 platforms, comments on MY blog to moderate, …
So go right ahead and “dig here”. We’ll all be waiting for your answer.
I bet you that it checks out just fine.
So by your criteria that is “coming to a conclusion”. By mine it is a “statement of direction of investigation”. So which are you doing? Jumping to a conclusion or not? (Best to be symmetrical in how you apply the usage…)
Zeke Hausfather (15:52:25) : “However, the more pernicious accusation that 9999s are bleeding through into the actual temperature data would be much easier to catch in the python implementation unless you really enjoy trying to read through poorly documented Fortran.”
Reminds me of the night I lost a quarter at the corner of 1st & Vine.
.
.
(wait for it)
.
.
I looked for it at 1st & Main because the light was better there. 🙂
It might also be worth taking a look at the Met Office HadCRUT3 Dec 2009 anomalies.
http://hadobs.metoffice.com/hadcrut3/diagnostics/monthly/anomaly.png
kadaka (16:24:53) : *sigh*
Why does moderation involve a LIFO stack instead of FIFO?
Because all the topic's comments are intertwined in one thread and presented as a LIFO. Either you take time searching for the bottom, or just start 'approving' as fast as you can (usually less than a minute or two on my blog). Some things go to the SPAM queue and can take longer. Other times multiple moderators may be working different parts of the queue. Oh, and 'newby' moderators will approve the clear cases and leave the 'unsure' for experienced moderators, so it isn't even a LIFO or FIFO… And, personally, I'll go through and "approve" all the "under a dozen lines", sight-read comments super fast, then go back for the 'ten minutes for this one long one' comments as a slow slog. Often LIFO for the short ones, then FIFO for the long ones, having then found "the bottom" and working back the other way.
Hey, you asked 😉
Zeke Hausfather (15:19:40) :
I owe you a huge debt of gratitude. I’m very good with Python, but completely unversed in Fortran, Linux, etc. environments. If you ever make it out my way, your beer is on me.
Zeke Hausfather (15:19:40) :
Would you provide the data, besides GISTemp, that shows "high arctic temps"?
Steven Mosher:
The color red is a bug?
Um, no.
It does what it needs to do, namely flag missing regions of the ocean. Would you prefer green perhaps?
carrot eater (17:02:59) : You keep digging yourself this hole, because you keep using Dec 2009 without realising that it was weirdly warm in Greenland over this time, just as it was weirdly cold further south.
It isn’t me doing the digging. I can, if you prefer, use a randomly chosen Jan 09 and have the upper bound be 13 C if you like:
http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2009&month_last=12&sat=4&sst=0&type=anoms&mean_gen=02&year1=2009&year2=2009&base1=1998&base2=1998&radius=250&pol=reg
Or APRIL and get a 16.5 upper bound:
http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2009&month_last=12&sat=4&sst=0&type=anoms&mean_gen=04&year1=2009&year2=2009&base1=1998&base2=1998&radius=250&pol=reg
Would you like me to go through the whole year until I find the highest value? Or just accept that December is the default and not very different from the others either…
But you just keep on digging, you’re doing fine.
There is no bug here. Just the Arctic Oscillation event of Dec2009/Jan2010.
Oh, what a wonderful cliff of conclusion you have found to leap off of. I, however, am going to continue to reserve judgment until there is something knowable. For now it remains a 'direction of investigation', and anyone with coding / programming experience will tell you that when you have a clearly broken result at an edge case, one 'reasonable result' at a non-edge case does not prove 'no bug here'.
Some very, very quick observations:
After downloading the Python version of the code, it took me about 2 minutes to locate the error handling for values of “-9999”, which in my experience is more often used as a null value than “9999”. The code knows exactly what to do when it encounters a value of “-9999”. It throws it out. There is also error handling for values that are blank.
The parts of the code I've scanned thus far turn Fahrenheit into Celsius, so even if a value of +9999 degrees F slipped through, the actual value the code would spit out would be in the neighborhood of 5537 C. I'm pretty sure someone would notice that.
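Just to pin the arithmetic down (ordinary unit conversion, nothing GISTemp-specific):

```python
def f_to_c(f):
    """Standard Fahrenheit-to-Celsius conversion."""
    return (f - 32.0) * 5.0 / 9.0

# A +9999 F sentinel that slipped past the error handling and went through
# the unit conversion would land near 5537 C:
print(round(f_to_c(9999.0), 1))   # 5537.2
```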
Back into my secret lab….
amino:
How about RSS?
carrot eater (16:53:11) :
“Pick a few stations in northern Canada…”
I think you mean THE station in northern Canada.
Phil M, I used grep to search for 9999.
I was able to confirm it gets used in numerous places, at least in the CCC.
E.g.
step2.py:23:BAD = 9999
and I confirmed that “BAD” then gets used six times within step2.py
Similarly:
step5.py:279: NOVRLP=20, XBAD=9999):
and XBAD then gets used 17 times in step5.py
I found a few cases where they used -9999 instead, but mostly it was 9999.
Using out-of-band values to signify an error is pretty common in computer programming, by the way.
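The pattern being described looks roughly like this; a generic sketch, not a copy of the ccc-gistemp source, with BAD and valid chosen here just for illustration. The sentinel is safe exactly as long as every consumer filters on it, and dangerous the moment one consumer forgets.

```python
BAD = 9999  # out-of-band sentinel standing in for "no reading"

def valid(value):
    """A reading is usable only if it is not the sentinel."""
    return value != BAD

def monthly_mean(readings):
    """Average the usable readings; return BAD if there are none."""
    good = [r for r in readings if valid(r)]
    return sum(good) / len(good) if good else BAD

print(monthly_mean([12.1, BAD, 11.8, 12.4]))   # ~12.1 (sentinel dropped)
print(monthly_mean([BAD, BAD]))                # 9999 -> must itself be filtered downstream
```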
amino:
"Would you provide the data, besides GISTemp, that shows 'high arctic temps'?"
http://upload.wikimedia.org/wikipedia/commons/4/43/Atmospheric_Temperature_Trends%2C_1979-2005.jpg
Phil M,
…and without using Wikipedia.
You know why.
E.M.Smith (17:45:31) :
You're seriously doing it by the upper bound on the color bar? How about picking a specific location?
Did you try June 2009 vs 1998 yet?
rbroberg (17:05:54) :
Anomaly isn’t reliable when dealing with government data. Anomaly, trend, they don’t tell the story.
It’s far more weighty to look at temperatures.
Comparing actual GISS temperatures to other data sets tells the story.
Phil M (18:16:31) :
What does RSS show?
Phil M (18:16:31) :
What set is that?