GISScapades

Guest post by Willis Eschenbach

Inspired by this thread on the lack of data in the Arctic Ocean, I looked into how GISS creates data when there is no data.

GISS is the Goddard Institute for Space Studies, a part of NASA. The Director of GISS is Dr. James Hansen. Dr. Hansen is an impartial scientist who thinks people who don’t believe in his apocalyptic visions of the future should be put on trial for “high crimes against humanity”. GISS produces a surface temperature record called GISTEMP. Here is their record of the temperature anomaly for Dec-Jan-Feb 2010:

Figure 1. GISS temperature anomalies DJF 2010. Grey areas are where there is no temperature data.

Now, what’s wrong with this picture?

The oddity about the picture is that we are given temperature data where none exists. We have very little temperature data for the Arctic Ocean, for example. Yet the GISS map shows radical heating in the Arctic Ocean. How do they do that?

The procedure is one that is laid out in a 1987 paper by Hansen and Lebedeff. In that paper, they note that annual temperature changes are well correlated over large distances, out to 1200 kilometres (~750 miles).

(“Correlation” is a mathematical measure of the similarity of two datasets. Its value ranges from zero, meaning not similar at all, to plus or minus one, indicating totally similar. A negative value means they are similar, but when one goes up the other goes down.)
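A quick way to get a feel for this is to compute a correlation yourself. Here is a small sketch using Python and NumPy, with made-up numbers, showing a strongly correlated pair of series and what happens to the sign when one of them is flipped:

```python
import numpy as np

# Two made-up series that move together: correlation close to +1.
a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
b = np.array([2.1, 3.9, 6.2, 8.0, 10.1])  # roughly 2 * a, with small wiggles

r = np.corrcoef(a, b)[0, 1]
print("correlation of a and b:", round(r, 3))

# Flip one series and the correlation flips sign: the datasets are still
# "similar", but when one goes up the other goes down.
r_neg = np.corrcoef(a, -b)[0, 1]
print("correlation of a and -b:", round(r_neg, 3))
```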

Based on Hansen and Lebedeff’s finding of a good correlation (+0.5 or greater) out to 1200 km from a given temperature station, GISS show us the presumed temperature trends within 1200 km of the coastline stations and 1200 km of the island stations. Areas outside of this are shown in gray. This 1200 km radius allows them to show the “temperature trend” of the entire Arctic Ocean, as shown in Figure 1. This gets around the problem of the very poor coverage in the Arctic Ocean. Here is a small part of the problem: the coverage of the section of the Arctic Ocean north of 80° North:

Figure 2. Temperature stations around 80° north. Circles around the stations are 250 km (~ 150 miles) in diameter. Note that the circle at 80°N is about 1200 km in radius, the size out to which Hansen says we can extrapolate temperature trends.

Can we really assume that a single station could be representative of such a large area? Look at Fig. 1: despite the lack of data, trends are given for all of the Arctic Ocean. Here is a bigger view, showing the entire Arctic Ocean.

Figure 3. Temperature stations around the Arctic Ocean. Circles around the stations are 250 km (~ 150 miles) in diameter. Note that the area north of 80°N (yellow circle) is about three times the land area of the state of Alaska.
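For readers who want to see the mechanics, here is a minimal sketch of the infilling scheme described above, assuming a station’s weight falls linearly from 1 at the station to 0 at 1200 km (the taper used by Hansen and Lebedeff); the distances and anomalies below are invented purely for illustration:

```python
# Sketch of Hansen–Lebedeff style distance weighting: each station within
# 1200 km of a grid point contributes with a weight that falls linearly
# from 1 at zero distance to 0 at 1200 km.

RADIUS_KM = 1200.0

def weight(distance_km):
    """Linear taper: full weight at 0 km, zero weight at 1200 km and beyond."""
    return max(0.0, 1.0 - distance_km / RADIUS_KM)

def gridpoint_anomaly(stations):
    """stations: list of (distance_km, anomaly_degC) pairs for one grid point."""
    pairs = [(weight(d), a) for d, a in stations if d < RADIUS_KM]
    if not pairs:
        return None  # no station in range: a grey area on the map
    total_w = sum(w for w, _ in pairs)
    return sum(w * a for w, a in pairs) / total_w

# Hypothetical grid point: two stations in range, one too far away to count.
print(gridpoint_anomaly([(100, 1.2), (900, 0.4), (1500, -2.0)]))
```

Note that the station 1500 km away contributes nothing at all, while the whole estimate leans heavily on the nearest station.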

What Drs. Hansen and Lebedeff didn’t notice in 1987, and no one seems to have noticed since then, is that there is a big problem with their finding about the correlation of widely separated stations. This is shown by the following graph:

Figure 4. Five pseudo temperature records. Note the differences in the shapes of the records, and the differences in the trends of the records.

Curiously, these pseudo temperature records, despite their obvious differences, are all very similar in one way — correlation. The correlation between each pseudo temperature record and every other pseudo temperature record is above 90%.

Figure 5. Correlation between the pseudo temperature datasets shown in Fig. 4.

The inescapable conclusion from this is that high correlations between datasets do not mean that their trends are similar.
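The point is easy to reproduce. The sketch below (made-up numbers, not real station data) builds five 50-year pseudo temperature records that share a single year-to-year “weather” wiggle but carry different linear trends. Every pair correlates well above Hansen’s +0.5 threshold, yet the fitted trends span a 0.05°C/year range:

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(50)

# One shared year-to-year "weather" signal (a random walk)...
weather = np.cumsum(rng.normal(0.0, 1.0, size=50))

# ...plus five different linear trends, in deg C per year.
trends = [0.01, 0.02, 0.03, 0.04, 0.06]
records = [weather + t * years for t in trends]

# Every pair of records is well correlated, above Hansen's 0.5 cutoff.
min_r = min(np.corrcoef(records[i], records[j])[0, 1]
            for i in range(5) for j in range(i + 1, 5))
print("minimum pairwise correlation:", round(min_r, 2))

# Yet the fitted least-squares trends differ substantially.
fits = [np.polyfit(years, rec, 1)[0] for rec in records]
print("spread of fitted trends, deg C/yr:", round(max(fits) - min(fits), 3))
```

Because the weather wiggle is identical in every record, the correlations stay high no matter how far the trends diverge — which is exactly the problem.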

OK, I can hear you thinking, “Yeah, right, for some imaginary short 20-year pseudo temperature datasets you can find some wild data that will have different trends. But what about real 50-year-long temperature datasets like Hansen and Lebedeff used?”

Glad you asked … here are nineteen 50-year-long temperature datasets from Alaska. All of them have a correlation with Anchorage greater than 0.5 (max 0.94, min 0.51, avg 0.75). All are within about 500 miles of Anchorage. Figure 6 shows their trends:

Figure 6. Temperature trends of Alaskan stations. Photo is of Pioneer Park, Fairbanks.

As you can see, the trends range from about one degree in fifty years to nearly three degrees in fifty years. Despite this huge ~300% range in trends, all of them have a good correlation (greater than +0.5) with Anchorage. This clearly shows that a good correlation between temperature datasets means nothing about their corresponding trends.

Finally, as far as I know, this extrapolation procedure is unique to James Hansen and GISTEMP. It is not used by the other creators of global or regional datasets, such as CRU, NCDC, or USHCN. As Kevin Trenberth stated in the CRU emails regarding the discrepancy between GISTEMP and the other datasets (emphasis mine):

My understanding is that the biggest source of this discrepancy [between global temperature datasets] is the way the Arctic is analyzed. We know that the sea ice was at record low values, 22% lower than the previous low in 2005. Some sea temperatures and air temperatures were as much as 7C above normal. But most places there is no conventional data. In NASA [GISTEMP] they extrapolate and build in the high temperatures in the Arctic. In the other records they do not. They use only the data available and the rest is missing.

No data available? No problem, just build in some high temperatures …

Conclusion?

Hansen and Lebedeff were correct that the annual temperature datasets of widely separated temperature stations tend to be well correlated. However, they were incorrect in thinking that this applies to the trends of the well correlated temperature datasets. Their trends may not be similar at all. As a result, extrapolating trends out to 1200 km from a given temperature station is an invalid procedure which does not have any mathematical foundation.

[Update 1] Fred N. pointed out below that GISS shows a polar view of the same data. Note the claimed coverage of the entirety of the Arctic Ocean. Thanks.

[Update 2] JAE pointed out below that Figure 1 did not show trends, but anomalies. boballab pointed me to the map of the actual trends. My thanks to both. Here’s the relevant map:


218 thoughts on “GISScapades”

  1. Oh dear, I’m going to have to read up on what correlation really means. What I think it means does not correlate with the content in this post!

  2. I noticed that the anomaly of T-surf for DJF 2010 is compared to the average from 1951-1980. Isn’t that comparing Arctic temperatures this past winter to a time-period where Arctic temperatures were anomalously cold? Is this 30-year period an adequate era to define this as “average” Arctic temperature? I don’t think so; that would always skew anomalies to “look” positive.

  3. GISS records show a very different pattern on opposite sides of the Arctic. Most of the eastern Arctic was warmer 70 years ago than it is now, while the western Arctic has generally shown a warming trend – at least until the last two years.

    Hansen uses the ice age scare time frame as his base temperature, which allows him to paint his maps red and brown.

  4. Just goes to show that if you’re not careful (or don’t care about your integrity) you can probably find what you’re looking for. I think confirmation bias is the proper term for it, right?

    Nice work as always Darth Willis the Merciless!

    – From one of your loyal Henchpersons ; )

  5. A correlation coefficient as low as 0.5? That’s getting almost as low as some of the coefficients used in social studies. If we had actual data, the curves would likely be radically different.

  6. I got two words for CO2 is AGW…Atmospheric contraction

    http://news.bbc.co.uk/2/hi/7794834.stm the point of this first link is the supposition that the ionosphere results as the product of solar energy releases beating up the magnetosphere. When solar flux declines the influence of the magnetosphere grows.
    andy adkins (21:27:25) :

    (clarifying-emending- the thought experiment)

    Examining the climatic record in accordance with the Galactic travels of the earth paints the recent Multi-decadal Temperature changes of the Pacific Ocean as being very erratic and thus symptomatic of a Galactic cold signal.

    2. It is a travesty that modern surface temperature records have become political tools that devalue their weather forecasting utility. The curious can find plentiful evidence supporting the conclusion that the most recent warm PDO obscured the ongoing trend to cold. If so there is every reason to expect global temperatures to quickly exceed the cold variances recorded as the peak of the 1940s -70s cold PDO/ cold AMO because the ERBE cold trends will flip the Atlantic’s Multidecadal Oscillation much sooner than previously recorded events (those watching..KNOW that water temperatures in the Atlantic are definitely signaling a capacity for a quick turn).

    3. On ERBE: I interpret the temperature analysis work of Spencer and Christy to be an excellent marker of (a) the changes to Earth’s Radiative Budget (1) higher temperatures of the troposphere and stratosphere are primarily indicators of greater radiative forcing

    (b) the water vapor/ precipitation/ cloud cover potentials of the atmosphere summate the atmosphere’s capacity to cool the earth through the processes determining the earth’s radiative budget.

    (c) As a response to the increasing gravitational effect had by the sun as an entailment of it retaining more energy during low sunspot cycles (heavier chemical makeup of its core and energy conveyors), High Latitude Volcanic activity increases the density potential of the magnetosphere thereby increasing radiative forcing and the temperature of the stratosphere. To accomplish similar effect, mid and lower latitude volcanic eruptions must (a) be more numerous and frequent (b) be of proportionately greater magnitude

    Without going in to detail for those smart enough to have figured it out, understand why the Carrington Event is only a reflex action potential occurrence and why its in process modern sunspot maximum was a joke.

    andy adkins (12:37:13) :

    P oleward A ccumulating L ava E vents always begin after the winter solstice and follow longitudinal lapping directions toward the new latitudinal summer (oops)
    Dear Anu,
    When you write
    “Similarly, the periodic forcings of the tiny Sun variations in TSI have no longterm effect. Only the inexorable rise of CO2 in the atmosphere have a non-pulse, non-periodic effect on the planets temperature in the 100 to 500 year time frame of interest.” you are being patently ridiculous.

    The most important earth bound conditions affecting temperatures are the heights of atmospheric layers. The taller atmospheric layers are then the smaller the rate that radiation is released into space. Contrarily, the shorter that the atmospheric layers are then the higher the rate that radiation is released into space (actually suck the heat right off the earth: You should be scared. Lindzen and Choi proved it). This is as fundamental to accounting for why Lindzen and Choi obtained their results as higher latitude volcanic activity -induced by a Heavier Sun- is to increasing the density of the magnetosphere and thus the increasing radiative releases marked by the higher stratospheric temperatures recorded by Christy and Spencer. (If the upper levels of our atmosphere were not warming-increasing energy transfer to space, then the earth would be in a true period of global warming) (Insiders will be mad about me sharing this secret, but when the AGW community use atmospheric warming to justify their conclusions there is a big collective laugh)

    As a matter of the simple fact of physics, the warmer temperatures are on earth then the higher the concentration of particular atmospheric gases can be. These physical processes also explain why there is a feedback lag of 800 years between the onset of cold and a fall-off in CO2. While oceans are cooling they are still releasing water vapor and this slows the filtering of CO2. (the vikings were chased from Greenland ~ 700 years ago….During the 20th century we clogged our oceans with junk that trawlers can clean up and this detritus is interfering with the absorption of CO2 by the Oceans)

    It will surprise the CO2 is AGW community to know that during tall atmospheric conditions (true global warming) CO2 is much less likely to be found near the top…It is just too heavy.

    Niels Bohr….CO2 is not a black body…entropy will change the radiation and it will be released and directed in all directions by atmospheric currents and subjected to Earth’s Radiative Budget.

    Bad science will kill billions if truth continues to be suppressed.

  7. Just about on topic:
    Nenana Ice Classic
    The river usually freezes over during the months of Oct. and Nov. The ice continues to get thicker throughout the winter with the average thickness being 42″ on April 1. Depending on the temperature, snow cover, wind, etc., the ice may freeze slightly more and then start to melt. The ice melts on the top due to the weather and on the bottom due to the water flow.
    River: The tripod is planted two feet into the Tanana River ice between the highway bridge and the railroad bridge at Nenana, just up-river from the Nenana river tributary. It is 300 feet from the shore and connected to a clock that stops as the ice goes out.
    Prize: In 1917 railroad engineers bet $800 guessing when the river would break up. Last year, the winners shared the prize money of $303,895. Over $10 million has been paid during the past 92 years. Payoff will be made June 1st, 2009.

    This contest surely cannot be accused of fraud. So plotting the breakup time from 1st January you get:

    http://nenanaakiceclassic.com/

    Not much happens until 1965, when a steady decline begins.
    Interestingly, it shows the early 40s to be warm.

  8. The thing that struck me was the map- Fig 1. Look at Greenland. It looks like the -.5 European anomaly directly borders the +6 Arctic Ocean anomaly – a direct jump of 6 degrees. That’s the nonsense Hansen’s method results in.

  9. Speaking of P oleward A ccumulating L ava E vents that always begin after the winter solstice and follow longitudinal lapping directions toward the new latitudinal summer, are the earthquakes occurring along the North Atlantic Ridge lava burps http://earthquake.usgs.gov/earthquakes/recenteqsww/Maps/10/325_35.php ….Katla could be an uh oh for the atlantic circulation

    What is postulated is that a stronger magnetosphere amplifies the gravitational affects that the sun has on the mantle and earth’s core ….and that their stimulation increases their magnetic activity that feeds into a spiraling process of vulcanism and other plate tectonics only made possible by the dynamism of the sun that is made possible by its galactic position.

  10. Hmm, isn’t a common correlation coefficient R-squared? Obviously, this can’t go negative…just between zero and one. Are they using R or R-squared here? If R-squared, then the sentence about correlation going down to -1 needs to be changed.

    -Scott

  11. The tree ring circus was extrapolated from one tree? This then may also be close enough. I like the fact they can give us readings to 4 decimal points.

  12. I don’t think you have adequately dealt with inverse correlation. Your explanation sounded funky to me.

  13. Thank you Willis. One of the layers of GISS’s Global Warming layer cake (all covered over with lovely smoothed frosting so we don’t see the cracks and the bits they’ve glued together).

  14. Willis,

    Are those Alaska temps NASA GISS “value added” trends or are they raw data trends? If they are raw data trends, has Alaska really been on such a continuous warming trend?

  15. Three hundred, maybe four hundred, years from now Hansen will be hailed as the Michelangelo of Global Climate Change (though I honestly have no idea why –doesn’t ‘climate’ always change, eventually?).

    After all, the man is an Artiste. It matters NOT what the people want, or even the College of Cardinals, if the Pope likes baby angels, and big burly men and women, who’s going to argue?

    (Hansen could be a lot like J. Edgar Hoover too. Hoover had so much dirt on the crowd in place above him that they were just too afraid to fire him.)

    Art and politics, what ya’ gonna’ do? Hopefully, Science will triumph over the darkness of the World, someday.

  16. Dear Willis,

    As usual a very sharp observation. But I think you can show the mess more adequately by making the 250 mile circles into 1200 mile circles. Then you can show that you can pick nice cherries from a long range of stations, all projecting into the void of the Arctic. And I eat my boot if this is not what actually happened. Starts looking for well digestible boots.

  17. Ric,

    Correlation – They trend in the same direction. When one goes up the other goes up. That doesn’t however mean that they are going to have the same magnitude or anything close.

  18. vboring (14:50:54) :

    Why not infill using the satellite data?

    I believe that, currently, climate monitoring satellites (UAH and RSS) don’t adequately cover the Arctic region above 80 deg North.

  19. Everywhere warm except places where people who might have any contact with reality actually live. It certainly seems as if they rigged the data so that they could declare that the earth warmed, even though exactly none of western civilization participated in the warming. Incredible audacity.

  20. Why not use the satellite data and junk the weather station data? What does the 8-year-old satellite show … AATSR … surely this should settle the matter? Or is there a problem with the data from that? I do find it difficult to read anything on their site! Definitely not layman friendly.

  21. vboring (14:50:54) :

    Why not infill using the satellite data?

    Why not just use satellite data?

  22. The issue here IMO is not the lack of stations or even if temperatures are above ‘normal’ or not. The issue is if the anomaly is 0.5°C above normal, or 10°C for that matter, and normal is -30°C, then it is still below F’n freezing. The ice doesn’t care if it is -30 or -29.5. The scary RED blob is all they care about presenting to the gullible masses.

  23. Fascinating as ever Willis. If this is NASA’s idea of accurate data, perhaps it is just as well that they cancelled the moon landing programme.

  24. How did GISS measure temperature in 1951-1980, when there weren’t any weather stations at the North Pole, and we didn’t have satellites?

  25. Good stuff.
    To bring it forward, I strongly agree with vboring.
    What satellite data are available and what are they saying?

  26. Temperature alone is a stupid metric anyway.

    BTW. by the GISTemp method. Aberdeen can influence the northern Med and Central Sweden.

    DaveE.

  27. This appears to be a case of “Willis doesn’t believe it, therefore it’s not true”. Hardly an adequate basis for evaluating the method. The sort of procedure used by GIStemp, using the correlation structure in the data to fill in the gaps, is not dissimilar to the geostatistical tools used by mining companies to estimate how much reserves there are from scattered data. Rather than dreaming up examples where you don’t think (but don’t bother testing) the method will work, there are several ways you could test the method. I know this would run the risk of finding out that Hansen had done something correct, but it would raise this post above the level of argument from personal incredulity. For example, you could try crossvalidating the data – omit a site and test how well its temperature anomaly can be reconstructed from the neighbouring sites using the GIStemp procedure. If the reconstructions have little skill, then you have a post worth writing.

  28. Hansen’s approach to science: “No data available? No problem, just build in some high temperatures …”

    In high school science classes we learned how important REAL DATA is in science. Hansen would fail those classes had he suggested doing what he has published in papers: the fabrication of data.

  29. What exactly does the DMI Polar Temperature site measure? They’ve got more than 50 years of data. To my very novice eye, it appears that 2010 to date is average. Can anyone explain why it ended 2009 at 245 K and began 2010 at about 252 K?

  30. ?? I don’t get it. Fig. 1 shows the anomaly, not trends. Isn’t the problem simply that the 1200 km “weighting” is not representative?

  31. Scott (15:20:17)

    Hmm, isn’t a common correlation coefficient R-squared? Obviously, this can’t go negative…just between zero and one. Are they using R or R-squared here? If R-squared, then the sentence about correlation going down to -1 needs to be changed.

    -Scott

    Take a look at the source document. They are using R, not R^2.

  32. Veronica (England) (15:27:05)

    I don’t think you have adequately dealt with inverse correlation. Your explanation sounded funky to me.

    Inverse correlation is not relevant to this analysis, since all correlations used are positive. My explanation was not supposed to be a full dissertation on correlation. If you’d like to clarify my one-sentence explanation of inverse correlation please do, but it is not necessary for the purposes of this discussion.

  33. Doug Badgero (15:29:06)

    Willis,

    Are those Alaska temps NASA GISS “value added” trends or are they raw data trends? If they are raw data trends, has Alaska really been on such a continuous warming trend?

    Raw data. Linear trends are very deceptive. See here, Update 5, for details

  34. Richard Telford (15:54:25) :

    “This appears to be a case of “Willis doesn’t believe it, therefore it’s not true”. Hardly an adequate basis for evaluating the method.”

    Here’s another possible basis for evaluating the method (at least showing that something is wrong): The Arctic sea ice continues to increase in area (see previous post), which seems to me to cast some serious doubt on all those bright red anomalies up there.

  35. Again, this shows that the notion of a global temperature is not credible.

    I think we should look at at the average of trends of all stations for which we have reliable (unadjusted or UHI-adjusted) data over time rather than look at the trend of a global temperature for which we have much inconsistent, adjusted or interpolated data over time.

  36. paul (15:14:10) :

    You’ve noticed the glaring errors in Hansen’s GISS anomaly maps too.
    He must have something in his code that runs hot anomalies over what should be natural gradations.
    Just another big fat error with GISS.

  37. Doug Badgero,
    That’s a helluva point… Do they homogenize the temps they extrapolate across the entire arctic?

  38. Anyone who has done some serious data analysis by regression methods in the industrial world will understand the problem posed by blind reliance on values of RSqd as an indicator of the practical value or worth of a correlation. RSqd is a measure of /linear/ correlation. If you use it to judge any aspect of the relationship between two variables you are implicitly accepting that this relationship is fundamentally linear. Unless you display the full data plot (that is the individual data pairs) on the plot, together with the fitted line – presumably computed by least squares – and also the confidence intervals for both the line and for any future individual observation, at an acceptable probability level, you will have no idea at all of the practical worth of the relationship.

    This can not be stated often enough. Enlightenment may come only by working through some numbers and producing appropriate graphical displays. I urge anyone who intends to comment on statistical correlation to take the trouble to go through the mechanics (i.e. arithmetic) of computing a linear correlation coefficient (and its square), and to study the prediction capability of the correlation.
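    For anyone who wants to follow that advice, here is the arithmetic written out in full (a sketch; the quadratic example is made up to show why r alone, without a plot, says little about the shape of a relationship):

```python
import math

def pearson_r(x, y):
    """Linear (Pearson) correlation coefficient, straight from the definition:
    r = sum((x-mx)(y-my)) / sqrt(sum((x-mx)^2) * sum((y-my)^2))."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# A curved (quadratic) relationship still yields a high *linear* r,
# even though a straight-line fit badly misrepresents the data.
x = [1, 2, 3, 4, 5, 6]
y = [v * v for v in x]
r = pearson_r(x, y)
print("r =", round(r, 3), " r squared =", round(r * r, 3))
```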

  39. Dr. Hansen is an impartial scientist who thinks people who don’t believe in his apocalyptic visions of the future should be put on trial for “high crimes against humanity”
    He doesn’t care what you think, just the CEO’s of large fossil fuel energy companies that are actively fighting the science. [snip]

    Let me know when you confirm that CRU just fills in the missing data with the planetary average anomaly – clearly an inferior approach. Also, that 1987 paper I showed you shows how they use multiple stations that are within 1200 km to get a weighted, best guesstimate. If there are six stations in the Arctic with temperature anomalies ranging from 0.1 °C to 0.15 °C that month, a guesstimate of 0.125 °C for an area 1000 km away, with no direct measurements, is better than a 0.02 °C temperature anomaly which might be the planetary average that month.

    No data available? No problem, just build in some high temperatures …
    If all the closest stations had high temperature anomalies, that’s a better guesstimate than the average of the entire planet. See:

    http://www.cru.uea.ac.uk/cru/data/temperature/

    @vboring (14:50:54) :
    Why not infill using the satellite data?

    Exactly, that’s what GISS does for ocean data now. That 1987 paper was describing how they dealt with the temperature dataset starting in 1880 that had very sparse coverage of some parts of the planet for many decades.

    GISTEMP uses NOAA data for ocean temperatures, see:

    http://data.giss.nasa.gov/gistemp/sources/gistemp.html

    where they explicitly mention using:
    http://ftp.emc.ncep.noaa.gov cmb/sst/oimonth_v2 Reynolds 11/1981-present

    Here is the background info on how they use NOAA satellite data for ocean surface temperatures using a complicated method called “optimum interpolation”, cross-checked with in situ measurements by ships and buoys, and how they calculate surface temperatures of ocean covered by sea ice: Happy reading.

    http://www.emc.ncep.noaa.gov/research/cmb/sst_analysis/#_cch2_1007145286

    http://www.ncdc.noaa.gov/oa/climate/research/sst/oi-daily.php

    http://www.ncdc.noaa.gov/oa/climate/research/sst/papers/whats-new-v2.pdf

    ftp://ftp.emc.ncep.noaa.gov/cmb/sst/papers/oiv2pap/oiv2.pdf

    Satellites only cover up to 82.5 °N (given their orbital parameters), so there is still a small hole at the top of the world that doesn’t have much coverage, save the occasional Russian icebreaker in summer. Hmm, how should we interpolate this tiny patch of ocean ?
    How about ignore all the closest measurements in the high Arctic, and give it the planetary average ?

  40. Richard Telford (15:54:25)

    Richard, you say inter alia:

    This appears to be a case of “Willis doesn’t believe it, therefore it’s not true”.

    It is not a question of “belief”. I have given examples of both pseudo-temps and real temperatures that clearly show that it doesn’t work either in theory or in the real world. It’s called science.

    The sort of procedure used by GIStemp, using the correlation structure in the data to fill in the gaps, is not dissimilar to the geostatistical tools used by mining companies to estimate how much reserves there are from scattered data.

    This has nothing to do with how mining companies infill missing data. They generally use kriging, which is very, very different both conceptually and in practice.

    Rather than dreaming up examples where you don’t think (but don’t bother testing) the method will work, there are several ways you could test the method. I know this would run the risk of finding out that Hansen had done something correct, but it would raise this post above the level of argument from personal incredulity. For example, you could try crossvalidating the data – omit a site and test how well its temperature anomaly can be reconstructed from the neighbouring sites using the GIStemp procedure.

    I did that. Didn’t you read the post? How well do you think that you could reconstruct the temperature trend of say Fairbanks U using the other stations shown. The average trend of those stations is 0.31°C/decade. The trend of the nearest station (Fairbanks, only 3 km away, correlation with Fairbanks U = 0.75) is 0.44. The trend of Fairbanks U is 0.58 … so if you think you can reconstruct Fairbanks U. from the other stations, good luck.
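    For what it’s worth, the leave-one-out check Richard proposes is easy to sketch. The station trends below are hypothetical, chosen only to span the roughly 1–3 °C-per-50-years range shown in Figure 6; each station’s trend is “reconstructed” as the plain average of the others (GIStemp’s actual weighting is distance-based, so this is a simplification):

```python
def leave_one_out_errors(trends):
    """For each station, predict its trend as the mean of all the others,
    and return the list of prediction errors (actual minus predicted)."""
    errs = []
    for i, t in enumerate(trends):
        others = trends[:i] + trends[i + 1:]
        pred = sum(others) / len(others)
        errs.append(t - pred)
    return errs

# Hypothetical 50-year trends spanning the ~1 to ~3 degree range of Fig. 6.
trends = [1.0, 1.3, 1.6, 2.0, 2.4, 2.9]
print([round(e, 2) for e in leave_one_out_errors(trends)])
```

    With trends this spread out, the reconstruction errors run to a degree or more over fifty years — on the order of the trends themselves.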

  41. It has always been a mystery to me why the Goddard Institute for Space Studies builds a “premier” temperature data set that eschews data from space satellites.

    Instead, they use data taken from 4 feet off the asphalt, extrapolate it (i.e. fake it) thousands of miles away from any ground stations, and then massage it (i.e. fake it) so much that it doesn’t really matter what the original data was that they started from.

    Then they use it to determine the fate of the world.

    Can we at least agree that any organization with “Space Studies” in its name should not be responsible for a ground temperature data record? Where are James Hansen’s rockets anyway? Poor Robert Goddard must be spinning.

    Maybe the good Dr. Hansen should change his vocation: “Professor Marvel, Acclaimed by the Crown Heads of Europe — Let Him Read Your Past, Present, and Future In His Crystal Ball — Also Juggling and Sleight of Hand”

    Oh wait, that IS his vocation already.

    (Apologies to the shade of Frank Morgan…)

  42. Wanna know a secret? Governments don’t really give a rat’s fat hairy behind about CO2, AGW or the rest of that bs. If they did, they wouldn’t play games like this: http://www.cnn.com/2010/WORLD/europe/03/25/russia.uk.intercepts/index.html?hpt=C1

    How much CO2 do two TU160’s, and two Tornado’s emit whilst chasing each other around the sky for a couple hours? And why is it that the USA seems to bear the brunt of criticism for AGW, etc. ? Let’s get real, and understand this AGW BS is an entertaining sideshow for public consumption, and bears no relation to what’s really going on. Same old song, same old dance.

    “Britain’s Ministry of Defence released images it said were taken earlier this month of two Russian Tu-160 bombers — known as Blackjacks by NATO forces — as they entered UK airspace near the Outer Hebrides islands off Scotland’s northwest coast.

    It said the March 10 incident, which resulted in crystal clear images of the planes against clear blue skies and a dramatic sunset, was one of many intercepts carried out by British Royal Air Force crews in just over 12 months.

    “This is not an unusual incident, and many people may be surprised to know that our crews have successfully scrambled to intercept Russian aircraft on more than 20 occasions since the start of 2009,” Wing Cdr. Mark Gorringe, of the RAF’s 111 Squadron, said in a statement.

    The RAF said two of its Tornado fighter jets from its base at Leuchars, on Scotland’s east coast, were dispatched to tail the Russian Blackjacks as they approached the western Isle of Lewis.”

  43. An experiment proposal — Question why can’t an experiment be run with land based stations, say choose stations on a 750 mile circle, and show the correlation is proved? Wouldn’t that be something like doing real science by putting forth a theory and then running an experiment to verify the theory? USA stations would seem ideal. Maybe even choose multiple experiments with multiple ‘ring’ choices to see if they match.

    Wouldn’t the Arctic experience the same weather discrepancies that a “chosen ring” of normal land based stations would.

    Seems like a lot of the ground observation datasets exhibit a large amount of wishful thinking and little experimental science. And don’t we have huge computers which could do all this data computation/reduction in a flash, assuming you hire other than CRU type people to do the software. In fact high end PCs should give it a good run for the money in accomplishing the tasks.

  44. Great article Willis …
    That arctic hotspot is quite impressive!

    But wait a second, aren’t all the global temperature analyses done based on 5×5 grid cells?
    And isn’t it true that the farther from the equator one goes, the smaller the physical area of each grid cell becomes?
    And then, if you fill in some high temperature numbers in some high latitude cells, those numbers will be over-represented in the subsequent summation process?

    Shouldn’t each grid cell be weighted by latitude?
    Using the actual width of the middle of the cell versus the width at the equator would be a reasonable approximation.

    Perhaps this is being done somewhere in the code, but I have never heard mention of it.

  45. “This appears to be a case of “Willis doesn’t believe it, therefore it’s not true”. Hardly an adequate basis for evaluating the method.” – Richard Telford (15:54:25).

    What? Nonsense.

    Doing mining with “guesses”, yes, educated guesses, is one thing. Doing climate science using the same “guessing” techniques is simply fabricating data that just was not there. This is especially the case when you go on to make extreme and wild claims of doom and gloom and, on the strength of those wild claims, cause massive amounts of public monies to be spent supporting them. It’s fraud when you do this knowing that you’re doing it. It’s fraud when you attempt to block other scientists’ work that refutes yours.

    The problem with the climate data bases is that they have so many fudge factors and so much missing data that it’s not possible to use them for much of anything with any accuracy. You simply can’t use one temperature monitoring station for a 1200 km radius. To do so is like saying that you can take the temperature of the entire planet with just one thermometer. It’s junk science to even attempt to do so.

    Dr. James Hansen, Junk Scientist who Fabricates Data out of thin air.

    Science needs higher standards than this type of slop that Dr. James Hansen peddles. Much higher standards.

  46. Doug Badgero (15:29:06)

    “Are those Alaska temps NASA GISS “value added” trends or are they raw data trends? If they are raw data trends, has Alaska really been on such a continuous warming trend?”

    In short, no.
    There was a step change of 5 degrees of warming between 1974 and 1979. The rest of the world at the time was worried about the ‘coming ice age’. Strangely enough, it peaked in 2004 and has dropped back to the 60-year mean.

    The analysis was done by the Alaska Climate Research Center:

    http://climate.gi.alaska.edu/ClimTrends/Change/TempChange.html

    “It can be seen that there are large variations from year to year and the 5-year moving average demonstrates large increase in 1976. The period 1949 to 1975 was substantially colder than the period from 1977 to 2009, however since 1977 little additional warming has occurred in Alaska with the exception of Barrow and a few other locations. The stepwise shift appearing in the temperature data in 1976 corresponds to a phase shift of the Pacific Decadal Oscillation from a negative phase to a positive phase.”

  47. Satellite data only goes to 85 degrees or so, missing most of the permanent Arctic sea ice. It’s the surface temp over the permanent ice cover that’s of interest. I wouldn’t think it matters how good the average correlation or trend agreement is between stations 1200 kilometers apart when you have such radical surface changes across the divide, going from land to open water to seasonal ice to permanent ice. Plus there’s an Arctic ozone hole to consider as well. It’s not as big as the Antarctic hole, but it’s there, and the existing stations aren’t under it. That’s another variable that isn’t reflected in correlations between distant stations.

    Hansen must be aware of the above. Of course he’ll just say it doesn’t matter because we know what the trend is because arctic sea ice extent is on a downward trend. But that doesn’t indict CO2. I think Pielke is right on the money in this article:

    http://wattsupwiththat.com/2009/08/21/soot-and-the-arctic-ice-%E2%80%93-a-win-win-policy-based-on-chinese-coal-fired-power-plants%E2%80%9D/

    No source of black soot can make it to the south pole which handily explains why the antarctic interior isn’t warming.

    The good part of soot for the anti-human crowd who think people are destroying the planet is that most of the soot is anthropogenic. The bad part for them is that the United States doesn’t produce much of it, since the Clean Air Act of 1963 required drastic reductions: particulate filters on industrial smokestacks, clean-burning diesel, no slash-and-burn agriculture, and so forth. So someone else has to get the blame, and in all fairness that means someone else is responsible for the cost of amelioration.

    Of course I’m still of the opinion that no one has a cost benefit analysis that clearly shows we should spend one thin dime on amelioration. Polar bear population is actually increasing. A northwest passage is something to be desired. What exactly is the downside of less arctic sea ice?

  48. At this point, if Dr. James Hansen were a reputable scientist, he would say: oh, that’s interesting… you are right, I bow down and salute you for refuting a key method that underpins the entire GISS dataset. We will immediately cease using this technique, review all past datasets fabricated by NASA GISS or derived from our work, and alert all scientists who have written papers based upon our work, warning them to make corrections to their science accordingly.

    I’m listening… James?

  49. GISS has the center of their most poleward sub-boxes at 84.3N, and they are 9 degrees wide, so the sub-box bottoms are at 81.9N. Since any station within 1200 kilometers of 84.3N is used in calculating the value for those sub-boxes, stations as far south as 73.6N can have an effect on the polar sub-boxes. This results in the effect being propagated 1800 kilometers to the pole. The effect is minimal at that distance if other closer stations are available, but there are occasions when a polar box will have only one qualifying station, which may be at extreme distance. In that case its full value is used. Isachsen NW at 78.78N early in the century comes to mind as an example.

    Side note:
    GISS also has a rule which states that if a station is within a sub-box, it is used to calculate that sub-box anomaly. This is targeted at one station: Amundsen, at the South Pole. That is why you always see total coverage from the pole to 82S when the smoothing radius is set to 250 kilometers. Amundsen is actually 635 kilometers from the sub-box centers and otherwise wouldn’t be shown at all.
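    As a rough sanity check on that geometry (a toy Python sketch, assuming a spherical Earth so that one degree of latitude is about 111.2 km):

```python
# Verify the sub-box reach described above: a 1200 km radius around
# sub-box centers at 84.3N, and the distances quoted for Amundsen.
KM_PER_DEG = 40030.0 / 360.0   # Earth circumference / 360 degrees

center_lat = 84.3              # most poleward sub-box centers
radius_km = 1200.0             # GISS smoothing radius

# Southernmost station latitude that can influence those sub-boxes:
southern_reach = center_lat - radius_km / KM_PER_DEG
print(round(southern_reach, 1))    # 73.5 (matching the 73.6N figure)

# Distance from that latitude to the pole -- the ~1800 km propagation:
to_pole_km = (90.0 - southern_reach) * KM_PER_DEG
print(round(to_pole_km))           # 1834

# Amundsen at the pole vs. the sub-box centers at 84.3S:
print(round((90.0 - center_lat) * KM_PER_DEG))   # 634 (vs. the quoted 635)
```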

  50. JAE (16:06:04)

    ?? I don’t get it. Fig. 1 shows the anomaly, not trends. Isn’t the problem simply that the 1200 km “weighting” is not representative?

    Jae, you are correct. However, at the moment, the anomaly and the trends are quite similar, and I couldn’t find a GISS map of the trends. Here’s the closest I could find, zonal trends by month. I didn’t know if most folks would be able to interpret it, so I showed the anomaly map.

    As you can see, they suffer from the same problem, which as you point out is the incorrect application of the 1200 km extrapolation.

  51. Just as I suspected.

    Global Warming a Boon for Greenland’s Farmers

    Same thing my buddies up north have been telling me about global warming -the more the better.

    Quite frankly, the fact that the high northern latitudes are warming faster than anywhere else is, I believe, more like a godsend, coming at just the right time in just the right place to boost agricultural output in order to feed a growing population. Maybe I should invest in land in Siberia in anticipation of my grandchildren being able to lease it out for agriculture.

  52. Huh. The post is about GISS, GISTemp is used, yet E.M. Smith has not yet expertly commented, nor has carrot eater proclaimed the divine infallibility of Hansen’s work as confirmed by the genius of St. Tamino which incontrovertibly shows that E.M.S. is an incompetent ignorant buffoon (to use the language I’ve seen used on Tamino’s site).

    Thus this thread is too young. I’ll check back later.

  53. I have a question about the goal of temperature trend interpolation. Interpolation and homogenization make for pretty pictures, but do they tell us anything we need to know? Instead, it seems to me that any significant “global” temperature trends would readily appear in statistical correlations of thermometers as long as the data of the two thermometers can be compared over similar time spans. This should have the added benefit of not throwing away useful discrete environmental information for individual thermometers that reveals correlations to local changes such as urbanization, agricultural changes, etc.

  54. Regardless of the validity of their extrapolations, GISS more or less correctly shows much of the Arctic and most of the Antarctic cooling over the last 80 years in their trend maps. The red area over Russia is probably incorrect.

    The areas around the Pacific warmed a lot during the PDO shift in 1977, and this is reflected by red colors in Canada and Alaska.

    The big problem is with the GISS baseline period, which was largely a period of unusual cold, so the anomaly maps always look warm.

  55. [snip - sorry Willis, I'm not going to let this turn into a tobacco war started by Anu, his comments have been snipped as well - Anthony]

  56. Anu (16:32:13)

    @vboring (14:50:54) :

    Why not infill using the satellite data?

    Exactly, that’s what GISS does for ocean data now. That 1987 paper was describing how they dealt with the temperature dataset starting in 1880 that had very sparse coverage of some parts of the planet for many decades.

    We’ve been over this several times. GISS does not do this now for the Arctic Ocean, for two reasons: the satellites don’t go that far north, and the satellites don’t measure ice temperatures.

    Additionally, the GISS temperature trends are given from a base period of 1951-1980. If you have satellite temperature measurements of the arctic ocean for that period, please cite them …

  57. Willis,

    Thanks. I see now that the linear trend lines were just to make a point. The underlying data is not linear.

  58. Anu (16:32:13)

    Satellites only cover up to 82.5 °N (given their orbital parameters), so there is still a small hole at the top of the world that doesn’t have much coverage, save the occasional Russian icebreaker in summer. Hmm, how should we interpolate this tiny patch of ocean ?
    How about ignore all the closest measurements in the high Arctic, and give it the planetary average ?

    How about we just leave it empty, along with all the other parts of the planet that don’t have temperature stations, since we don’t know what the temperature is?

    How about you notice that since much of the earth has only occasional widely scattered temperature stations, we don’t know the “planetary average”?

    How about you deal with the fact that despite high correlations, nearby stations can have very different trends, so using one station to extrapolate out 1200 km can give a very wrong answer?

    These questions and more …

  59. Look at all the grey in the maps over ocean areas. GREY???? Grey isn’t on the scale. What does grey mean? I can only suppose that it is supposed to represent areas of insufficient data.

    But … but …. you see the issue don’t you.

    Why has the Arctic been filled in while these other ocean areas of insufficient data have not. Wouldn’t it have been more honest to leave the arctic grey as well?

  60. Anu (16:32:13)

    Let me know when you confirm that CRU just fills in the missing data with the planetary average anomaly – clearly an inferior approach.

    I don’t recall saying that they did. Take a look at Figure 1. Do you notice the areas in gray? Those are areas of missing data. They are not filled with anything, not the planetary average, nothing. CRU does the same, as far as I know, when data is missing.

    Or as I quoted Trenberth saying above,

    In NASA [GISTEMP] they extrapolate and build in the high temperatures in the Arctic. In the other records they do not. They use only the data available and the rest is missing.

    Missing. Not “average global anomaly”. Missing.

  61. subtlety.leads.to.confusion (16:44:35) :

    [quote]But wait a second, aren’t all the global temperature analyses done based on 5×5 grid cells?
    And isn’t it true that the farther from the equator one goes, the smaller the physical area of each grid cell becomes?
    And then, if you fill in some high temperature numbers in some high latitude cells, those numbers will be over-represented in the subsequent summation process?

    Shouldn’t each grid cell be weighted by latitude[/quote]

    Not to mention the huge scaling bias using such a map to present such ‘results.’

  62. subtlety.leads.to.confusion (16:44:35)

    Great article Willis …
    That arctic hotspot is quite impressive!

    But wait a second, aren’t all the global temperature analyses done based on 5×5 grid cells?
    And isn’t it true that the farther from the equator one goes, the smaller the physical area of each grid cell becomes?
    And then, if you fill in some high temperature numbers in some high latitude cells, those numbers will be over-represented in the subsequent summation process?

    Shouldn’t each grid cell be weighted by latitude?
    Using the actual width of the middle of the cell versus the width at the equator would be a reasonable approximation.

    Perhaps this is being done somewhere in the code, but I have never heard mention of it.

    Good questions. Different teams use different gridcells. GISS uses equal-area gridcells; GHCN and CRU use 5×5 gridcells. The latter are area-averaged as you say, using the cosine of the mid-latitude of the cell.
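    A minimal sketch of that cosine weighting (illustrative Python only, not the actual GHCN/CRU code):

```python
import math

def cell_weight(lat_south, cell_size=5.0):
    """Relative area weight for a lat/lon cell spanning
    lat_south..lat_south+cell_size, proportional to the cosine
    of the cell's mid-latitude."""
    mid_lat = lat_south + cell_size / 2.0
    return math.cos(math.radians(mid_lat))

# A cell centered at 82.5N carries only ~13% of the weight of one
# centered on the equator, so high-latitude infill is down-weighted:
print(round(cell_weight(80.0) / cell_weight(-2.5), 3))   # 0.131
```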

  63. Scott (15:20:17)

    R^2 is the coefficient of determination, and roughly translates as the amount of variation explained by the model (0 to 1). So using data with an R of 0.5 means using data where one variable (the temperature at one station) can explain at most 0.25, or 25%, of the other variable’s (the other station’s) variation. While this may be useful somehow, it, as Willis explains, says nothing about the trend differences, the slope or intercept of the relationship, etc.

    The best way to eliminate this in an experiment is to MEASURE more stuff!

  64. Hmmm…
    The series 0,1,2,3 and 0,5,10,15 are very well correlated indeed, with correlation coefficient r = 1. Yet the second trend is five times larger than the first one, isn’t it? That’s pretty basic stuff… and a pretty basic flaw in their reasoning.
    So what happened to the peer review process? How come no one noticed it before?

  65. Willis: The following is a comparison graph of the GISTEMP (LST+SST) annual zonal mean trends from 1880 to 2009 with 250km and 1200km smoothing. (For some reason, EXCEL didn’t like the data and refused to plot both datasets, so I had to rely on another spreadsheet, one that I’m not familiar with.) The GISTEMP zonal mean trend data shows that the trends correlate well between 55S and 55N. Poleward of those latitudes, the 250km and 1200km smoothed data diverge. In fact, at high latitudes, the zonal mean data with the 1200km smoothing can rise with latitude while the zonal mean data with the 250km smoothing drops. Interesting effect.

    And here’s a gif animation of the two corresponding maps:

  66. Very informative, Willis – loved your gentle instruction on the meaning of correlation and love your posts as always. Just one thing: I got hung up on the caption for figure 3 that describes the Arctic Ocean as being roughly three times the size of Alaska, so I checked out the actual areas: the Arctic Ocean is 14.056 million square km vs Alaska’s area of 1.718 million square km, so the Arctic Ocean is more than eight times the size of Alaska. Of course this strengthens your argument regarding the serious problem with the extrapolation of a few data points.

  67. From the arctic research center at the University of Alaska. No ‘slow steady warming’…a huge step in 1975.

    [image]http://climate.gi.alaska.edu/ClimTrends/Change/graphics/temp_dep49-09_F_sm.jpg[/image]

  68. Hmm, the thing that gets me with this is that you might be able to show strong correlation and trending now between two points on the globe 1200 km apart, but this is not guaranteed to be true into the future. It could also just be a pure artifact of sampling and measurement errors.

    I also had a look through the GISS code around this 1200 km circle stuff and found that they also pull in stations within a 1200 km ‘square’ lat/long bounding box containing the 1200 km circle as projected onto the globe. Stations in this in-box/ex-circle space are then given the same weighting as the furthest station within the circle (if memory serves)… sounds like a fudge to make up for a lack of stations in certain areas.

    Someone should do a recode to show, on the globe, confidence in terms of mean true distance of contributing stations per cell. That might shed some useful light on where things are going right off the rails and achieving low earth orbit.

  69. OT – congrats!

    “A small group of dedicated people coming from a diverse range of positions and perspectives but working together as a loose federation held together by shared values and beliefs succeeded in accomplishing the most impressive PR coup of the 21st century.”

    http://www.profero.com/unsimplify/index.html

    [Profero]

    “So how did this group of bloggers succeed in bringing the climate establishment to its knees…

    http://curry.eas.gatech.edu/climate/towards_rebuilding_trust.html

    [Judith Curry]

    Quick sip of Champagne and back to work!

    These fools thought they would push through a get rich quick scheme / pay the mortgage in quick time etc., but it’s unravelling fast now.

  70. I use Rsq all the time for seeking correlation (but of course I am a “hard” scientist by profession, and not a climate “scientist”). We usually dismiss as unreliable any Rsq less than 0.8. This means an R LESS THAN 0.9 is Bogus, man!

    For Hansen et al. – Solution: don’t waste our time with inter- or extrapolations. Just make more measurements, stupid (McubedS) or go back to sleep!

  71. “So how did this group of bloggers succeed in bringing the climate establishment to its knees…”

    Answer: by persistently presenting contrary evidence despite MASSIVE public funding of their detractors. The ‘truth’ always wins out in the end.

    If the facts change I’ll change my mind. Until then I remain a former believer and now a sceptic.

  72. Dr. Hansen is an impartial scientist

    On what planet would this be believed?

  73. @Dave Springer (16:46:32) :
    I think Pielke is right on the money in this article:

    http://wattsupwiththat.com/2009/08/21/soot-and-the-arctic-ice-%E2%80%93-a-win-win-policy-based-on-chinese-coal-fired-power-plants%E2%80%9D/

    No source of black soot can make it to the south pole which handily explains why the antarctic interior isn’t warming.

    This is an interesting consideration when you realize that sea ice “strength” depends greatly on chemical composition. In fact, the salinity tends to turn sea ice into a less-than-solid-and-easily-broken-up mush as compared to ice on frozen freshwater sources. I’m beginning to wonder if any chemical analysis of typical sea ice has been done to rule out possible changes in chemistry from pollution sources as a contributing cause of the arctic sea ice decline.

    Refer to: http://www.mpimet.mpg.de/fileadmin/staff/notzdirk/Notz_2009_JGR.pdf

  74. Frankly, the conclusion seems obvious: given the exceedingly sparse temperature data sets for the Arctic, the construction of forecasts based on this data can only be regarded (at best) as being ‘unreliable’.

    The more reasonable position to take would be: “on the basis of data that we currently have, we do not know what the recent temperature trends in the Arctic have been. Nor can we say much about what they will be.”

    In the absence of real data to examine, argumentation must rely upon anecdote, hubris, and blind faith.

  75. …..Figure 2. Temperature stations around 80° north. Circles around the stations are 250 km (~ 150 miles) in diameter. Note that the circle at 80°N is about 1200 km in radius, the size out to which Hansen says we can extrapolate temperature trends……

    ……Can we really assume that a single station could be representative of such a large area?……

    ……Figure 3. Temperature stations around the Arctic Ocean. Circles around the stations are 250 km (~ 150 miles) in diameter. Note that the area of the Arctic Ocean is about three times the area of the state of Alaska….

    …..they extrapolate and build in the high temperatures in the Arctic……

    ………………………………………………………………………………………………………..

    This is unfair and everyone should know this is being done.

    If James Hansen feels he is so correct to do this with the data then he himself should be proud to tell the world about his method.

    Everyone should be made aware that ‘global warming’ is being manufactured! And manufactured by someone working at ‘NASA’ no less!

  76. I’m just beginning to read the comments, so the following matter may already have been covered, but….. In the following sentence, I believe that the average should be 0.75, not .075.

    “Glad you asked … here are nineteen fifty-year long temperature datasets from Alaska. All of them have a correlation with Anchorage greater than 0.5 (max 0.94, min 0.51, avg .075)”

    IanM

  77. Willis:
    Others may have already got this but the way I see what you have shown is as follows:
    Let’s assume that X1 is the temperature at station 1 and X2 is the temperature at station 2 at the same time. Let Y1 be the unknown temperature at a location 1200 km from station 1, and Y2 the unknown temperature at a location 1200 km from station 2.
    Let’s grant that all possible pairs are highly correlated, i.e., Corr(X1, X2), Corr(X1, Y1), Corr(X2, Y2), Corr(X1, Y2), Corr(X2, Y1) and Corr(Y1, Y2) are all greater than 0.9.
    Since we know the actual temperatures at X1 and X2, we can then say that
    X1 = bX2, where b is the empirically determined coefficient that allows us to determine the temperature at X1 for any temperature at X2, and is equivalent to the slope of the line between pairs of station temperatures.
    In other words, if a temperature at X1 was missing, we could accurately estimate it if we knew X2. All this is great. However, when we come to write the equations for Y1 and Y2, we can say Y1 = cX1 and Y2 = dX2. BUT in order to fix the anomaly temperatures for Y1 and Y2 you must know the values of c and d. The trend lines you show suggest that there is no reason to believe that b = c = d, and that somehow Hansen must come up with values for c and d in order to actually fix the temperatures at Y1 and Y2.
    You have neatly reminded everybody that Hansen et al. have no basis for determining the actual temperatures at Y1 and Y2, by demonstrating that “b” is not a constant across pairs of Arctic stations. A high correlation only tells you the level of accuracy you can have in predicting Y from X, not the actual level of Y for a change in X.
    For Hansen to infill he must have some basis for determining “c” and “d”, which as you show could vary widely. I see no way that he can do this.

  78. Anu (16:32:13) :

    Anu,

    I see you are fulfilling the role of the mindless advocate by playing the cigarette smoke card.

    You have pigeonholed yourself.

    Will you soon be playing the holocaust denial card? Or did you do it already and I overlooked it? If you have then I offer my apologies for short-changing you.

  79. “extrapolating trends out to 1200 km from a given temperature station”

    and

    “The big problem is with the GISS baseline period, which was largely a period of unusual cold, so the anomaly maps always look warm.”

    This is one of many instances of questionable data handling that makes many people skeptical of the theory of AGW. As a prior commenter said, to the casual observer, all of that red infill on the map creates the strong impression of warmth and warming. “Look at all of that red at the Arctic! They must be right about the planet warming.”

    The other item that contributes to scepticism is an even bigger problem: the theory has already been disproven. The theory of AGW had predicted that the global temperature should have risen by 3.8 degrees F by now; it has only risen by 1.4 degrees F.

    http://www.sciencedaily.com/releases/2010/01/100119112050.htm

  80. Steve Goddard (17:12:47) :

    The big problem is with the GISS baseline period, which was largely a period of unusual cold, so the anomaly maps always look warm.

    ………………………………………………………………………………………………….

    And of course we’ve all been told the ‘real climatologists’ around the world only use anomaly.

    Or did they mean ‘RealClimate’ologists?

  81. I am no expert on statistics – it is too many years since I was using t and F tests for significance. However, I thought that a correlation was only significant if greater than 0.9 (or less than -0.9). A correlation factor below 0.66 is not significant enough to determine cause and effect and should not be used for infilling or extrapolating data. In this case GISS is extrapolating temperatures measured in warmer areas to a colder area without correcting for the direction of the latitude difference, which makes it even more wrong. It is surprising that some of the more honest amongst the climate pseudo-scientists have not questioned the practice. Maybe they are frightened of tackling Hansen and his ilk. The data in Hansen’s paper shows large temperature deviations in the higher latitudes which overwhelm any trend.
    Good work Willis

  82. Funny thing is that I was doing some cell imaging today; I got a correlation coefficient of 0.89 for mitochondrial function and reactive oxygen species, looked at the scatter, and so I am going to look at the bimodal population statistics. A correlation coefficient of <0.75 generally means that you aren't looking at a direct relationship. I would rather like to see a plot of correlation coefficient vs distance for all of those stations shown in the figure that are within 1,200 km of each other. Draw a circle around each station and compute the cc for each of the stations (N, S, E, W) that are within 1,200 km, then remove it and move to the next one.

  83. What does ‘The Reference Frame’ think about p = 0.05 ?
    A leader is needed to bring all the ‘balls’ together; politically neutral but scientifically literate. Someone one can trust.
    Step forward Prof. Richard Lindzen
    At the very least, demand (somehow) a debate on MS tv.
    A rally of 10,000 voices at the Albert Hall, London.
    Serious damage is about to be inflicted on US and European economies.
    Time is short.
    Come on all you pro. bloggers on both sides of the pond.
    Communicate and lead. (I feel like John the Baptist).
    It reads a bit extreme but I’m quite serious.

  84. Willis Eschenbach (17:18:35) :

    Anu (16:32:13)

    They were fined for not revealing studies that they had done that confirmed the science, and lying about it … so I fear your example is totally irrelevant.

    Interestingly, Anu’s arguments are therefore not irrelevant. They are, in fact extremely relevant. Just not in the way Anu intended, of course: “It’s a travesty” … hide the decline … rather delete the data than … etc, etc, etc.”

  85. Why use the words “Temperature co-relation” when there are perfectly good four letter Anglo-Saxon words that better describe it.

  86. I’m still not sure I understand what GISS actually did. Where did they establish their r’s? Surely they didn’t get their correlation coefficients using data anywhere but within the Arctic circle? A distance of 1200 km is huge, in terms of climate variability, especially if you’re extrapolating. Worse, if GISS is extrapolating pole-wards and using linear fits, they’re really on thin ice. The pole is a geographic and climatic discontinuity. There’s little reason to expect linearity unless Hansen derived his fit using actual data at the pole. If not, then I’d say Dr. Hansen is just guessing and doesn’t know his r’s.

  87. Eschenbach: “Dr. Hansen is an impartial scientist”

    It’s always Marcia Marcia: “On what planet would this be believed?”

    You missed the sarcasm. Willis meant it as such….

    Chris
    Norfolk, VA, USA

  88. Cement a friend (18:51:18) : “…A correlation factor below 0.66 is not significant to determine cause and effect…”

    NO correlation factor, even 1.0000, is sufficient to determine cause and effect. Correlation is not causation.

  89. My basic stats book said to be wary of any linear correlations less than 0.8. Isn’t 0.5 basically a coin-toss as to the degree of association?

  90. Willis:
    You need to show pairs of Arctic stations where their temperatures create a regression line with very different slopes. In reality so long as the correlation coefficients are greater than 0.7, Hansen can pretty much do his extrapolation if he can assume the slope of the regression line. However, the slopes of the regression lines must essentially be the same in order to actually generate the temperature anomalies for the uncharted space. If they are different then I have no idea how Hansen can do his extrapolation. Your charts suggest (though they do not prove) that there is likely to be considerable differences in the slopes of these curves. For example, pairing an open water coastal location with ice bound coastal location should produce a very different slope than two open water or two ice bound locations.

  91. Why not infill using the satellite data?

    Satellites only look “across” (they are in pole-to-pole orbit). Therefore there is no direct lookdown capability (for an unknown reason) and they cannot observe either north or south pole.

    I also understand there are some issues with MW reflection off ice that makes life difficult.

  92. The sort of procedure used by GIStemp, using the correlation structure in the data to fill in the gaps, is not dissimilar to the geostatistical tools used by mining companies to estimate how much reserves there are from scattered data.

    Well, that explains why mining companies are so abominably bad at estimating reserves. My own method turns out to be far more accurate:

    1.) Take the given estimate. Multiply it by ten.
    2.) Be sure that step 1 is an underestimate. Probably a gross underestimate.

    Works for everything from tin to oil. (But for projected warming, divide by 10 rather than multiply and assume an overestimate.)

  93. @ Andrew P.
    The issue is not over measurement of today’s temperatures. Today’s temperatures are easy – not only do we have the buoys you mention, but we also have satellite data. We know the absolute temperature in the polar region quite accurately.

    The issue is over the measurement of the baseline, which is subtracted from today’s accurate temperatures to determine the `anomaly’ (difference from normal). The problem is that the baseline is set using historical temperature data dating back to the period before we had accurate data from satellites or buoys. So that baseline data is in fact highly inaccurate and constructed using a lot of questionable assumptions, particularly in a region like the Arctic where there isn’t a lot of historical data to go by.

    We simply don’t know what the `normal’ temperature (if such a thing exists) for parts of the world should be. At best we have a `guesstimate’. And many here feel that every effort has been made in the construction of such guesstimates to come up with a number as cold as possible to make it look like the world is currently unusually hot.

    Anomalies are differences. The accuracy of a difference is no greater than that of its least accurate component. Subtracting inaccurate historical guesstimates from highly accurate modern satellite data makes the resulting anomalies inaccurate.
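    That error-propagation point can be put in numbers. For independent errors, the uncertainty of a difference adds in quadrature, so the anomaly inherits essentially all of the baseline’s uncertainty. A minimal sketch – the sigma values below are invented for illustration, not actual instrument specifications:

```python
import math

# Illustrative (invented) one-sigma uncertainties, in degrees C
sigma_modern = 0.1    # modern satellite measurement: quite precise
sigma_baseline = 1.5  # historical baseline: largely reconstructed guesswork

# anomaly = modern - baseline; for independent errors the
# uncertainties combine in quadrature:
sigma_anomaly = math.sqrt(sigma_modern**2 + sigma_baseline**2)

print(round(sigma_anomaly, 3))  # ~1.503: dominated by the baseline error
```

    However precise the modern term gets, the anomaly can never be more precise than the baseline it is measured against.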

  94. Keith G (18:32:14) :
    Frankly, the conclusion seems obvious: given the exceedingly sparse temperature data sets for the Arctic, the construction of forecasts based on this data can only be regarded (at best) as being ‘unreliable’.

    The more reasonable position to take would be: “on the basis of data that we currently have, we do not know what the recent temperature trends in the Arctic have been. Nor can we say much about what they will be.”

    In that case, where do you suppose the data for the DMI Arctic temperatures linked to on this site come from? Willis shows 3 stations on the edge of the 80ºN parallel.

  95. [snip - we won't be turning this thread into a discussion of tobacco company issues and arguments - A]

  96. This whole issue raises several points, not the least of which is, again, the near uselessness of a global average temperature and the problems with expressing it as an anomaly number. It is grossly dependent on, and subject to manipulation of, the base period, and is not really a useful metric anyway. There are too many widely different temperature distributions that can come up with the same average, even when done in smaller spatial cells.

    Which brings up another point. Even without any manipulation to support a desired outcome (higher temps), either deliberate or inadvertent (selection bias) this is a prime example of what happens when mathematics is divorced from the underlying physical reality at hand. If you draw a line between two points separated by a distance, and say one end of this line has temp A, and the other temp B, what is the temp halfway between? Why, just do a simple interpolation! Mathematically that is valid, but not when the numbers are associated with underlying physical reality and not just isolated scalar metrics. If A is on the top of a high mountain, B is on the ocean, and the point in between is in the middle of a steamy jungle, obviously the situation is a lot more complex and doesn’t yield to simple analysis or interpolation.
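    The interpolation being criticized is trivially easy to write down, which is part of the problem. A toy sketch with invented numbers:

```python
def lerp(temp_a, temp_b, fraction):
    """Linear interpolation between two endpoint temperatures;
    fraction = 0 gives temp_a, fraction = 1 gives temp_b."""
    return temp_a + (temp_b - temp_a) * fraction

# Mountain-top station A and sea-level station B (invented values, deg C)
temp_mountain = -5.0
temp_ocean = 15.0

# The midpoint simply gets the average of the endpoints...
midpoint = lerp(temp_mountain, temp_ocean, 0.5)
print(midpoint)  # 5.0

# ...but if that midpoint is a steamy jungle at around 30 deg C, the
# interpolation is off by some 25 degrees. The math is internally
# valid; the physical assumption of linearity is what fails.
```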

    It is a bad practice just on the basis of rigor, let alone when it can be manipulated in ways to support a predetermined outcome. Too much possibility of gross error, but I get the impression too many scientists don’t understand the difference between pure mathematics and math constrained by real world processes and such.

  97. Given that climate science is so politicized, should the guys who politicized it (for example, Hansen and Jones) be in charge of collecting the basic data? Is there anybody less biased who could take over the job of data collection? There has to be an independent organization collecting the data. This org should be staffed by people who are not climate scientists. Maybe the solution is to take climate science out of NASA. It is a shame how Hansen has sullied NASA’s reputation.

    My numerical analysis professor told me not to extrapolate.

    Joe Bastardi is a clever fellow. He has opened another can of worms for the alarmists without being the guy who actually presses charges. Well done Joe and well done Willis.

  98. 24 March: UK Daily Mail: Will Stewart: Russia’s top weatherman’s blow to climate change lobby as he says winter in Siberia may be COLDEST on record
    ‘The winter of 2009-10 was one of the most severe in European part of Russia for more than 30 years, and in Siberia it was perhaps the record breaking coldest ever,’ said Dr Alexander Frolov, head of state meteorological service Rosgidromet. ..
    Climate change adherents say the planet is warming due to man-made factors but Russian expert Professor Arkady Tishkov said yesterday that Siberia and the world are in fact getting colder.
    ‘From a scientific point of view, talk about increasing average temperatures on earth of several degrees are absurd,’ he said.
    ‘Of course we can’t say that global warming is a myth and falsification. In many regions of planet the temperature is higher than expected because of human impact.
    ‘But the climate system of the planet is changing according to different cycles – from several years to thousand of years.
    ‘From the scientific point of view, in terms of large scale climate cycles, we are in a period of cooling.
    ‘The last three years of low temperatures in Siberia, the Arctic and number of Russia mountainous regions prove that, as does the recovery of ice in the Arctic Ocean and the absence of warming signs in Siberia.’
    Mr Tishkov, deputy head of the Geography Institute at Russian Academy of Science, said: ‘What we have been watching recently is comparatively fast changes of climate to warming, but within the framework of an overall long-term period of cooling. This is a proven scientific fact.
    ‘The recent warming – and we are talking tenths of a degree at most – is caused by human activity, like forest elimination, the changing of landscapes.
    ‘The greenhouse gases so much discussed now do not in fact play big role. We have to remember that all the impact of industrial enterprises in Russia cannot be compared with one volcano eruption on our planet.’….

    http://www.dailymail.co.uk/news/worldnews/article-1260132/Russian-weatherman-strikes-blow-climate-change-lobby-announcing-winter-Siberia-coldest-record.html

  99. Willis:

    You might want to make notice of this curiosity for future investigation.
    Notice how your graph here:

    is almost a dead match to the Arctic Oscillation loading pattern advanced eastward 90 degrees here:

    http://www.cpc.ncep.noaa.gov/products/precip/CWlink/daily_ao_index/loading.html

    That is a very curious coincidence, if it is a coincidence. If not, it seems to lead you to assume that one is made from the other even though I don’t see how that could be possible in nature. One is temperatures and the other is pressures if I am correct.

  100. read comments too:

    22 March: BBC Paul Hudson Blog: A breakthrough in long range forecasting?
    I am indebted to Dr Jarl Ahlbeck, from Abo Akademi University, Finland, who contacted me about his fascinating new piece of research relating to this winters severe cold across much of Europe, and a possible link to the very low solar activity we have been experiencing….
    In essence what this research shows is that there is a link between the level of solar activity, the stratosphere, and the weather patterns that we experience, and gives more weight to the idea that solar effects may influence our weather (and hence climate) more than is currently accepted or understood. ..
    You can read Dr Ahlbeck’s research paper in PDF format by clicking here [214KB PDF]

    http://www.bbc.co.uk/blogs/paulhudson/2010/03/-i-am-indebted-to.shtml

  101. I just did a quick check on the numbers for Eureka (where I used to work many years ago). The 1951–80 average defined as the GISS normal for the winter months is -38C;
    winter 2009–10 was -36C. So it was 2 degrees above normal, whoop-de-do! Love the color scheme – it really pushes the idea of a melt going on up there at that temperature!!!

  102. ‘The greenhouse gases so much discussed now do not in fact play big role.’

    As the Russian software engineer who sat next to me said one day, while I was having an intense losing battle with Unix scripts & EBCDIC readouts, “Is not end of world. End of world is big mushroom cloud”.

    GISS is not end of the world.

  103. @Willis Eschenbach (17:25:18) :

    We’ve been over this several times. GISS does not do this now for the Arctic Ocean, for two reasons. The satellites don’t go that far north. The satellites don’t measure ice temperatures.
    The satellites cover up to 82.5 °N.

    That’s the halfway point between the very inner circle and the 80 °N latitude circle.
    The satellites’ Microwave Sounding Units measure the intensity of upwelling microwave radiation from atmospheric oxygen. They do not measure “temperatures” directly – the data must be mathematically inverted to deduce the temperature of the lower troposphere. This math is rather detailed and difficult – UAH (University of Alabama in Huntsville) got it wrong for decades.

    I gave you links to detailed analysis of NOAA satellite data in the Arctic – I have no desire to become an expert on the intricacies of how summer melt ponds on top of sea ice affect satellite SST measurements by MSUs, for instance, but perhaps you are as interested as you seem. Good luck.

    Additionally, the GISS temperature trends are given from a base period of 1951-1980. If you have satellite temperature measurements of the arctic ocean for that period, please cite them …
    They have satellite measurements from 1979. But even those have that 82.5 °N limitation that bugs purists. I myself wish they had 90 °N inclination satellites measuring the entire planet, for the last 5 decades. But 20 decades ago they were still working out the basics of geology, for comparison…

    I see you are not happy with the various methods for guesstimating Arctic temperatures in areas that are not directly measured – I guess you’ll have to throw out that entire dataset you showed me from 1875 to 1979.

  104. pat (21:05:55) :
    ‘However, during low solar activity the easterly QBO causes a considerably stronger negative AO than the westerly QBO is able to cause a positive AO. Furthermore, easterly QBO is more common than westerly QBO during the Nordic Hemisphere winter.’

    That’s an interesting concept.
    A couple of strongly negative QBO’s could knock the Northern temps down like a pile driver.

  105. Cement a friend (18:51:18) :

    However, I thought that a correlation was only significant if greater than 0.9 (or less than -0.9)

    Not true. A correlation could be significant with a very small R value. You can pick any significance (confidence) level you would like to consider whether such a small R is relevant to you or not.

    For example, imagine several thousand pairs of data points where the measured data has a huge spread around an upward-sloping trend line, like measuring a very noisy signal over time. Suppose it has an R^2 value of 0.04. The R then is 0.2. It could still be quite apparent that this signal is increasing over time, and with increasing N, it will become more and more significant. At some point it will be determined that it is unlikely such a trend could develop in the data by chance alone, and once this threshold is crossed (your confidence level), you can be confident that it is significant (not by chance, at your chosen confidence).

    The correlation R or the coefficient of determination R^2 describes how well it is possible to predict the next data point based on the model (in this case with increasing time). You will not be able to predict with any accuracy where the next few data points will lie, because so much of the variance is unexplained. Your model only explains 4% of the variation. So a low R means an inability to predict the next outcome with confidence based on a given input value. But this does not mean your model is wrong, it just means you don’t have enough variables included in your model to explain the variation you see. If it turns out that your model is correct, and that it holds true beyond the measured range (dangerous territory to predict here), you will be able to predict the general direction of a collection of many outcomes based on the independent variable, but not of a single value. The noise will remain until you improve your understanding / measurements / model to reduce unexplained variation.
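    The point that a small R can be significant given enough data checks out numerically: the usual test statistic for a correlation is t = R·sqrt((n−2)/(1−R²)), and for large samples any |t| above roughly 1.96 is significant at the 95% level. A sketch using the R = 0.2 from the example above:

```python
import math

def t_statistic(r, n):
    """t-statistic for testing whether a Pearson correlation r,
    computed from n paired observations, differs from zero."""
    return r * math.sqrt((n - 2) / (1 - r ** 2))

r = 0.2  # i.e. R^2 = 0.04: the model explains only 4% of the variance
for n in (25, 100, 2000):
    t = t_statistic(r, n)
    # For large n, |t| > ~1.96 means significant at the 95% level
    print(n, round(t, 2), t > 1.96)
```

    With 25 points an R of 0.2 is not significant; with 2000 points the same R is significant many times over, exactly as described above.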

    Where so many climate studies fall apart is that very noisy data is taken over a very short time frame, and these linear trends are then projected forward to predict a future outcome (generally with the independent variable being the all-evil CO2 molecule). But the underlying data is in fact cyclical when a longer time frame is considered, superimposed on an even warming trend since the LIA. Meaning the proposed model only worked over the time frame when the slope of natural and reasonably predictable temperature cycles happened to coincide with sharply increased CO2 emissions.

    This is no mistake. The only way to build an alarming story is to purposely ignore previous cyclical events, or “off them” as the hockey stick attempted to do. Once we have chosen to ignore obvious natural cycles, this opens the door for a higher correlation with a high linear slope during the time since WWII. The really nice part is, if we only use this time, the better statistics also indicate higher predictive ability, meaning our model is much, much better (and we’re smarter too, since we’ve greatly reduced unexplained variation – a real win-win situation). Well, since CO2 alone cannot be responsible for that much warming over such a short period using even the most aggressive greenhouse gas calculations, the only way to explain this is to overinflate the effect of CO2 by surmising positive feedback effects that amplify the small warming effect that CO2 is presumed to have – effects which have never been demonstrated. Now you have a model you can project into the future that can really scare people. YAAAY!

    All of this, of course, develops while researchers also fail to study natural negative feedbacks due to clouds and thunderstorms which strongly counteract warming, by whatever cause. Ignoring the past in this case is much like predicting it will be 3600° outside by September because the temperature increased 1°F from 10:00 to 11:00 this morning.

    So we’ve developed insane models based on the coincidence of a few sine waves added together plus a new variable which has little practical effect on temperature, we focused on the highest slope area we could find to improve our predictive ability and reduce noise, amplified it, then projected it as far as the eye can see. Which is why it’s falling apart before our eyes. Such a model has no chance of success in practical terms since it fails to consider the cyclical and chaotic nature of climate. In political terms, it just needs to hang on by a thread for a little longer for the anthrophobes to succeed.

    I still think this is the best demo I’ve seen yet of how dangerous the last sharp warming trend was (that ended in 1995)…

    http://wattsupwiththat.com/2009/12/12/historical-video-perspective-our-current-unprecedented-global-warming-in-the-context-of-scale/#more-14034

  106. Ah, another carefully crafted comment moderated away.

    That means I can’t comment here again for 24 hours – too bad, Willis had a good topic and some interesting points.

    Cheers.

    REPLY: you can comment on topic, just leave tobacco out of the discussion – Anthony

  107. Anu (20:46:33) :

    [snip - we won't be turning this thread into a discussion of tobacco company issues and arguments - A]

    Thank you Anthony. :-)

  108. Anu:

    Just like the tobacco company executives who were eventually found to be fighting the medical science of cigarettes-causing-lung-cancer in public, using PR firms like The Heartland Institute and a few hired-gun doctors.

    Are you sure of this? I visited the Heartland site about four months ago and got the impressions that they were only involved in the fight over cancer and secondhand smoke, not cancer and smoking itself, and that they were not a PR organization but a thinktank.

  109. I find statistics rather fascinating and often confusing, even though I have to use them on a regular basis. Like many other implements, it is just a tool that only works if applied properly. I keep going back to the following site on Inferential Statistics when I get confused:

    http://faculty.vassar.edu/lowry/webtext.html

    It gives a very understandable explanation of the concepts. Pertinent to this discussion are chapter 3 & 4 on correlation, regression and significance.

  110. Steve Koch (20:52:38) :

    Maybe the solution is to take climate science out of NASA.

    That would require taking Washington out of NASA.

    In the 60’s there was a glorious push by Washington to put a man on the Moon. NASA was an arm of Washington to accomplish that. Now there is an inglorious push by Washington for Cap N Trade from ‘global warming’. Again NASA is an arm of Washington to accomplish that. NASA goes where Washington wants it to go.

    I hate it because we’re supposed to be seeing accomplishments as awesome as landing on the Moon coming from NASA. But if you have inglorious leaders you have the inglorious goals they set.

    All we have now is James Hansen and Gavin Schmidt. :-(

    We are like Charlie Brown on Halloween always getting a rock!

  111. pat (20:56:06) :

    24 March: UK Daily Mail: Will Stewart: Russia’s top weatherman’s blow to climate change lobby as he says winter in Siberia may be COLDEST on record

    Those poor prisoners in Siberia!! Yes, there are still prison camps there.

  112. St Paul’s dictum comes to mind:
    “As in a mirror, darkly”: anomalies say something about temperature, temperature is a proxy for heat, and heat content is what signifies cooling or heating.

    Anomalies are a third level proxy, and as I have said often enough, on the earth, due to the several heat transport mechanisms existing in the atmosphere and the oceans, anomalies may have as much connection to reality as a mirage in the desert to the oasis.

  113. John,
    Thanks for the link. In regards to it, and Bart’s comments below… there is one comment from VS that I think is worth highlighting because its applicability to climate science is absolutely profound (paraphrased):

    Many observed, solid microeconomic relationships do not translate to real world observations at the macro level.

    For climate science this could (assuming the data aren’t crap and VS and the other econometricians that have said the same thing are running the tests right) mean that the CO2 behavior in the lab is valid (which no one disputes anyway) but it is nothing but noise in the context of the climate.

    That is a profound thought. Personally, I would like to see what happens if VS checked trends in pavement and temperature for correlation ; )

  114. I did a simple exercise. I read out the DJF and JJA temperatures from GISS for 75 degN and 65 degS for the period from 1980 to 2010, and also read out the sea ice coverage from here: http://arctic.atmos.uiuc.edu/cryosphere/. I then calculated the correlation with the ice maximum and minimum in the Arctic and Antarctic. Here are the plots: http://www.wzforum.de/forum2/read.php?6,1852561.
    The result: the Arctic sea ice coverage has a good correlation with the GISS temperature anomalies (R=0.7). In the Antarctic, on the other hand, there is no correlation at all, either for the ice maximum in September or for the ice minimum in March against the GISS average temperatures of the three months before. That would mean the ice coverage has no link to temperatures?? That can’t be true! So I would say that the GISS temperatures are some kind of okay for the Arctic, but not at all for the Antarctic. Look at the temperature record for the Antarctic.

  115. Willis Eschenbach (17:03:37) :

    JAE (16:06:04)

    ?? I don’t get it. Fig. 1 shows the anomaly, not trends. Isn’t the problem simply that the 1200 km “weighting” is not representative?

    Jae, you are correct. However, at the moment, the anomaly and the trends are quite similar, and I couldn’t find a GISS map of the trends. Here’s the closest I could find, zonal trends by month. I didn’t know if most folks would be able to interpret it, so I showed the anomaly map.

    Willis, you can make trend maps from the same map-maker utility at this link:

    http://data.giss.nasa.gov/gistemp/maps/

    Where it says map type it is default to anomaly but click on it and it has the trend option. Here is an example that hopefully lasts long enough:

    http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2010&month_last=2&sat=4&sst=0&type=trends&mean_gen=1203&year1=1881&year2=2010&base1=1951&base2=1980&radius=1200&pol=reg

    Sometimes you have to adjust the time intervals to get the cranky Gistemp program to work :)

  116. It’s always Marcia, Marcia (18:33:55)

    This is unfair and everyone should know this is being done.

    If James Hansen feels he is so correct to do this with the data then he himself should be proud to tell the world about his method.

    Everyone should be made aware that ‘global warming’ is being manufactured! And manufactured by someone working at ‘NASA’ no less!

    The fact that GISTEMP extrapolates out to 1200 km has never been hidden. The fact that the correlation at 1200 km (or any other distance) means nothing about the trends has not been considered, but that also means it hasn’t been hidden either.

  117. James Hansen said he would campaign to unseat several members of Congress who have a poor climate change voting record, and in the same breath says:
    “The problem is not political will, it’s … the lobbyists. It’s the fact that money talks in Washington, and that democracy is not working the way it’s intended to work.”

    http://www.environmentalleader.com/2008/06/24/james-hansen-try-fossil-fuel-ceos-for-high-crimes-against-humanity/

    ??????
    Call me stupid, but I thought that democracy was the right to vote however you want to vote, not how Mr. Hansen wants you to vote.
    What is the matter with these people?
    It is the best example of the pig-farm I have seen in years….all animals are equal, it’s just that some animals are more equal than others.

  118. vigilantfish (18:00:39)

    Very informative, Willis – loved your gentle instruction on the meaning of correlation, and love your posts as always. Just one thing: I got hung up on the caption for figure 3 that describes the Arctic Ocean as being roughly three times the size of Alaska, so I checked out the actual areas: the Arctic Ocean is 14.056 million square km vs Alaska’s area of 1.718 million square km, so the Arctic Ocean is more than 7 times the size of Alaska. Of course this strengthens your argument regarding the serious problem with the extrapolation of a few data points.

    You are correct, I was moving too fast. What I meant was the area north of 80° North is three times the area of Alaska. I’ve fixed it in the post.

    Thanks,

    w.

  119. Ian L. McQueen (18:35:19)

    I’m just beginning to read the comments, so the following matter may already have been covered, but….. In the following sentence, I believe that the average should be 0.75, not .075.

    “Glad you asked … here are nineteen fifty-year long temperature datasets from Alaska. All of them have a correlation with Anchorage greater than 0.5 (max 0.94, min 0.51, avg .075)”

    IanM

    Correct, fixed, thanks.

  120. jorgekafkazar (19:55:13)

    I’m still not sure I understand what GISS actually did. Where did they establish their r’s? Surely they didn’t get their correlation coefficients using data anywhere but within the Arctic Circle? A distance of 1200 km is huge, in terms of climate variability, especially if you’re extrapolating. Worse, if GISS is extrapolating pole-wards and using linear fits, they’re really on thin ice. The pole is a geographic and climatic discontinuity. There’s little reason to expect linearity unless Hansen derived his fit using actual data at the pole. If not, then I’d say Dr. Hansen is just guessing and doesn’t know his r’s.

    Jorge, you should read the original Hansen paper. He details all the steps there. I agree that extrapolation over 1200 km doesn’t make sense. But worse than that is that extrapolating trends based on correlation doesn’t make mathematical sense.
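    That last point is easy to demonstrate with constructed data (not climate data): two stations sharing their year-to-year wiggles correlate highly even when one has a strong trend and the other has none, so a good correlation licenses no conclusion about trends:

```python
import math
import random

random.seed(0)
n = 50
wiggles = [random.gauss(0, 2) for _ in range(n)]   # shared year-to-year variation

a = wiggles[:]                                     # station A: no trend at all
b = [w + 0.05 * i for i, w in enumerate(wiggles)]  # station B: warms ~2.5 units

def pearson(x, y):
    m = len(x)
    mx, my = sum(x) / m, sum(y) / m
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sx = math.sqrt(sum((xi - mx) ** 2 for xi in x))
    sy = math.sqrt(sum((yi - my) ** 2 for yi in y))
    return cov / (sx * sy)

# The shared wiggles dominate, so the correlation comes out high
# (around 0.9 here) even though the two trends are entirely different.
print(round(pearson(a, b), 2))
```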

  121. I think we should be counting up how many different ways the climate science community is going to rediscover the EXTREMELY well-known concepts of:

    stationarity and cointegration.

  122. Willis Eschenbach (16:36:18) :
    This has nothing to do with how mining companies infill missing data. They generally use kriging, which is very, very different both conceptually and in practice
    Kriging is a distance-weighted average. GIStemp is a distance-weighted average. How exactly are they conceptually very different?

    Your Fairbanks example does not constitute a test of the method. First there is a cherry-picking problem – you have chosen the site for which you think the method will perform worst. Second, you have not demonstrated that the reconstruction has no skill, just that it is not perfect. The RE statistic is probably a good test of skill. Third, there is a large variance when considering a single case.
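    On the weighting question, the two schemes can at least be put side by side. Hansen and Lebedeff (1987) weight each station linearly down to zero at 1200 km, while the simplest geostatistical cousin, inverse-distance weighting, decays but never reaches zero; kriging proper goes further and derives its weights from a fitted variogram of the data, which neither toy function below does. A sketch with invented station values:

```python
def giss_weight(d_km, radius=1200.0):
    """Hansen & Lebedeff (1987): weight falls linearly to zero at the radius."""
    return max(0.0, 1.0 - d_km / radius)

def idw_weight(d_km, power=2.0):
    """Simple inverse-distance weighting: decays but never reaches zero."""
    return 1.0 / d_km ** power

def weighted_estimate(stations, weight_fn):
    """stations: list of (distance_km, anomaly) pairs for one target point."""
    total = sum(weight_fn(d) for d, _ in stations)
    return sum(weight_fn(d) * anom for d, anom in stations) / total

# Invented stations: (distance from the target point in km, anomaly in deg C)
stations = [(200, 1.0), (800, 0.2), (1100, -0.5)]

est_giss = weighted_estimate(stations, giss_weight)
est_idw = weighted_estimate(stations, idw_weight)
print(round(est_giss, 3), round(est_idw, 3))  # the two schemes disagree
```

    Both are distance-weighted averages, as the comment says; the conceptual difference lies in where the weights come from and how far they reach.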

  123. The first thing you learn when interpolating (or whatever) data is that the software you use has a blanking function. Otherwise people will soon ask you why you show values where there are no data.

  124. Willis Eschenbach (02:07:17)

    Andrew P. (18:37:01)

    Why can’t [they use] the data from the IABP bouys? http://psc.apl.washington.edu/northpole/ there’s always one not far from the pole every time I look…

    Very good question, I fear I have no answer. Seems like a natural to me. Anyone?

    I would think the fairly constant movement of the buoys would make it difficult to use them to derive temperature trends. The NASA North Pole webcam installations have generally moved from their placement points close to the Pole to around Lat 70 off the east coast of Greenland over their reporting lives, which have mostly been far less than a year. Still, there are generally several dozen buoys scattered about the Arctic at most times, and collectively they ought to provide better temp approximations than infilling from 1200 km away.

  125. John Whitman (22:40:28) :

    Mods / Anthony,

    In case someone hasn’t pointed out to you about the newspaper story on the VS comment stream over on Bart’s blog.

    http://www.examiner.com/x-9111-Environmental-Policy-Examiner~y2010m3d24-Global-warming-Bigger-than-Climategate-more-important-than-Copenhagenits-statistical-analysis

    Based on my many hours spent looking at the VS stream, the article is amazingly accurate.

    John

    Thanks, John. The post on Bart’s site is an agonizingly long read, but worth it, a very interesting, instructive, and important one. You’ll need a good understanding of statistics to follow it all, but the points that VS makes are critical to understanding the math of climate datasets. The short version is that everyone has been doing it wrong. The slightly longer version is that there is nothing abnormal or unusual about the recent warming. The long version is over 800 replies to the original article … if you want to jump to some of the meat, go here.

    w.

  126. How does this knuckle-head keep a job? You pull a stunt like this in the real world and the HR people tell you to go home and come back Saturday to get your stuff. As many times as Hansen has been caught fudging his “work” you’d think that he’d have been shown the door a long time ago.

    By the bye, where’s the outrage? Hansen thinks that I should be put on trial for crimes against humanity and that’s perfectly acceptable. I think that Hansen should be fired for attempting to perpetrate a proven fraud against humanity and that’s hate speech. Is it 1984?

  127. An anomaly of -6 Celsius in northern Russia and +6 Celsius in the Arctic next door. Hmmmm, something a little odd about that….

  128. anna v (22:29:49) :

    St Paul’s dictum comes to mind:
    “As in a mirror, darkly”: anomalies say something about temperature, temperature is a proxy for heat, and heat content is what signifies cooling or heating.

    Anomalies are a third level proxy, and as I have said often enough, on the earth, due to the several heat transport mechanisms existing in the atmosphere and the oceans, anomalies may have as much connection to reality as a mirage in the desert to the oasis.

    anna v, while I agree with you that heat content is a much better metric of global warming and cooling than temperature, anomalies are getting a bad rap on this thread.

    We use anomalies all of the time, without noticing it. Why? Because the base of the anomalies is zero, which seems “natural” to us. Consider two temperatures, say 20° Celsius (C), and 293.15 Kelvins (K). In fact, these two are exactly the same temperature.

    The only difference between the two is that they are anomalies from a different baseline. The Kelvin scale has a baseline of absolute zero, and the Celsius scale has a baseline of 273.15K (the freezing point of water).

    So both of these are anomalies. We could express the exact same temperature a third way, as an anomaly with a baseline of say 15°C, in which case the anomaly would be +5 degrees C (or five degrees warmer than 15°C).

    In other words, an anomaly is just like a regular temperature, it simply has a different baseline than the freezing point of water or absolute zero. This does not make it somehow second-rate or not useful.
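    The baseline arithmetic above is nothing more than addition and subtraction, as a minimal sketch makes plain:

```python
KELVIN_OFFSET = 273.15  # the Celsius scale's baseline, measured from absolute zero

def as_anomaly(temp_c, baseline_c):
    """Express a Celsius temperature as an anomaly from a chosen baseline."""
    return temp_c - baseline_c

temp_c = 20.0                    # 20 degrees Celsius ...
temp_k = temp_c + KELVIN_OFFSET  # ... is 293.15 K: same temperature, baseline 0 K

print(temp_k)                    # 293.15
print(as_anomaly(temp_c, 15.0))  # 5.0: same temperature again, baseline 15 deg C
```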

  129. Help!!
    I am not a warmist and I really want to understand this. Looking at Figure 4, in what meaningful sense is the steepest and most curvy purple Ps 5 deemed to have a better ‘correlation’ (0.97) to the almost flat-line red Ps 2 than the in-between orange Ps 3 (0.91)? Genuine assistance will be appreciated.
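    One property worth knowing here: the Pearson correlation is unchanged by shifting or rescaling either series, so a steeply rising curve can correlate perfectly with a nearly flat one as long as their ups and downs line up. Correlation measures similarity of shape, not of magnitude or slope. A constructed illustration (not the actual Ps series from Figure 4):

```python
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sx = math.sqrt(sum((xi - mx) ** 2 for xi in x))
    sy = math.sqrt(sum((yi - my) ** 2 for yi in y))
    return cov / (sx * sy)

flat = [0.0, 0.1, 0.05, 0.15, 0.1, 0.2]  # an almost flat line
steep = [v * 40 + 3 for v in flat]       # same shape, 40x steeper plus an offset

# Correlation ignores slope and offset entirely: this prints 1.0
print(round(pearson(flat, steep), 6))
```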

  130. On a lighter note (but with a more serious undertone) and by amazing coincidence, I’ve just posted a satirical article on “The Spoof” titled “New Study Suggests News Climate Is Warming” and containing the following:

    When asked what he was currently working on, Dr. Hansen outlined his past work on eliminating the Inconvenient Warm Period – “What did the Vikings know? They couldn’t even write, let alone submit an article to a mainstream journal. I’ve just proved that the last decade was the warmest since NOAA’s flood, and that the Arctic saw record temperatures, using interpolated gridded data based on minimal evidence. I’m currently working on LIAR – that’s Little Ice Age Redaction, and I’ll show you my work in progress” he said, knocking over a bottle of correcting fluid as he reached for a large chart.

    I’ve quoted Anthony’s name as “the renowned skeptical blogger” – I hope he doesn’t mind.

    Article is here: http://www.thespoof.com/news/spoof.cfm?headline=s5i71502

  131. They didn’t want to use my thermometer readings because I had it in the fridge; once I moved it to the oven, NASA were more than happy to use my readings.

  132. What do the Russians and Canadians say about Herr Hansen’s Crystal Ball extrapolations?

    NASA is no different than any other large, old organization. They all tend to get weaker and die of cancer. Their current Administrator wants to put them on a NEW and much less demanding track; one up to their current and projected capabilities.

    When the US no longer has a AAA Credit Rating, should we drop the N from NASA, NOAA, NSA, NCIS, NCAA, etc.?

    (I know; NCIS was a joke:-)

    PS: Half serious now, I wonder if most of the ice at the poles isn’t mixed with a lot of plastic. Perhaps that’s why it doesn’t seem to be any good any more.

  133. Re curiousgeorge @ 50: I was in the RAF at Marham during the 80s/90s as a techie on the Victor air-to-air refuellers. We had to have a jet or two on standby to refuel the Tornado aircraft on intercept flights. I was on standby in case of a scramble during Easter one year; Easter was a time when the Russians flirted regularly with UK air space. I was called out on Good Friday evening and we scrambled 2 jets. Our job was to service, repair and make ready for any other sorties. Anyway, we got very little sleep as we were continually sending refuellers to help the fast jets. Over the course of the weekend we ran out of serviceable jets and had to borrow from another squadron also at Marham. (We didn’t have many jets, perhaps 15 per squadron.) Each jet carried about 120k lbs of fuel.
    So it’s not just the fast jets, it’s the refuellers too

  134. Aren’t the anomalies based on 1951-1980 data? This would mean using satellite data would compare to a period that did not exist in the satellite record.

  135. Interesting that the cold anomalies link the developed countries, and the warmest anomalies link the places where either nobody lives or primitive peoples do.

  136. This should be reasonably easy to prove.

    It would be a publishable paper and could cause Hansen to abandon the 1200 km smoothing.

    Hansen and Lebedeff 1987 did look at the issue and showed the correlation falls off to around 0.5 at 1200 km for the Arctic (and around 0.4 to 0.3 for the southern hemisphere).

    But they were using data only up to 1985. With more years to look at and the idea that longer-term trends should also be looked at more closely, one could show the 1200 km smoothing algorithm is not “robust” (especially in the time period after 1985 when the records were “adjusted” more).

  137. It seems to me that the ChiefIO, Anthony, Willis, and Pielke Sr. could write one hell of a paper on the surface instrumental temp record.

  138. May I ask a question about the baseline used in the GISS anomaly graphs?

    Several commenters have mentioned that the selected period is unusually cool, rendering most comparisons “warming.”

    We stammering idiots working in business management usually undertake sensitivity analyses before relying on the outputs of models to make consequential decisions. One of the ways we would check the robustness of the warming conclusion is to randomly vary the baseline period (length and window), and see how often the same result is reached.

    My question: has anyone actually done this? If so, where are the results reported? If not, why not?

    Thanks for any assistance.
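A quick numerical sketch of the sensitivity check being asked about, on synthetic data of my own invention (purely illustrative, not any agency's series): slide a 30-year baseline window along a temperature record and watch what the fitted anomaly trend does. Since a baseline is only a constant offset, the slope should not move; only the zero line does.

```python
import numpy as np

# Synthetic 1880-2010 series: a small linear trend plus noise (illustrative only).
rng = np.random.default_rng(7)
years = np.arange(1880, 2011)
temps = 14.0 + 0.005 * (years - 1880) + rng.normal(0, 0.15, years.size)

# Try every 30-year baseline window starting in 1880, 1890, ..., 1980.
trends = []
for start in range(1880, 1981, 10):
    base = temps[(years >= start) & (years < start + 30)].mean()
    anom = temps - base                            # anomalies against this baseline
    trends.append(np.polyfit(years, anom, 1)[0])   # fitted slope in deg/yr

# The slope is the same for every baseline choice (to rounding error);
# only the anomalies' zero point shifts.
print(max(trends) - min(trends) < 1e-6)  # True
```

So varying the baseline tests the presentation (how warm the map looks), not the trend itself; a sensitivity analysis of the kind described would need to vary other choices too, such as the interpolation radius.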

  139. @ anu: I am curious to hear more about the point you may be making about tobacco companies compared to fossil fuel companies. Where should I go to see this articulated in some detail? [just leave a comment at my site]
    Thanks

  140. Dave Springer (16:46:32) :
    “I think Pielke is right on the money in this article:

    http://wattsupwiththat.com/2009/08/21/soot-and-the-arctic-ice-%E2%80%93-a-win-win-policy-based-on-chinese-coal-fired-power-plants%E2%80%9D/

    No source of black soot can make it to the south pole which handily explains why the antarctic interior isn’t warming.”

    An easier explanation. The runways for the airports in the antarctic are either ice or snow.

    The runways for the airports in Greenland and Alaska are generally paved and plowed.

  141. Willis Eschenbach (17:40:17) :

    “Missing. Not “average global anomaly”. Missing.”

    What does it look like, if you plot the Artic temps using a smaller grid (250 km, say), leaving the missing areas grey?

    (Sorry if I missed something here).

  142. I looked into how GISS creates data when there is no data
    Perhaps this surprises many people, but anybody who has worked in a state-owned institution or corporation knows this currently happens in state-owned companies; it is a common practice, as there are no personal responsibilities and wages/salaries are assigned and given without any relation to productivity or proficiency.
    This has happened all over the world, wherever and whenever a state-owned corporation has operated. All, without exception, go broke. It isn’t the people, it’s the system: where there is no personal, individual commitment, things don’t go anywhere.
    Need an example close to home? Just compare the amount spent on the research and sub-orbital flight of the X Prize-winning ship with the smallest research programme at NASA. I am not American and I do not write from the US, but I can bet you can find, as an example, thousand-dollar 1/2-inch screw bolts.
    So no wonder Mr. “Coal Trains” Hansen invents data… I am sure he does not feel any remorse about it. Just dig all over the world and you will find thousands of examples. And it has nothing to do with ideology, it’s just reality.

  143. (“Correlation” is a mathematical measure of the similarity of two datasets
    That’s the same as throwing two decks of cards on a table and looking for coincidences between them. That was called SYNCHRONICITY by the psychiatrist C. G. Jung

    http://en.wikipedia.org/wiki/Synchronicity

    and it was VERY POPULAR among NEW AGE fans in the 1960s.

  144. Willis,

    Thank you for continuing to explain to people that Anomalies are not the enemy:

    “In other words, an anomaly is just like a regular temperature, it just has a different baseline than the triple point of water or than absolute zero. This does not make it somehow second rate or not useful.”

  145. Willis Eschenbach (02:07:17) :
    Andrew P. (18:37:01)

    Why can’t [they use] the data from the IABP bouys? http://psc.apl.washington.edu/northpole/ there’s always one not far from the pole every time I look…

    Very good question, I fear I have no answer. Seems like a natural to me. Anyone?”

    Willis,

    I know gavin has commented on this at RC. Somebody would have to look for the precise comment; I can’t recall the thread. It should be in one of his replies to somebody’s comment.

  146. Re: Willis Eschenbach (Mar 26 03:22),

    We use anomalies all of the time, without noticing it. Why? Because the base of the anomalies is zero, which seems “natural” to us. Consider two temperatures, say 20° Celsius (C), and 293.15 Kelvins (K). In fact, these two are exactly the same temperature.

    The only difference between the two is that they are anomalies from a different baseline. The Kelvin scale has a baseline of absolute zero, and the Celsius scale has a baseline of 273.15K (the freezing point of water).

    Wrong example.

    All temperature scales differ by specific mathematical formulas, because they all depend on hard definitions: the boiling point of water at 1 atmosphere, and the ice/water temperature at the triple point. It does not matter what sort of scale you use; it is a physical measure, and each measurement can be derived from the others.

    Anomalies as used by by climatologists work this way:

    Each thermometer in different locations gives a temperature, a time interval is defined and for each thermometer the average of that is taken as a basis and the time variation of the temperature minus this average is called an anomaly.

    The hypothesis is that given one thermometer on top of a mountain and the other in the valley and the next by the sea, the temperatures will be different, but the variation from the average will be correlated and the same.

    This presupposes that there is only one heat source that creates these temperatures. When there are more than one heat source/sink as is the case with climate where there is convection, evaporation, precipitation etc, heat/energy can be going every which way and the thermometers will be following a different drummer.

    I have given the following gedanken experiment. Tell me where it is wrong:

    Take an insulated room and put a heat source in one corner with time varying output, and several thermometers all over the place. Take the first ten measurements for each thermometer and call it the average. From then on compute anomalies. Yes, they will be correlated reflecting the variations of the source.

    Then put a randomly moving heat source in the room ( on one of those robots that change direction when they hit a wall) and do the same. i.e take the first ten measurements of each thermometer as the average and use that average from then on to compute an anomaly.

    My claim is that the anomaly of these thermometers will not be correlated with either source in a detectable form. They will be plus or minus depending on the vicinity of the moving heat source.

    This is the situation in the atmosphere and the oceans, where huge masses of water and air are moved around carrying energy and modifying with various mechanisms the heat content of the atmosphere where the anomalies are measured at 2 meters.

    Tell me where the gedanken experiment is wrong.

    In other words, an anomaly is just like a regular temperature, it just has a different baseline than the triple point of water or than absolute zero. This does not make it somehow second rate or not useful.

    No, it is not. It is taken over a varying baseline and when there are more than one heat sources/sinks, it makes no physical sense.
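anna v’s thought experiment above can be run numerically. The toy model below is entirely my own construction (temperature falling off with distance from the source as P/(1+d) in an idealised 1-D room), not anyone’s published code, so treat it as a sketch of how one might test the claim rather than a verdict on it.

```python
import numpy as np

rng = np.random.default_rng(1)
positions = np.array([1.0, 4.0, 9.0])   # three thermometers along a 10 m room

def readings(src_pos, power):
    # Toy physics: temperature at each thermometer = power / (1 + distance).
    d = np.abs(positions[None, :] - src_pos[:, None])
    return power[:, None] / (1.0 + d)

def pair_corr(t):
    # Anomalies against the average of the first ten readings, as in the
    # thought experiment; then the correlation between thermometers 1 and 2.
    anom = t - t[:10].mean(axis=0)
    return np.corrcoef(anom.T)[0, 1]

# Case 1: fixed source in a corner with time-varying output.
fixed = readings(np.zeros(200), 5.0 + rng.normal(0, 1, 200))
# Case 2: randomly wandering source with constant output.
moving = readings(np.cumsum(rng.normal(0, 0.5, 200)) % 10.0, np.full(200, 5.0))

print(round(pair_corr(fixed), 2))   # 1.0: every thermometer tracks the one source
print(round(pair_corr(moving), 2))  # generally much weaker, can even be negative
```

With one fixed source the anomalies are perfectly correlated, exactly as the thought experiment predicts; with a wandering source the correlation depends on how the thermometers sit relative to the source’s path.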

  147. Willis, I wonder if arctic UHI effects have been included in these calculations. Per the Barrow, Alaska study, an admittedly rural town (4600) showed up with an average of 3C effect within a 150 KM circle. http://www.geography.uc.edu/~kenhinke/uhi/

    Seems the effect was mostly in the winter and corresponding to power/gas inputs to the town. How many of the other sites could have the same problem?

  148. The situation vis-a-vis the Arctic temp history is far worse than some have depicted, what with extrapolations to 1500 km.
    The extrapolations, as bad as they are, are based on raw station data that commonly sports great gaps of multiple years/months.
    It’s an Empire State Theory built on a mudflat with sinkholes.

    Let me make it perfectly clear: The sub-Arctic land station data is bad enough as it is.

  149. Well a lot of babies are born in the spring time; which in the case of humans is nine months after the winter +/- a few months. It’s actually the same for other species; but who cares.
    Since winters tend to be colder than Spring, does this prove that cold causes pregnancy ?

    I would commend to the reader’s attention; a classic “Farside” Cartoon of Gary Larson. Which I can’t show you here because it is copyrighted; and there is NO legal way for anyone to show, use, or display ANY Farside cartoon; but you can buy any of them for your own personal use. So ANY public display of any Farside cartoon is prima facie evidence of copyright infringement.

    So the panel in question is officially titled “What’s in my Front Yard?” The lady on the phone is asking her neighbor across the street to describe what is in her front yard. She can’t determine that herself, because her entire living-room front window is completely blocked by a ginormous eye.

    To me that cartoon panel is a perfect demonstration of violation of the Nyquist Sampling Theorem. It’s the same principle as the blind man feeling around an elephant, and trying to determine what it is; having never seen or even heard of such a creature.

    Comparing one thermometer with itself at some different time, isn’t going to yield an accurate value for the mean global temperature of the earth.

    When was the last time you actually saw, on ANY official weather station report, the total surface area to which that specific location temperature or anomaly actually applies?

    You can’t get a global average from a set of data points; none of which have an attached area element which applies to that temperature (or anomaly).

    And even if you could determine the correct global mean surface temperature (you can’t), it has absolutely no scientific meaning at all, since thermal processes all over the world are not uniquely characterized by the local temperature.

    Arid tropical deserts react differently from ocean, or from arboreal forests.

    So mean global surface (or lower troposphere) temperature has about as much scientific meaning as the mean of all the telephone numbers in the Manhattan phone directory; namely, none at all. Unless of course it happens to be your phone number, in which case you will get a lot of calls.

    Well it’s like tornado frequency; hardly relevant, unless it happens to be your house that gets hit.

  150. Quiz: If we have three different summers:

    O1: avg. max. temp. = 27°C
    O2: avg. max. temp. = 30°C
    O3: avg. max. temp. = 38°C

    Which one was the hottest?
    It was the first one, as its avg. minimum was 22°C.
    So temperatures mean nothing.

  151. Still no comment from E.M. Smith?

    I checked The Chiefio’s site, he has a post dated yesterday showing his excellent work doing a detailed regional analysis of the temperature record and showing the interesting revealed trends. Brilliant stuff that should well be highlighted here on WUWT (IMHO). He’s currently waist-deep in it and appears very busy finishing it up.

    Could someone please stop ’round his place and make sure he’s alright? With all the great work he’s currently doing, I’m worried he might have collapsed from exhaustion!

  152. anna v (10:02:12)

    … I have given the following gedanken experiment. Tell me where it is wrong:

    Aaaah, you have found my secret weakness, I love thought experiments.

    Take an insulated room and put a heat source in one corner with time varying output, and several thermometers all over the place. Take the first ten measurements for each thermometer and call it the average. From then on compute anomalies. Yes, they will be correlated reflecting the variations of the source.

    Then put a randomly moving heat source in the room ( on one of those robots that change direction when they hit a wall) and do the same. i.e take the first ten measurements of each thermometer as the average and use that average from then on to compute an anomaly.

    My claim is that the the anomaly of these thermometers will not be correlated with either source in a detectable form. They will be plus or minus depending on the vicinity of the moving heat source.

    This is the situation in the atmosphere and the oceans, where huge masses of water and air are moved around carrying energy and modifying with various mechanisms the heat content of the atmosphere where the anomalies are measured at 2 meters.

    We’re not trying to determine the correlation of the temperature in some town in Colorado with a moving bunch of warm air caused by an El Nino. We can’t do that without additional information, specifically, the distance of the heat source from the thermometer.

    Instead, we’re (generally) interested in the temperature in that town, and how it has varied over time (trend, gaussian average, difference with other local town temperatures, correlation with a local proxy, etc.).

    Even in your example, there is no difference between using each thermometer’s average as a baseline and using the freezing point of water (or absolute zero) as your baseline. You do not lose or gain any information by changing the baseline. A temperature in K is just as informative as a temperature in C. There is no difference in the information content of

    f(x)

    and

    f(x) -3

    Suppose, for example, two of your thermometers in your thought experiment measured in K, and two of them measured in C … despite the fact that they have different baselines, would you have less information than if they all measured in C?
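Willis’s K-versus-C point can be shown in a couple of lines. This is a generic illustration, not GISS code: subtract any constant baseline you like and the anomalies (and hence trends and correlations) are unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
temps_c = 15 + rng.normal(0, 3, 50)   # fifty readings in degrees Celsius
temps_k = temps_c + 273.15            # the same readings expressed in kelvins

# Anomalies relative to each series' own mean: the 273.15 offset cancels out.
anom_c = temps_c - temps_c.mean()
anom_k = temps_k - temps_k.mean()

print(np.allclose(anom_c, anom_k))    # True: no information lost or gained
```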

  153. George E. Smith (10:51:53) :

    Well a lot of babies are born in the spring time; which in the case of humans is nine months after the winter +/- a few months. It’s actually the same for other species; but who cares.

    Where I live, the fall (autumn) comes nine months after the winter, and spring comes either three or fifteen months after the winter … where do you live?

  154. Willis Eschenbach, excellent work.

    If not an outright debunking of Hansen’s assumptions, it comes very close.

    It would seem that you can’t take any pro-AGW scientist’s work at face value (verify all work product) because, more likely than not, it will be questionable in methodology, assumptions, or inferences.

  155. DD More (10:20:11)

    Willis, I wonder if arctic UHI effects have been included in these calculations. Per the Barrow, Alaska study, an admittedly rural town (4600) showed up with an average of 3C effect within a 150 KM circle. http://www.geography.uc.edu/~kenhinke/uhi/

    Seems the effect was mostly in the winter and corresponding to power/gas inputs to the town. How many of the other sites could have the same problem?

    UHI effects are definitely present in Arctic temperature records. In particular, the colder the temperatures, the greater the effect from the burning of fuel to heat houses and other structures. If the average temperature is 70°F, keeping your house at 70°F doesn’t affect the record. But if the average temperature is -10°F, the heat from houses, buildings, and things like car exhaust can be a significant effect.

  156. Jimbo (18:15:21) :
    “Quick sip of Champagne and back to work!”

    Cannot say I agree, Jimbo. I would love to, but here in Norway it works a bit differently.

    They have already implemented CO2 taxation. And in Norway, if a tax is implemented, it will never go away again.

    People just live on. The government grows, and grows. New socio-xxx “students” are pouring out of the Universities every year. Post normal types. And they need work. In government.
    And the more of them you get, the more votes for a larger government. An evil spiral, downwards.

    And we have to pay tax to feed these types.

  157. Willis Eschenbach (02:11:19) :

    The correlation at 1200 km averaged 0.5 for northern latitudes.
    The spread was something like 0.2 to 0.8.

    With the GISS code up and running, EMS could actually do a sensitivity analysis on this parameter. Or, God forbid, you could actually make it a function of the demonstrated correlation for the region. Clearly for some regions you have good correlations at long distances, and for others you have crappy figures.

    And clearly if you are blindly mushing together an inland site with one on the coast (which is more correlated with the changes in SST) you will end up with mush. And if you blend over a region that changes from ocean to ice depending on the season, you are also asking for trouble.

    Hansen’s method is just a meat grinder approach.

  158. “”” Willis Eschenbach (12:52:29) :

    George E. Smith (10:51:53) :

    Well a lot of babies are born in the spring time; which in the case of humans is nine months after the winter +/- a few months. It’s actually the same for other species; but who cares.

    Where I live, the fall (autumn) comes nine months after the winter, and spring comes either three or fifteen months after the winter … where do you live? “””

    Well Willis I live in California; where it was Spring last week and this week and for a while to come it will be Summer. You might have noticed I said +/- a few months; I was applying the obligatory climate science fudge factor. By that standard, my estimations are quite good.

    George

  159. Phil. (20:36:29) :

    Keith G (18:32:14) :

    “In that case where do you suppose the data for the DMI Arctic temperature linked to on this site come from? Willis shows 3 stations on the edge of the 80ºN parallel”

    Re DMI: Three stations are likely to give fair results on such an icy plain with no topographic or cultural confounders, but I would think this would still give a maximum temperature on average, given that there may not have been a correction for latitude. Certainly DMI is more reliable than 1200 km extrapolations from land with icy and watery coasts. Also, the ring of stations, which are all near the same latitude (70-80°N), might be expected to have significant correlations laterally but not in the N-S direction. For example, 1200 km south of Longyearbyen, Svalbard lie Reykjavik, Iceland and even the Baltic Sea, both significantly different in temperature and in different weather regimes: on opposite sides of the jet stream for many months, and subject to the effects of the Gulf Stream and the return Arctic currents. Anyone know what adjustments are made to the data in terms of latitude?

  160. “”” rbateman (10:21:39) :

    The situation vis-a-vis the Arctic temp history is far worse than some have depicted, what with exrapolations to 1500 km.
    The extrapolations, as bad as they are, are based on raw station data that commonly sports great gaps of multiple years/months.
    It’s an Empire State Theory built on a mudflat with sinkholes.

    Let me make it perfectly clear: The sub-Arctic land station data is bad enough as it is. “””

    My understanding is that about 150 years ago the number of “weather stations” in the Arctic above 60°N was about 12. That number increased to around 86 in the early-mid 20th century, and then declined to roughly 72 today, possibly due to the collapse of the Soviet Union.

    Good luck on getting a believable temperature map from that.

  161. I previously asked a question regarding Figure 4. In what meaningful sense is the steepest and most curvy purple Ps 5 deemed to have a better ‘correlation’ (0.97) to the almost flat-line red Ps 2, than does the in-between orange Ps 3 (0.91)?

    What the numbers are saying is that the steep purple Ps 5 better represents the flat-line red Ps 2 than does the less steep orange Ps 3. In what sense is this so, if the statistical meaning for ‘correlation’ is to have any correlation to the usual definition?

    Specifically, what I am asking, is what there is about Ps 5 that gives it a better technical ‘correlation’ to Ps 2.

    Please realise that I am not rebutting; I am asking. I seriously doubt that I am the only one who wouldn’t understand this and without that understanding, Figure 4 is unhelpful.

  162. Very interesting. Excellent detective work. You clearly must be thrown in jail.

    This kind of ‘correlation’ has been used, on another level, to link virtually any and all effect or disaster to The Warming in the ongoing propaganda campaign so this is somehow not surprising…

    Maybe there should be a new term for the ‘data’ that the IPCC gang uses to distinguish it from that which is credible?

    Mann-made? Hansenized?

    Oh well. No worries. The Caitlin Expedition will soon return from their historic quest for the truth and then we’ll all know what’s up with that.

  163. Anu (21:36:11)

    … I gave you links to detailed analysis of NOAA satellite data in the Arctic – I have no desire to become an expert on the intricacies of how summer melt ponds on top of sea ice affects satellite SST measurements by MSU’s, for instance, but perhaps you are as interested as you seem. Good luck.

    Thanks for the good luck wishes, Anu. I got some time, so I took a look at the paper on NOAA satellite data you cite above. Here’s the problem: they are looking at sea surface temperature, and GISS is looking at surface air temperature. Usually they are quite similar … but not always. Here’s how NOAA adjusts SST for ice coverage:

    Sea ice impact on the SST analysis

    In SR05 sea ice concentration of 0.6 (60%) and above, are used to linearly damp the analyzed SSTs toward the freezing temperature of seawater, -1.8°C, for concentrations between 0.6 and 0.9. Concentrations below 0.6 have no effect on the SST, and above 0.9 the SST is set to the freezing temperature.

    So in locations where there is more than 90% ice, the SST is quite reasonably set to the freezing temperature of sea water, -1.8°C. That’s fine.

    But above the ice, the freezing polar winds can be twenty, thirty, forty degrees below zero … and the SST doesn’t reflect that reality, which is the one we are trying to get at. So the satellites don’t help us in the slightest where there is ice.

    w.
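The SR05 rule quoted above is simple enough to write down. The sketch below is my reading of the quoted description, not NOAA’s actual code; the function name and structure are mine.

```python
def damped_sst(sst, ice_frac, t_freeze=-1.8):
    """Blend an analysed SST toward the freezing point of seawater as sea-ice
    concentration rises: untouched below 60% ice, pinned to -1.8 C above 90%,
    and linearly interpolated in between (per the SR05 description quoted above)."""
    if ice_frac < 0.6:
        return sst
    if ice_frac >= 0.9:
        return t_freeze
    w = (ice_frac - 0.6) / (0.9 - 0.6)   # 0 at 60% ice, 1 at 90% ice
    return (1 - w) * sst + w * t_freeze

print(damped_sst(2.0, 0.50))            # 2.0  -- no damping below 60% ice
print(round(damped_sst(2.0, 0.75), 6))  # 0.1  -- halfway between 2.0 and -1.8
print(damped_sst(2.0, 0.95))            # -1.8 -- pinned to freezing
```

Note that by construction the damped SST can never fall below -1.8 °C, which is Willis’s point: the analysis cannot represent the much colder air above the ice.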

  164. Sean McHugh (15:13:38)

    I previously asked a question regarding Figure 4. In what meaningful sense is the steepest and most curvy purple Ps 5 deemed to have a better ‘correlation’ (0.97) to the almost flat-line red Ps 2, than does the in-between orange Ps 3 (0.91)?

    What the numbers are saying is that the steep purple Ps 5 better represents the flat-line red Ps 2 than does the less steep orange Ps 3. In what sense is this so, if the statistical meaning for ‘correlation’ is to have any correlation to the usual definition?

    Specifically, what I am asking, is what there is about Ps 5 that gives it a better technical ‘correlation’ to Ps 2.

    Please realise that I am not rebutting; I am asking. I seriously doubt that I am the only one who wouldn’t understand this and without that understanding, Figure 4 is unhelpful.

    Sorry for missing your question above, Sean.

    Mathematical correlation is defined as:

    r = Σ(x − x̄)(y − ȳ) / √[ Σ(x − x̄)² · Σ(y − ȳ)² ]

    where x and y are the individual datapoints, and x̄ and ȳ (x and y with the overbar) are the average of all x and the average of all y.

    What this does is measure how much the variations of x from the average of x are like the corresponding y variations.

    Note that this does not include any information about the trend. For example, two straight lines like Ps 1 and Ps 2 have a correlation of 1.0, despite their wide differences in trend.

    So in response to your question about how a dataset with a high trend can have a higher correlation to dataset X than a dataset with a low trend, the answer is that trend is not a part of the mathematical calculation of correlation.

    Now, this makes sense how?

    Well, suppose every time x goes up by one, y goes up by three, and every time x goes down by one, y goes down by three. Because of this, the correlation will be 1.0, because every twitch in x has an exactly corresponding (but larger) twitch in y … but the trends will be very different.

    And this is exactly the problem that I am highlighting, which is that correlation means nothing about trends, so we can’t use it to extrapolate temperature trends out 1200 km, or any number of kilometres for that matter.

    Hope this helps, but if not, ask more questions. The only foolish questions are the ones you don’t ask.
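Willis’s “every twitch in x has a larger twitch in y” example is easy to verify numerically. This is a generic illustration on made-up series, not anyone’s station data:

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.cumsum(rng.normal(0.1, 1.0, 100))  # a wiggly series with a mild drift
y = 3 * x                                  # same wiggles, triple the trend

r = np.corrcoef(x, y)[0, 1]
print(round(r, 10))  # 1.0 -- perfect correlation, yet the trends differ by 3x
```

Since correlation is computed from deviations about each series’ own mean, any positive rescaling of y leaves r at exactly 1.0; this is why a high correlation between stations says nothing about whether their trends match.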

  165. Hope this helps, but if not, ask more questions.

    It helped tremendously. Thank you for putting so much effort into assisting. I did check out the maths before asking, but even though I could understand the equation, it didn’t give me a conceptual handle. This is the statement that really did the trick:

    Well, suppose every time x goes up by one, y goes up by three, and every time x goes down by one, y goes down by three. Because of this, the correlation will be 1.0, because every twitch in x has an exactly corresponding (but larger) twitch in y … but the trends will be very different.

    That completely explains to me the twist in Figure 4. I will now read the whole thing again. Thanks again.

  166. Willis,

    We are constantly reminded how good an insulator ice is. So it is no wonder that satellite measurements of atmospheric data above the ice give little information on SSTs, as you point out.

    Certainly for areas of the arctic where sea ice coverage is more or less total, that would seem to be a given.

  167. Willis Eschenbach (16:16:58) :

    If I have read you right, extrapolating from Station X1 on land out 600 km to the Polar region will impose a trend, but it won’t give you the actual temperature there. If there is no Station X2 on the Ice Cap, one really doesn’t know that the trends actually correlate, and neither does one have an anomaly.
    What you do have is a WAG.

  168. Why not infill using the satellite data?

    We’d get a warm trend in the Arctic greater than any other large-scale region on the planet.

    North Pole trend – 0.45/dec.

    UAH data

    Their figure roughly corroborates Hansen’s – even though satellite coverage is also poor for the region. I request a post on why we shouldn’t trust Spencer and Christy.

  169. Anu (22:01:02) :

    Ah, another carefully crafted comment moderated away.

    ……………………………………………………………………………………………………..

    It was only your line of comment on cigarette smoke that was snipped. Anyone could see that. Your comments related to this thread are still there. Even though you do not agree with the writer of this thread, you were not deleted for it.

    This would not be the case at RealClimate. Comments that are not in agreement with the writers there are customarily deleted even though they are on topic.

  170. Anu (22:01:02) :

    Ah, another carefully crafted comment moderated away.

    ………………………………………………………………………………………………………

    Anu,

    I am sure someone in your position does have to carefully craft what they say. That’s all I’ll say about that.

  171. Re: Willis Eschenbach (Mar 26 12:50),

    Of course, for a locality the average, and the changes about the average, have meaning for everything from cultivation to what to wear to tourism.

    In my example, thermometer 1 will give an average T1 because the heat source passed close to it more often than for thermometer 2, with average T2. Then the anomaly of thermometer 1, say, falls because the moving heat source is farther away than its average distance when the baseline was taken, while the anomaly of thermometer 2 goes up because the moving heat source came closer than when the average was taken.

    There is no analogy between changing scales in thermometers and using local average temperatures as a base on which to measure temperature differences. The average of each thermometer will be different depending on the energy sources affecting it. When one starts looking at the differences over these variable averages, in different locations with different energy source variations, one is diluting the proxy process to an undecipherable level.

    It is the scientific meaning I am after, whether anomalies are proxies for global heating and cooling. You are talking in the post of the way the anomaly averaging is done, I am talking that already from the logical basis, anomalies are very bad proxies of heating and cooling of the globe, by construction.

    If there were only one energy source, e.g. radiation from the sun as the static energy-budget models use, anomalies would reflect temperature variations, which would reflect ground temperature variations. Unfortunately, there are oceans and an atmosphere that generate large movements of energy over the globe, which distort and may reverse what the anomalies measure with respect to the ground temperatures.

    The recent winter is a good example: the motion of the air currents brought cold to the south of the Arctic and brought warmth to the Arctic, but the storm systems distributed and negated (via precipitation) the cold, while the north kept a large part of the warmth, so the averaged anomalies come out unreal as far as heat content and human perception go. We feel heat and cold, not anomalies.

  172. Re: Willis Eschenbach (Mar 26 15:53),

    So in locations where there is more than 90% ice, the SST is quite reasonably set to the freezing temperature of sea water, -1.8°C. That’s fine.

    But above the ice, the freezing polar winds can be twenty, thirty, forty degrees below zero … and the SST doesn’t reflect that reality, which is the one we are trying to get at. So the satellites don’t help us in the slightest where there is ice.

    This harks back to another bone of contention of mine: that it is surface ground temperatures that are important for a radiation budget using black-body formulae.
    SST is the reality as far as energy balance goes. Air surface temperatures at 2 meters, used for the radiation budget in this case, would be tens of watts off the true radiation budget of the solid surface.

  173. barry (18:39:00)

    Why not infill using the satellite data?

    We’d get a warm trend in the Arctic greater than any other large-scale region on the planet.

    North Pole trend – 0.45/dec.

    UAH data

    Their figure roughly corroborates Hansen’s – even though satellite coverage is also poor for the region. I request a post on why we shouldn’t trust Spencer and Christy.

    Ummm … no. Here’s a look at the difference in GISS (ground stations) and UAH (satellite) trends, from here:

    Don’t like the UAH satellite analysis? You prefer RSS? Here’s that:

    As you can see, the biggest difference is in the far north, where GISS shows much greater warming than either UAH or RSS … in other words, the region that we are discussing.

    And you “request a post on why we shouldn’t trust Spencer and Christy”? Here’s how it works: I post on what I find interesting. I don’t have enough time to post on or research everything I’d like to, so I depend on AGW supporters to cover what they want to see covered. Go for it, and let us know what you find.

  174. Willis, I’m not disputing there’s a difference. I said that UAH show a large warming trend over the Arctic – larger than any other region on Earth, just as GISS do. And I said that in response to the question upthread, which I quoted, not to your article.

    I am also aware that:

    1) The UAH record and record-keepers are held in some esteem here, while surface records and their producers are routinely maligned. The UAH record is seen as a reference point with which to check the validity of surface records.

    2) Temp-recording satellites are not able to cover the poles adequately.

    The GISS record is here rebutted because of insufficient coverage and assumptions re correlation and trends. With these things in mind, and while I was answering the question upthread, I added my request to make a small point.

    As many people contribute articles here, my request was not addressed to you specifically. The point behind my request is that skepticism is not equally applied.

    While there has been talk of it in comments sections, I may have missed the WUWT article in the past where Spencer and Christy have been taken to task for producing polar temp records – anomalies and trends – when there is so little satellite coverage for the poles. If I am mistaken, I would appreciate a pointer (from anyone).

  175. anna v (22:48:11) : edit

    Re: Willis Eschenbach (Mar 26 15:53),

    So in locations where there is more than 90% ice, the SST is quite reasonably set to the freezing temperature of sea water, -1.8°C. That’s fine.

    But above the ice, the freezing polar winds can be twenty, thirty, forty degrees below zero … and the SST doesn’t reflect that reality, which is the one we are trying to get at. So the satellites don’t help us in the slightest where there is ice.

    This harks back to another bone of contention of mine: that it is surface ground temperatures that are important for a radiation budget using black-body formulae.
    SST is the reality as far as the energy balance goes. Using air temperatures at 2 metres for the radiation budget in this case would put the true radiation budget of the solid surface off by tens of watts.

    Anna, in general I agree. The problem is, we don’t have data on the ground temperatures and we do on the surface air temperatures.

    As I mentioned before, in practice I don’t think this is too much of a problem. The ground runs hotter than the surface during the day, and colder during the night, so the averages won’t be too far apart. Or at least that’s what my experience says … hmmm … data … data …

    Data on this is very hard to find, so I turn to my bible, Geiger’s exhaustive tome, “Climate Near The Ground”. It shows a graph of tautochrones of temperature both above and below the ground in Nebraska on the 24th of August 1953 (back in the days when climate scientists actually measured things) for a 36 hour period. This is good, because summer would be when we would expect the difference in temperature between the ground and air to be the greatest, due to fast ground heating in the hot summer sun.

    Digitizing it … scan it, straighten it, overlay some lines, get the temperatures for each hour … … OK, thanks for waiting, I get an average ground temperature over 24 hours (using hourly temperatures) of 26.8°, and an average air temperature at 1.5 metres above the ground of 28.9°. So the ground in the summer is a couple degrees warmer than the air at the elevation above ground at which it is measured. In the winter it will be less.

    Of course, over the ocean the difference will be smaller than on land (in many temperature averages the air temperature is taken as being equal to the sea temperature). And most of the planet is ocean.

    So like I say, in practice this is not a large difference, so we will not be far wrong using the air temperature as a proxy for the ground temperature.

    I can’t recommend Geiger’s book enough, get a copy if this subject fascinates you the way it does me. Not cheap, I got my copy at a college used book store, but worth every penny.

    I am just a simple professional civil engineer, so please bear with me. If I were to go to a relatively “flat” area, such as Kansas or Florida, I could, by these “climate scientists’” methodology, take measurements of elevation at points miles away from each other, then design a highway between those points, set my grade lines according to those far-away points of elevation (benchmarks), do my earthwork calcs, and be quite accurate. What balderdash! I have done earthwork calculations by the average horizontal area of cross-sections method as well as by the average end area of cross-sections method, and the average contour area method yields far more accurate results. These “interpolated” values are nothing other than worthless. And worthless, however much better it is than other available methods, is still of no real value. A guess is still a guess, no matter the “scientific terminology” in which it is expressed.

    I wouldn’t be setting horizontal grade lines on such scant data, and expect an earthwork quantity balance (equal amounts of cut and fill) and neither should we be accepting these “interpolated” values as having any value, no matter that they are “the best” we can manage. Worthless data gets no better no matter how well it is massaged.

    The point I am trying to make is that the temperature measuring stations were not placed with determining an “average temperature” of any specific land mass area in mind, but for other purposes. As it is, it would be the same as if I, as a highway designer, used random points of elevation to determine how my horizontal grade lines should be set to obtain a balance of earthwork quantities. Not at all a viable concept.

    This attempted use of existing temperature measuring stations (particularly since so many are poorly sited, as Anthony has demonstrated) is as “balls-up” a venture as I have ever seen. No good can come of it, and the research money could be spent at far greater value to us, the taxpayers.

    What I do not understand: why has no one dropped a couple of weather stations onto the ice cap? Sure, they might not live more than a few months, but the cost would be trivial compared to the value of the data.

  178. Re: Willis Eschenbach (Mar 27 00:11),

    … OK, thanks for waiting, I get an average ground temperature over 24 hours (using hourly temperatures) of 26.8°, and an average air temperature at 1.5 metres above the ground of 28.9°. So the ground in the summer is a couple degrees warmer than the air at the elevation above ground at which it is measured. In the winter it will be less.

    Well, your numbers must be reversed.

    If I take the black body formula for 26.8°C I get 459 watts/m^2 radiated away;
    for 28.9°C, 472 watts/m^2.

    This is an error of about 13 watts in the radiation budget.

    If I take your numbers of ice being 0°C and air -40°C:

    for 0°C, 316 watts/m^2 radiated away
    for -40°C, 168 watts/m^2

    an error of about 150 watts.
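    The arithmetic is easy to check with a few lines of Python (emissivity taken as 1 here, which is a simplifying assumption):

```python
# Quick check of the black-body (Stefan-Boltzmann) figures quoted above.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4


def radiated(celsius, emissivity=1.0):
    """Black-body flux in W/m^2 for a surface at the given temperature."""
    return emissivity * SIGMA * (celsius + 273.15) ** 4


for t in (26.8, 28.9, 0.0, -40.0):
    print(f"{t:6.1f} degC -> {radiated(t):6.1f} W/m^2")

print(f"ice/air gap: {radiated(0.0) - radiated(-40.0):.0f} W/m^2")
```

    The fourth-power dependence is the whole point: a 40-degree error near the freezing point swamps the watt-level effects being debated.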

    As these tables show, http://isccp.giss.nasa.gov/products/browsesurf1.html
    (choose “skin surface temperature” in the menu, and “ascii” as the output for downloading data), there are differences of more than two degrees over most of the globe:

    One line from the top of the table (must be either arctic or antarctic):

    Air:
    243.912, 244.836, 245.338, 243.642, 243.293, 241.826, 242.496, 245.537
    Skin surface:
    224.470, 226.452, 227.834, 225.682, 222.531, 222.986, 227.104, 235.266

    and one line from somewhere in the middle, where the differences are smaller than 1 degree:

    Air:
    301.578, 301.622, 301.588, 301.574, 301.555, 301.582, 301.605, 301.548
    Skin surface:
    302.377, 302.388, 302.356, 302.386, 302.501, 302.495, 302.499, 302.361

    In this case, if the air temperature of 301.6 K is assumed, the radiation is 469 watts/m^2; the corresponding skin value of 302.4 K gives 474 watts/m^2.

    I am trying to illustrate that talking of radiation budgets with values of 1 and 2 and 4 watts/m^2 is futile when one is not using the correct temperatures in the study.

  179. Richard Telford

    Richard, years ago, I wrote a lot of code for modelling coal seams and mineral deposits (mainly coal seams, though) from borehole logs. The process was started by creating a polygon of influence around all the boreholes simultaneously, with due allowance for known faults etc and then triangulating the polygons. By definition they are all convex so that is a trivial process that can be done in k * log(n) time. I think the problem here is not susceptible to that process because of the very large distances across the “empty” polar region compared with the relatively small distances between the actual stations. An exploration company would, I think, do “infill drilling” above 80N in such a situation.
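    The polygon-of-influence idea (each grid cell takes the value of its nearest sample point, i.e. a Voronoi assignment) can be sketched like this; the station names, positions, and trend values below are made up purely for illustration:

```python
import math

# Hypothetical stations: (name, x_km, y_km, trend_degC_per_decade)
stations = [("A", 0.0, 0.0, 0.10),
            ("B", 800.0, 0.0, 0.30),
            ("C", 400.0, 700.0, -0.05)]


def nearest_station(x, y):
    """Polygon-of-influence assignment: the cell takes the value of
    whichever station is closest (a brute-force Voronoi lookup)."""
    return min(stations, key=lambda s: math.hypot(x - s[1], y - s[2]))


# Assign a coarse grid of cells to their controlling station and count
# how many cells each station ends up "speaking for".
grid = [(x, y) for x in range(0, 1001, 250) for y in range(0, 1001, 250)]
counts = {}
for x, y in grid:
    name = nearest_station(x, y)[0]
    counts[name] = counts.get(name, 0) + 1
print(counts)
```

    The weak point is exactly the one raised above: when the nearest station is 1200 km away, the cell still dutifully takes its value, with no indication that the "influence" has been stretched far beyond anything an exploration geologist would accept without infill drilling.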

    Pending getting the station data and performing some reasonableness checks against the empty region, I think the best that could be done would start with some kind of trend analysis – crudely, it must tend to get colder the closer one approaches the pole.

    What do you think?

  180. Willis

    The change in anomaly from one time to another is independent of the choice of the base period used to calculate the normal. However, the actual value of the anomaly at any given time is highly sensitive to the choice of the base period, and it is the anomaly (compared to 1951-1980 as the base period) that is being shown by Hansen, is it not?

    Do you know why it is that the whole of any given temperature record is not averaged to give the normal from which anomalies are calculated? After all, Mann was able to get the hockey stick partly by using a base period shorter than the record, which is kind of an analogy to my mind.

    DMI calculates an area-unweighted temperature on a 5-degree grid using ECMWF data. So there are more measurement points per unit of area as one moves towards the pole, and thus their temperature is not a true 80N-90N mean temperature. However, for example, the DMI January 2010 anomaly is about 5.7 degrees C colder than the GISS anomaly when compared to the base period 1958-2002. February 2010 also differs by several degrees in the same direction.

    Different weighting methods should not lead to such big differences. It is quite certain that GISS is too warm in 80N-90N. According to those responsible for the DMI data, they are going to convert their data to true mean temperatures in the near future. Then a real comparison with GISS can be made (and the result is quite obvious).

    When comparing to HadCRUT, GISS has much warmer global anomalies in January and February 2010 when the same base period is used. This difference comes from the grey areas, which are not included in HadCRUT.
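    The effect of leaving out the area weighting can be sketched with a toy 5-degree grid north of 80N; the temperatures below are invented for illustration only:

```python
import math

# Toy 5-degree grid north of 80N: two latitude bands, 72 longitude cells each.
# Temperatures are invented, colder toward the pole.
bands = [(82.5, -20.0), (87.5, -30.0)]  # (cell-centre latitude, temp degC)

cells = [(lat, t) for lat, t in bands for _ in range(72)]

# Naive mean: every grid cell counts equally, so the tiny cells crowding
# the pole are heavily over-represented.
unweighted = sum(t for _, t in cells) / len(cells)

# Proper mean: each cell's area is proportional to cos(latitude).
wsum = sum(math.cos(math.radians(lat)) for lat, _ in cells)
weighted = sum(math.cos(math.radians(lat)) * t for lat, t in cells) / wsum

print(f"unweighted {unweighted:.1f} C, area-weighted {weighted:.1f} C")
```

    Because it is colder poleward, the unweighted average comes out colder than the true area mean, which is why the unweighted DMI series cannot be compared directly with an area-weighted 80N-90N mean.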

  182. I’ve never thought the anomaly was unimportant, or the enemy. I just don’t like the idea of being told to not pay attention to the real temperatures behind the anomaly curtain.

  183. anna v (03:53:32), thanks for your perseverance:

    Re: Willis Eschenbach (Mar 27 00:11),

    … OK, thanks for waiting, I get an average ground temperature over 24 hours (using hourly temperatures) of 26.8°, and an average air temperature at 1.5 metres above the ground of 28.9°. So the ground in the summer is a couple degrees warmer than the air at the elevation above ground at which it is measured. In the winter it will be less.

    Well, your numbers must be reversed.

    I am trying to illustrate that talking of radiation budgets with values of 1 and 2 and 4 watts/m^2 is futile when one is not using the correct temperatures in the study.

    I swear I can’t make that site do a !@#$% thing … every time it just downloads a file that says:

    This script should be referenced with a METHOD of POST.
    If you don’t understand this, see this Forms overview.

    I tried it with both Safari and Firefox, I get nothing.

    When I go to the page in the message, I get nothing, 404, missing page.

    The site also says that the files are in an ftp folder at ftp://isccp.giss.nasa.gov/pub/data/ surface. Of course, being NASA, when you go there and click on the “surface” folder, they’ve misspelled it so it gives a “file not found”. Then when you figure that out and can actually get to the individual data files, they are misspelled as well. And when you hack all the way through that, they are in IEEE binary format. My tax dollars at work. What else can I try?

    OK, new plan. I just downloaded the Google Chrome browser, and I got that to work. I downloaded and averaged the equal-area dataset for SAT and SST (surface air and skin temperatures). That gave me an average air temperature of 288.4K, and a skin temperature of 287.9K. This is a difference of half a degree. So it looks like my intuition was about right.

    Finally, they say that they are giving the “Surface Air Temperature” and the “Surface Skin Temperature”. In their definition of the variables, they say:

    Surface Skin Temperature

    This parameter represents the solid surface physical temperature. As part of the ISCCP cloud analysis, the clear-sky infrared (wavelength of about 11 microns) brightness temperature is estimated at intervals of 30 km and 3 hr. The surface skin temperature is retrieved from these values by correcting for atmospheric emission and absorption of radiation, using data for the atmospheric temperature and humidity profiles, and for the fact that the surface infrared emissivity is less than one. The values of surface emissivity used are shown in the Narrowband Infrared Emissivity dataset. Because these values are determined under clear conditions, they will over-estimate the average daytime – summertime maximum temperatures and underestimate the average nighttime – wintertime minimum temperatures.

    OK, fair enough. And how about for Surface Air Temperature? … well … for that they have no definition at all. None. They give definitions for 18 different variables, but not for that one. So what are they using?

    I ask because this is a page giving satellite derived values for different variables, and I know of no satellite product that can give us the surface air temperature. The closest that you can get is the MSU lower troposphere product. So I haven’t a clue what those numbers they quote represent. If you do, please let me know.

    Let me say, however, that I agree entirely with your closing statement that “talking of radiation budgets with values of 1 and 2 and 4 watts/m^2 is futile”, not because of not using the correct temperatures in the study as you say (although that is a factor), but because we don’t know any of the values to that level of accuracy.

    This is why in my other thread about climate sensitivity (which is where this discussion really belongs), I am using what I clearly identify as estimates, such as “about 150 W/m2″ and “~ 20°C”. We don’t know any of these numbers to any great accuracy.

    We now return you to your regularly scheduled programming, featuring extrapolating temperatures 1200 km into the unknown …

    w.

  184. bradley13 (02:00:15)

    What I do not understand: why has no one dropped a couple of weather station onto the ice cap? Sure, it might not live more than a few months, but the cost would be trivial compared to the value of the data.

    As someone pointed out upstream, this has been done. Let me see if I can find it … OK, it’s here.

  185. Leone (10:05:17), thanks for the information.

    DMI calculates an area-unweighted temperature on a 5-degree grid using ECMWF data. So there are more measurement points per unit of area as one moves towards the pole, and thus their temperature is not a true 80N-90N mean temperature. However, for example, the DMI January 2010 anomaly is about 5.7 degrees C colder than the GISS anomaly when compared to the base period 1958-2002. February 2010 also differs by several degrees in the same direction.

    Hmmmm … I couldn’t find where DMI stations are located. From what you say, it sounds like they are using reanalysis output rather than station data. Is this the case? Reanalysis uses a computer to do the filling in of blank spots rather than the 1200 km extrapolation used by GISS. However … it’s still not data, it is the output of a computer model.

    Different weighting methods should not lead to such big differences. It is quite certain that GISS is too warm in 80N-90N. According to those responsible for the DMI data, they are going to convert their data to true mean temperatures in the near future. Then a real comparison with GISS can be made (and the result is quite obvious).

    Again, subject to ??? if they are using reanalysis data.

    When comparing to HadCRUT, GISS has much warmer global anomalies in January and February 2010 when the same base period is used. This difference comes from the grey areas, which are not included in HadCRUT.

    True. Even with GISS we can see the difference by looking at the 250 km smoothing versus the 1200 km smoothing. Here’s that comparison, showing the trends by latitude:

  186. Amino Acids in Meteorites (20:31:11) :

    It was only your line of comment on cigarette smoke that was snipped. Anyone could see that.
    Your comments related to this thread are still there. Even though you do not agree with the writer of this thread, you were not deleted for it.

    ———-
    You are missing two completely deleted comments:
    Willis Eschenbach (17:18:35) :
    Anu (20:46:33) :

    These comments followed directly from the line, which is still retained in this guest post:
    The Director of GISS is Dr. James Hansen. Dr. Hansen is an impartial scientist who thinks people who don’t believe in his apocalyptic visions of the future should be put on trial for “high crimes against humanity”.

    I pointed out that Dr. Hansen was referring only to the CEO’s of large fossil fuel companies, and the comments followed from that, and the comparison to the large tobacco companies lawsuit.
    Public relations and the public perception of science is a topic interesting to me, but I can see why it might be considered “off topic”.
    Enough said.

    Yes, I believe Willis would not delete a comment of mine just because we disagree. We have disagreed in the past, but his anger makes him argue better, not dirtier.

    ———-
    This would not be the case at RealClimate. Comments that are not in agreement with the writers there are customarily deleted even though they are on topic.
    It would be hard to know what has been deleted, unless it was your comment that was deleted, or you happen to see it before the Moderator gets to it (which I’ve seen on The Guardian, for instance).

    I think almost all opinions should be allowed on the Web – it’s not like it’s a waste of paper, such as Letters to the Editor in a newspaper. Obvious distractions, like Viagra ads or psychotic rants, should be deleted, but there are other ways to handle “unpopular” comments, such as show/hide on YouTube.

    If RealClimate does as you say (I’ve only seen a few articles there, and never commented), I’m against the practice. I would treat them as I do all sites:
    first Moderator deletion – 1 day ban from commenting
    second deletion – 1 week ban
    third deletion – 1 month ban
    fourth deletion – 1 year ban (effectively, lifetime ban, since I’ve never gone back)

  187. Anu (15:54:29) : edit

    Amino Acids in Meteorites (20:31:11) :

    It was only your line of comment on cigarette smoke that was snipped. Anyone could see that.
    Your comments related to this thread are still there. Even though you do not agree with the writer of this thread, you were not deleted for it.

    ———-
    You are missing two completely deleted comments:
    Willis Eschenbach (17:18:35) :
    Anu (20:46:33) :

    These comments followed directly from the line, which is still retained in this guest post:

    The Director of GISS is Dr. James Hansen. Dr. Hansen is an impartial scientist who thinks people who don’t believe in his apocalyptic visions of the future should be put on trial for “high crimes against humanity”.

    I pointed out that Dr. Hansen was referring only to the CEO’s of large fossil fuel companies, and the comments followed from that, and the comparison to the large tobacco companies lawsuit.
    Public relations and the public perception of science is a topic interesting to me, but I can see why it might be considered “off topic”.
    Enough said.

    Anthony snipped both yours and mine because it was heading for cigarette wars, and thus way off-topic. Fair enough, I have no complaint.

    Yes, I believe Willis would not delete a comment of mine just because we disagree. We have disagreed in the past, but his anger makes him argue better, not dirtier.

    Thanks, Anu. I view the snipping of any on-topic scientific comment as a high crime, and never do it no matter how much I disagree with it. Neither, as far as I know, does Anthony.

    ———-

    This would not be the case at RealClimate. Comments that are not in agreement with the writers there are customarily deleted even though they are on topic.

    It would be hard to know what has been deleted, unless it was your comment that was deleted, or you happen to see it before the Moderator gets to it (which I’ve seen on The Guardian, for instance).

    I think almost all opinions should be allowed on the Web – it’s not like it’s a waste of paper, such as Letters to the Editor in a newspaper. Obvious distractions, like Viagra ads or psychotic rants, should be deleted, but there are other ways to handle “unpopular” comments, such as show/hide on YouTube.

    If RealClimate does as you say (I’ve only seen a few articles there, and never commented), I’m against the practice. I would treat them as I do all sites:
    first Moderator deletion – 1 day ban from commenting
    second deletion – 1 week ban
    third deletion – 1 month ban
    fourth deletion – 1 year ban (effectively, lifetime ban, since I’ve never gone back)

    Realclimate is famous for censoring scientific questions and statements that don’t agree with the party line. Take a look here. I wrote a peer-reviewed article on the subject available here.

  188. FTA: ” In that paper, they note that annual temperature changes are well correlated over a large distance, out to 1200 kilometres (~750 miles).”

    Which temperatures? Temperatures at the equator? Temperatures in Asia? If they haven’t compared temperature records over a lengthy period at the Arctic, how can they know what correlations exist there? The poles are precisely where you would expect markedly different behavior than anywhere else.

    Another questionable tactic in all these plots is the range of colors used. A lay person looking at these pictures would naturally think that red is a lot different from blue. In reality, the whole map should be differing shades of blue.

  189. Re: Willis Eschenbach (Mar 27 12:50),

    Thanks for doing the average, and thanks for digging out the definition of :Surface Skin Temperature

    This parameter represents the solid surface physical temperature. As part of the ISCCP cloud analysis, the clear-sky infrared (wavelength of about 11 microns) brightness temperature is estimated at intervals of 30 km and 3 hr. The surface skin temperature is retrieved from these values by correcting for atmospheric emission and absorption of radiation, using data for the atmospheric temperature and humidity profiles, and for the fact that the surface infrared emissivity is less than one. The values of surface emissivity used are shown in the Narrowband Infrared Emissivity dataset. Because these values are determined under clear conditions, they will over-estimate the average daytime – summertime maximum temperatures and underestimate the average nighttime – wintertime minimum temperatures.

    Seems to me that the motions of the air currents are not accounted for, so they are not really getting the surface temperature. Another $%^& computer program :(.

    For example, if you look at the arctic temperatures on http://ocean.dmi.dk/arctic/meant80n.uk.php, you see huge variations that at this season can only be air motions. Estimating the surface temperature from these air temperatures (brightness at kilometre heights) cannot help but be off. Another example: summer in Greece, where you cannot walk barefoot on the rock, yet the air, cooled by the seasonal winds coming from Siberia, is 36°C.

    But you are right, this belongs to the sensitivity thread which is too many pages removed :(.

  190. Bart (18:43:43)

    FTA: ” In that paper, they note that annual temperature changes are well correlated over a large distance, out to 1200 kilometres (~750 miles).”

    Which temperatures? Temperatures at the equator? Temperatures in Asia? If they haven’t compared temperature records over a lengthy period at the Arctic, how can they know what correlations exist there? The poles are precisely where you would expect markedly different behavior than anywhere else.

    Read the Hansen paper I cited in the article. They show the correlations by latitude band. Arctic temperatures are correlated greater than 0.5 out to 1200 km … which means nothing about the trends.

  191. Willis Eschenbach (23:00:06) :

    “…which means nothing about the trends.”

    I would say “means little which could be useful in projecting trends…,” (in my profession, a 0.5 correlation coefficient would be interpreted as “not very well correlated”), if it were true in the first place, but I have significant doubts about even that.

    As usual, key information is absent. But, it appears to me that the higher latitude stations are likely located along similar latitude lines. I would fully expect that correlations would be less sensitive to longitude than they are to latitude, and that relatively high correlation among readings from neighboring stations at similar latitude would hardly be surprising. But, that does not imply that you could just draw a ring around each station and say everything in that ring has the same correlation as readings in the lateral direction. I would expect that, at the very least, contours of constant correlation would be elliptical.

    It’s kind of like, when they came out and said “we are 90% certain that global warming is blah, blah, blah.” Even a high school kid knows that, when you make up a number, you have to add some decimal places to make the grader think you at least had some basis for it, that some actual calculations were involved. If they had said “we are 91.7% certain,” it would have had a bigger impact. Here, they should have used ellipses.

    Indeed, Figure 3 shows several points at 1200 km which have very low and even negative correlation. I suspect those are readings between stations with significantly different latitude. And, needless to say, the extrapolation over the pole involves varying the latitude quite significantly.

    I’m not any kind of expert on atmospheric physics, but I would imagine that mixing of the atmosphere would be rapidly changing as you get closer and closer to the pole. I do not think it likely that points located 11 deg or more from the pole would be representative of what is happening there.

  192. Willis Eschenbach: “Hmmmm … I couldn’t find where DMI stations are located. From what you say, it sounds like they are using reanalysis output rather than station data. Is this the case?”

    ECMWF is a forecasting model. I don’t exactly know what kind of data is used as input, but I suppose that all the data which can be obtained is used. Thus the DMI data product can be regarded as the best knowledge that we have about 80N-90N temperatures.

    When the winter months from 2001 onwards are compared with GISS, several cases of similar divergence can be found. The probability of divergence increases with the years – and the direction is always the same: GISS shows warmer. I hope that DMI is able to publish true mean temperatures as soon as possible. Maybe they will hurry more if it is requested by many…

    GISS grey-area handling is truly worth a closer look, because much of the warming is calculated from there. If DMI shows different results, the whole GISS grey-area handling can be questioned.

  193. Willis,

    You appear to be unwilling to test if the extrapolation to 1200km is skilful. If extrapolations to 1200km lack skill, despite the strength of the correlation between climate stations this far apart, then you have the basis for a manuscript criticising GISTEMP (and you would be a hero to most of the readers here). On the other hand, if the extrapolations are skilful, then you owe James Hansen and coworkers an apology. Is it the risk of this latter eventuality that prevents you from trying to prove your argument?
    This isn’t a complicated analysis to run, it doesn’t require a mainframe, and the data is all public domain. What’s stopping you?
    100 lines of my R code says you owe Hansen an apology. Extrapolations from stations >60N are skilful (median station has positive RE statistic) at 1200km (indeed out to over 2000km – which was as far as I tested).

  194. Richard Telford (16:00:17)

    Willis,

    You appear to be unwilling to test if the extrapolation to 1200km is skilful. If extrapolations to 1200km lack skill, despite the strength of the correlation between climate stations this far apart, then you have the basis for a manuscript criticising GISTEMP (and you would be a hero to most of the readers here). On the other hand, if the extrapolations are skilful, then you owe James Hansen and coworkers an apology. Is it the risk of this latter eventuality that prevents you from trying to prove your argument?

    I tested the correlation between trends in the head post. As you can see, despite the fact that the correlations are in the range Hansen used (greater than 0.5), the Alaskan trends are all over the board.

    So it’s not clear what you are talking about. How is that not a test of whether the extrapolation is skillful?

    This isn’t a complicated analysis to run, it doesn’t require a mainframe, and the data is all public domain. What’s stopping you?

    100 lines of my R code says you owe Hansen an apology. Extrapolations from stations >60N are skilful (median station has positive RE statistic) at 1200km (indeed out to over 2000km – which was as far as I tested).

    Richard, you are not following the story. I never said that the correlations are not large out to long distances. They are. I said that the correlation means nothing about the trend. See how well your trends are correlated, and report back to us.

    Because if, as you say, “extrapolations … are skillful … out to over 2000 km”, why screw around? We can take one station in the Arctic and cover the entire Arctic ocean, think of the money we’ll save closing the others, waste of time really …

    PS – I love people who say I shouldn’t study this, I should study that, or I shouldn’t run this analysis, I should run that analysis. Truth is, I analyze what is of interest to me. I stop when I have found out what is happening.

    Having looked at theoretical “pseudo-temperatures”, and having seen how extremely poorly the good correlation of the Alaska stations is reflected in their trends, that’s all I care to do. I picked those stations at random because they had long records, no cherry picking, didn’t throw any stations out. If correlated stations in Alaska do that badly, I’m not interested in wasting my time on a detailed 1,200 station analysis. This is particularly true when I have shown five pseudo temperature records that are all correlated more than 0.90, yet have hugely different trends.
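    The pseudo-temperature point is easy to reproduce. Here is a sketch with five synthetic series that share their year-to-year wiggles but carry different underlying trends; the numbers are purely illustrative:

```python
import numpy as np

# Five synthetic "stations": identical year-to-year wiggles (a shared
# regional signal) plus deliberately different underlying linear trends.
years = np.arange(100, dtype=float)
shared = 10.0 * np.sin(years / 5.0)        # common interannual variability
trends = [-0.02, -0.01, 0.0, 0.01, 0.02]   # degC/yr
series = [shared + a * years for a in trends]

# Pairwise correlations are all very high ...
corrs = [np.corrcoef(series[i], series[j])[0, 1]
         for i in range(5) for j in range(i + 1, 5)]
print(f"minimum pairwise correlation: {min(corrs):.3f}")

# ... yet the fitted trends span 4 degC per century.
fits = [np.polyfit(years, s, 1)[0] for s in series]
print(f"fitted trends span {min(fits):.3f} to {max(fits):.3f} degC/yr")
```

    Because the shared wiggles dominate the variance, every pair correlates far above Hansen and Lebedeff's 0.5 threshold, while the long-term trends still differ by four degrees per century.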

    So Richard, you are more than welcome to run the analysis as far as you wish. I have shown that both in theory and in practice, correlation and trends don’t have much to do with each other. That’s enough for me. If you find a way to use one temperature station to reliably predict the trend of some other station 2,000 km away, let us know how. Until then, I’m happy with the amount I’ve done.

    Finally, if you want to get some traction, post your code and let some people play with it. From your description I can’t tell what you’ve done … what, for example, is a “median station”?

    Thanks,

    w.

  195. Leone (01:54:20)


    Willis Eschenbach: “Hmmmm … I couldn’t find where DMI stations are located. From what you say, it sounds like they are using reanalysis output rather than station data. Is this the case?”

    ECMWF is a forecasting model. I don’t know exactly what kind of data is used as input, but I suppose that every kind of data that can be obtained is used. Thus the DMI data product can be regarded as the best knowledge that we have about 80N-90N temperatures.

    If that’s the best we have, we’re in big trouble. Take a look at the last-day temperature of one year and the first-day temperature of the following year. For example, the year 1999 ends at about 246K. The next year, 2000, starts at 262K, a full 16K greater.

    How unusual is this? The standard deviation of the day-to-day changes over the last 15 years of the record is 1.0K. The biggest daily change in that time (ignoring year-end changes) is 5.9K.

    From the last day of one year to the first day of the next, on the other hand, the standard deviation of the one-day change is 6K, more than the biggest change in the rest of the data. And six of the fifteen years have a last-day-to-first-day change of more than 5K.
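    [The sanity check described here — comparing within-year day-to-day changes against the year-boundary jump — can be sketched in a few lines. This uses synthetic data with discontinuities injected at the year boundaries to mimic the described artifact, since the actual DMI series isn’t reproduced here:]

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-in for a daily temperature series (in K): three years
    # of noise around a base level, with an artificial jump injected at each
    # year boundary to mimic the discontinuity described above.
    base, injected_jumps = 250.0, (0.0, 16.0, -12.0)
    years = []
    for jump in injected_jumps:
        base += jump
        years.append(base + rng.normal(0.0, 0.7, 365))

    within = np.concatenate([np.diff(y) for y in years])  # changes inside each year
    boundary = np.array([nxt[0] - prev[-1] for prev, nxt in zip(years, years[1:])])

    print(f"within-year daily change std: {within.std():.2f} K")
    print(f"largest within-year change:   {np.abs(within).max():.2f} K")
    print(f"year-boundary jumps (K):      {np.round(boundary, 1)}")
    ```

    A year-boundary jump far outside the distribution of ordinary daily changes is exactly the red flag described above.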

    This is all too typical in the climate model sphere. They come up with some brilliant program, and run it … but then they don’t error check it in any adequate fashion, and we end up with garbage. So at this point, we can’t trust the DMI data at all. There’s something seriously wrong at the changeover of the years, and we don’t know how far it goes. Might be trivial and insignificant, might be big, we don’t know …

  196. Hi Willis,

    Great analysis.

    The correlation between data x_i and y_i is maximal, 1, precisely when the deviation of each y_i from its mean is the same positive multiple of the deviation of the corresponding x_i from its mean.

    This means that you can take any data x_i and transform it into y_i by
    -adding the same quantity to each x_i
    -multiplying each x_i by the same positive amount
    and the correlation will still be 1.

    Take e.g 1,2,3,4.
    Add -10 to each to get -9, -8, -7, -6
    Multiply each by 100 to get
    -900, -800, -700, -600
    The correlation of the latter to 1,2,3,4 is still 1.

    That leaves A LOT of leeway for making up functions with perfect correlation…
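    [The worked example above can be verified in a couple of lines — a sketch using NumPy:]

    ```python
    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = (x - 10.0) * 100.0   # shift by -10, then scale by 100 -> [-900, -800, -700, -600]

    # Pearson correlation is unchanged by any positive affine transform,
    # so the correlation of the transformed data with the original is still 1.
    r = np.corrcoef(x, y)[0, 1]
    print(r)
    ```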

    That’s what was exploited by Hansen: it’s voodoo math – or rather good math used for voodoo purposes.

    And a psychological point. The aim of Hansen isn’t only skewing the trend. The scary red colored map is a goal in itself…

Comments are closed.