Arctic isolated versus "urban" stations show differing trends

I’ve reposted this here in its entirety with permission from Pierre Gosselin of “No Tricks Zone”, and it is well worth the read. Much of this work was inspired by posts that have appeared on WUWT. Ed Caryl has done a great job pulling various threads of info together. One generally doesn’t think of any Arctic Circle outposts as being “urban”, but the fact is that islands of humanity, essentially small towns, surround many of these stations. And in the Arctic, you produce a lot of energy (which has to go somewhere) or you die. What I find most interesting is the plot of “isolated” stations versus the Atlantic Multidecadal Oscillation (AMO): a clear correlation with the driver of those temperatures. – Anthony

Light In Siberia

By guest writer Ed Caryl

Arctic stations near heat sources show warming over the last century. Arctic stations that are isolated from manmade heat sources show no warming. The plots of “isolated stations” and “urban stations” below clearly illustrate the differences.

Stevenson Screen, Verhojansk, Russia

All the GISS temperature anomaly maps show the Arctic warming faster than the rest of the globe, especially northern Alaska and Siberia, but the satellite data shows a different pattern. See the 2 charts for 2009 that follow. The GISS surface map:

Satellite chart:

The baseline period selected for the GISS surface temperature chart is the 1933 to 1963 Atlantic Multi-decadal Oscillation (AMO) warm period. This period more closely matches today’s temperatures than the default 1951 to 1980 cool period that GISS uses. The satellite data uses the average over the satellite period since 1979, the modern warm period.

The satellite data show cooling in central Siberia, similar to the surface anomaly map, and very little warming for most of Alaska. They also show cooling for the Antarctic Peninsula, where the surface map shows warming. But there is a scattering of hot grid squares across the GISS surface station map for the Arctic. So what is going on?

I selected the stations that correspond to those warm grid squares, as well as other stations at the same latitudes. In this age of everyone carrying a camera and posting photos on the Internet, there is a lot of information available on these stations. For some I could locate the Stevenson screens; for most I’ve found pictures of the surroundings; others have investigated many of these sites already, and links to that research are included. I downloaded the raw temperature data from GISS for the 24 stations closest to the North Pole, all of which are classified as “rural”.

“Urban” Arctic Stations

Contrary to GISS claims, many of these stations are actually not “rural” with respect to their siting quality. Many are at airports associated with sizable towns, or at research stations with sizable staff and infrastructure. In the Arctic, any town of more than a few families can be a large heat source. In the case of many towns in Russian Siberia, “central heating” takes on a whole new meaning. These towns have a central power plant that provides electricity and steam heat to the whole town. Large pipes, both insulated and un-insulated, carry steam, water, and sewage up and down the streets, to and from each dwelling. These pipes cannot be buried because of the permafrost, so they are elevated, and at street crossings are elevated 4 or 5 meters. The temperature differential between these pipes and the surrounding air can be 140°C in winter, and even more for a pressurized system.
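To see why elevated steam pipes matter thermally, a back-of-envelope convective heat-loss estimate helps. The pipe diameter, heat-transfer coefficient, and run temperature below are assumed round numbers for illustration, not measurements from any of these towns:

```python
import math

def pipe_heat_loss_w_per_m(diameter_m, delta_t_k, h_w_m2k):
    """Rough convective heat loss per metre of bare pipe: Q = h * A * dT."""
    area_per_m = math.pi * diameter_m  # lateral surface area per metre of pipe
    return h_w_m2k * area_per_m * delta_t_k

# Assumed figures: a 0.3 m bare steam pipe, 140 K warmer than the winter air,
# with a modest convective coefficient of 20 W/m^2/K.
q = pipe_heat_loss_w_per_m(0.3, 140.0, 20.0)
print(f"~{q:.0f} W per metre of bare pipe")  # prints "~2639 W per metre of bare pipe"
```

On these assumed numbers, every metre of un-insulated pipe sheds heat on the order of a small space heater, which is why a whole town of such plumbing is far from a negligible heat source.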

But GISS applies the same Urban Heat Island (UHI) criteria to all stations globally, regardless of latitude or average temperature. They look at satellite night brightness and population to judge whether a station is urban or rural. By GISS criteria, all the stations in the high Arctic are rural, so there are no corrections for UHI.

But let’s look at each of these “urban” locations. Each name is also a link to the GISS surface temperature raw data.

List of Urban Arctic Stations (see the annex at the end of this post for details on each station)

1. Kotzebue, Ral (66.9 N, 162.6 W), Alaska

2. Barrow/W. Pos (71.3 N, 156.8 W), Alaska

3. Inuvik (68.3 N, 133.5 W), Canada

4. Cambridge Bay (69.1 N, 105.1 W), Nunavut, Canada

5. Eureka, N.W.T. (80.0 N, 85.9 W), Canada

6. Nord Ads (81.6 N, 16.7 W), Northeast Greenland

7. Svalbard Luft (78.2 N, 15.5 E), Norway

8. Isfjord Radio (78.1 N, 13.6 E), Norway

9. Gmo Im.E.T. (80.6 N, 58.0 E), Russia

10. Olenek (68.5 N, 112.4 E), Russia

11. Verhojansk (67.5 N, 133.4 E), Russia

12. Cokurdah (70.6 N, 147.9 E), Russia

13. Zyrjanka (65.7 N, 150.9 E), Russia

14. Mys Smidta (68.9 N, 179.4 W), Russia

15. Mys Uelen (66.2 N, 169.8 W), Russia

The following graphic is a temperature chart for 10 of the above stations (5 of the shorter records were left out to avoid over-crowding). All are warming, some faster than others. Barrow, for which we have the UHI study, is not the fastest warming.

Temperature trends of the “urban” stations.

Isolated Stations

Now let us look at the isolated stations, which are located at latitudes similar to those of the “urban” stations above. One important thing to note about these isolated stations: there is limited electrical power, so incandescent light bulbs in the Stevenson screens are unlikely. Detailed descriptions of these stations are given in the annex at the end of this report.

16. Alert, N.W.T. (82.5 N, 62.3 W), Canada

17. Resolute, N.W. (74.7 N, 95.0 W), Canada

18. Jan Mayen (70.9 N, 8.7 W), Norway

19. Gmo Im.E. K. F (77.7 N, 104.3 E), Tamyr Peninsula, Russia

20. Ostrov Dikson (73.5 N, 80.4 E), Russia

21. Ostrov Kotel’ (76.0 N, 137.9 E), Russia

22. Mys Salaurova (73.2 N, 143.2 E), Russia

23. Ostrov Chetyr (70.6 N, 162.5 E), Russia

24. Ostrov Vrange (71.0 N, 178.5 W), Russia

Now here is the chart of the temperatures of these isolated stations, which are not subject to manmade structures or heat sources.

Isolated Stations

Note that most of the trends are flat or decreasing. Only Resolute and Ostrov Vrange are increasing slightly. Both of those might be slightly influenced by UHI. The longest records clearly show warming in the late 1930s and ’40s, and cooling in the 1960s, and none show a hockey stick. The GISS data for Alert ends in 1991, though the weather station is still there, and still reporting. Data for Mys Salaurova and Ostrov Chetyr also ends at about that time, probably due to the fall of the Soviet Union.

Here is an average of all the isolated stations:

Isolated Stations – Average

Note that the peak-to-peak trend is nearly zero. The linear trend is about 0.4°C/century, but the R² value (the statistical significance for the trend) is very low, 0.023.
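For readers who want to reproduce that kind of number, here is a minimal pure-Python sketch of fitting a linear trend and computing R². The anomaly series is synthetic (an assumed 0.4 °C/century trend plus assumed year-to-year scatter), not the actual isolated-station average:

```python
import random

def linear_fit(xs, ys):
    """Ordinary least squares: returns (slope, intercept, r_squared)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_tot = sum((y - my) ** 2 for y in ys)
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    r2 = 1.0 - ss_res / ss_tot if ss_tot > 0 else 0.0
    return slope, intercept, r2

# Synthetic annual anomalies: a 0.4 °C/century trend buried in
# year-to-year scatter of 0.8 °C (both numbers assumed for illustration).
random.seed(1)
years = list(range(1900, 2001))
anoms = [0.004 * (y - 1900) + random.gauss(0.0, 0.8) for y in years]
slope, _, r2 = linear_fit(years, anoms)
print(f"trend = {slope * 100:.2f} °C/century, R^2 = {r2:.3f}")
```

With scatter of that size, the fitted R² lands in the low hundredths, the same order as the 0.023 quoted above, even though a real trend is present in the synthetic data.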

Here is a plot of the AMO versus the average temperature of the isolated stations.

The temperature as measured at stations isolated from any UHI is simply tracking the AMO.

Looks like an awfully good fit. There is very little, if any, global warming. We need to wait until the bottom of the next AMO cycle to get a decent reading of global temperature change. That will be in about 2050 if the AMO cycles as it has since 1850.

----------------------------------------

Annex – station descriptions

The “urban” stations, nos. 1-15

1. Kotzebue, Ral (66.9 N,162.6 W),

2. Barrow/W. Pos (71.3 N,156.8 W)

These towns are of similar size, and are growing at the same rate. In 1940, both towns had a population of 400. In 1980 both had just over 2000 population, and now they both have over 3000 people. Both have airports of sufficient size to handle multi-engine turboprop and small jet aircraft, and both are served daily by regional airlines. Kotzebue is on a peninsula and the airport is across the middle of the peninsula, somewhat restricting the growth of the town. Barrow has somewhat the same problem due to a series of small ponds around the town and the airport. Barrow was studied for UHI effects in 2003. That paper was in the International Journal of Climatology here. That paper describes the UHI average temperature increase in winter as 2.2°C compared to the surrounding hinterland. GISS data indicates that Barrow average temperature has increased over the years as population has increased. (See below, or click on link above.)

Barrow, AK

Source: http://en.wikipedia.org/wiki/File:BRW-g.jpg

Source: http://en.wikipedia.org/wiki/File:OTZ-g.jpg

The Barrow NWS station (Stevenson Screen) is here. On the airport picture, it is at the base of the rotating beacon tower. Kotzebue NWS station is not visible in published pictures.

3. Inuvik (68.3N, 133.5W)

Inuvik is a relatively new town, begun in 1954. The population as of 2006 has grown to about 3500 people. Because it is a “planned” community in the Arctic, built on permafrost, the water and sewage infrastructure is above ground in heated and insulated “utilidors”, like the heating systems in Siberia. The weather station, from weather reports, Google Earth and Google Street View, appears to be at the airport, in a compound just north of the entrance.

4. Cambridge Bay (69.1 N, 105.1 W), Nunavut, Canada

There is a Wikipedia picture of Cambridge Bay here. The population has grown from just a few people in the 1940s to about 1500 today. It also has an airport with daily regional airline service.

5. Eureka, N.W.T. (80.0 N, 85.9 W), Canada

There are only four stations at or north of 80° latitude: Eureka, Alert, Nord, and Krenkel (Gmo Im.E.T.). Only Eureka has an unbroken temperature record to the present date, and it begins in 1947. The population at Eureka has never been high. In winter it has always been 4 or 5 men. In summer, the population increases to as high as 20. The station infrastructure, though, has expanded through the years. Each year, some of those 20 workers add or expand buildings. In the beginning, it was one or two buildings, with water and sewage handled in tanks and barrels internal to the buildings. The Stevenson screen was originally placed where the blue New Main Complex building is now. When that was built, the meteorological instruments were moved to the current location. Over time, the water supply, plumbing, and sewage treatment were upgraded and the outfall pipe installed. It, of course, must be heated to facilitate flow to the sewage lagoon. All the water pipes exposed to the outdoors must be heated to prevent freezing.

Eureka station

Image from a recent article by Anthony Watts on WUWT here.

6. Nord Ads (81.6 N, 16.7 W), Northeast Greenland

Nord Station

Nord is one of the northernmost inhabited places on earth, on the northeast coast of Greenland. It was built in the period from 1952 to 1956 as an emergency airfield for aircraft operating out of Thule. Access is impossible by sea because the sea ice never moves away from the coast there. Legend has it that “Blowtorch” Murphy, a mythic arctic construction worker, scraped the first runway using a parachute-dropped Caterpillar tractor after he himself parachuted onto the site. His nickname came from his habit of wearing a lit blowtorch hanging from his waistband on a wire; a lit blowtorch being somewhat useful when working outside when it’s 40° below zero.

There are about 40 buildings at Nord. Not all of them are continuously heated, but those near the Stevenson Screen are. The winter population is 5 or 6 men. More pictures here.

7. Svalbard Luft (78.2 N,15.5 E)

8. Isfjord Radio (78.1 N,13.6 E)

These two stations are only 47 kilometers apart. But data for both is fragmentary for 1976 and 1977, and there is no overlap. Svalbard Luft (airport) has been discussed on WUWT here and here, so I won’t cover it in detail here. Warwick Hughes has an article on Isfjord Radio here that makes the case for warming of Isfjord Radio due to the moving of sea ice away from the islands in summer since 1912. Neither station in Svalbard shows on the anomaly map because there was no common station in both the base period and the anomaly period. Here’s a map with 1998 to 2008 as the base period where Svalbard appears.

9. Gmo Im.E.T. (80.6 N, 58.0 E)

This is the Krenkel meteorological station on Hayes Island, or Ostrov Kheysa in Russian, in the Franz Josef Land Archipelago, Russia. Link The station has been moved or re-built twice since it was established. It was moved from Hooker Island (article in German) in 1957/58. A fire destroyed the power station in 2000, and it was rebuilt in 2004 closer to the shoreline. The GISS record is from 1958 with a gap from 2001 to 2009. The population was as high as 200 during Soviet times, but is down to 4 or 5 now. The population and the temperature seem to track roughly during Soviet days, and the move in 2004 was to a warmer location. In the picture you can see the old buildings on the ridge in the distance. The red grid-square on the anomaly map above corresponds to this station.

Source: http://www.sevmeteo.ru/foto/15/88.shtml

10. Olenek (68.5 N,112.4 E)

This is the town of Ust’-Olenek, Russia.

Photo sources here and here.

The town doesn’t look like much, but notice the Tundra Buggies parked next to the Stevenson screens. It is on the Laptev Sea, on the northern Siberian coast, but on a peninsula on a south-facing beach. The buildings are right on the shore. The wide view above was taken from out on the ice. This is one of the few places in Russia where the Google Earth satellite view actually has enough resolution to see the Stevenson screens. They are much too close to the heated building.

11. Verhojansk (67.5 N,133.4 E)

This is one of the “centrally heated” towns in Russian Siberia. The picture at the top of this article is of the Stevenson screen. Verhojansk is called the “cold pole” of the earth, but the measurements are too warm by far. Look closely at the picture. Any photographer will note that the warm glow inside the Stevenson screen is just the color temperature of an incandescent light bulb. If the steam heat in the town isn’t enough, or the cattle in the pole-barns in the distance, the heat from the light bulb will warm up the measurements. This site was covered on WUWT here and here. Anthony Watts notes that warm anomalies would appear and disappear in this part of Russia “as if a switch were thrown”. Could it be as simple as the switch on that light bulb?

12. Cokurdah (70.6 N,147.9 E)

Also spelled Chokurdakh. The population has been dropping in recent years, but was still over 2500 people in 2002. The town is sandwiched between the Indigirka River and the airport. There is no way to tell where the Stevenson Screen is located, but the infrastructure at the airport blends right into the town. See an aerial photo here.

13. Zyrjanka (65.7 N, 150.9 E) Also spelled Zyryanka, another steam-heated town in eastern Siberia, well inland. The airport is in this picture on the north edge of town, along the Kolyma riverbank. This airfield was built during WWII as a stop for aircraft being ferried to the Soviet Union from Alaska. A second airport 7 miles west of town was probably built during the Cold War for the military. The town was established in 1931. The population is currently about 3500; during the Soviet era it was up to 15,000.

14. Mys Smidta (68.9 N,179.4 W)

Or Cape Schmidt. John Daly wrote a bit about this location in 2000 (scroll way down in the article). The population was nearly 5000 in 1989, but has dropped since the fall of the Soviet Union. The population now is probably less than 1000. It is on the north coast of eastern Siberia, nearly at 180° longitude. The airbase there was built in 1954 as a staging base for any bombers headed for the U.S. It is still used by a regional airline.

15. Mys Uelen (66.2 N,169.8 W)

Or Cape Uelen. This is on the easternmost tip of Siberia, across Bering Strait from Kotzebue, Alaska. The current population is about 700 people. It is also centrally steam heated. The town is restricted by the geography, on a narrow spit sticking out into the sea, backed by a cliff on the landward side. The airport is a helipad. Cargo and fuel arrives by barge in the summer.


The isolated stations, nos. 16-24

16. Alert, N.W.T. (82.5 N, 62.3 W), Canada

Alert, Canada has had a weather station since 1951. The population has never been more than 4 or 5 in the winter, with a higher population in the summer. I could not definitively locate the Stevenson screen, but there are two possibilities in this photo, both well away from the buildings.

17. Resolute, N.W. (74.7 N, 95.0 W)

The population of this Canadian station rose from zero prior to 1947, to 229 in 2006. There is an airport here, and the Stevenson Screen can be seen across the aircraft parking area from the airport terminal at the left edge of the photo.

18. Jan Mayen (70.9 N,8.7 W)

Pictures of the station are here, and a web site is here. The 18 people on the island live at Olonkinbyen, or Olonkin ā€œCityā€. The meteorological station is 2.6 km away. The 4 people that work there live in Olonkin City. The Stevenson Screen appears to be well away from the station building, and the surroundings have probably not changed since the station was built.

19. Gmo Im.E. K. F (77.7 N, 104.3 E)

This is a Russian station on the Tamyr Peninsula at Cape Chelyuskin (Mys Chelyuskin). Nothing is visible at that location on the Google satellite view, but the resolution is very low. I found an article by Warwick Hughes dated September 2000 that speaks of cooling of the Tamyr Peninsula here. He also talks about ā€œnon-climateā€ warming of Verhojansk and Olenek.

20. Ostrov Dikson (73.5 N, 80.4 E)

Dikson, Russia airfield

This is Dickson Island in English. There is a town of Dikson 10 kilometers away on the mainland. The airport is on Dikson Island at the point called Ostrov Dikson on the map below. Pictures of the airport can be seen here. The town is pictured on this 1965 stamp.

Wikipedia link

21. Ostrov Kotel’ (76.0 N, 137.9 E)

The full name is Ostrov Kotel’nyy. In English this is Kettle Island. The first documented explorer found a copper kettle, so obviously he was not the first person to find the island. A single building is barely visible on Google 3D maps at the “settlement” known as Kalinina. This may be the meteorological station. No other signs of civilization can be seen on the whole island.

22. Mys Salaurova (73.2 N,143.2 E)

This is also spelled Mys Shalaurova. The station is on the south-facing shore of an island and is visible on Google Earth here. There is a tide gauge, and the tide data is on that same page.

23. Ostrov Chetyr (70.6 N,162.5 E)

The full name is Ostrov Chetyrekhstolbovoy. This is a small island in the East Siberian Sea in the Medvezhy Island (Bear Island) group.

Map source here.

A description of the place is found here: “A polar meteorological station and a radio station are situated on the shore of a small bay which indents the S side of the island.”

24. Ostrov Vrange (71.0 N,178.5 W)

This is otherwise known as Wrangel Island. It is about 125 kilometers off the Siberian coast on the 180th meridian. The weather station is at Ushakovskiy, on a spit at Rogers Bay, at the right in this picture, well separated from the village. One building in the village is visible at the left. Link

The population in the village grew to as many as 180 people in the 1980s, but when the Soviet Union dissolved, subsidies declined and the population moved to the mainland. The last villager was killed by a polar bear in 2003. The population at the weather station, when occupied, has always been 4 or 5.

September 22, 2010 8:04 am

The Arctic is very little affected by climatic conditions elsewhere in the Northern Hemisphere, while it is the critical factor in North Atlantic temperature movements.
http://www.vukcevic.talktalk.net/NFC1.htm

Murray Duffin
September 22, 2010 8:42 am

Barrow was served by ski plane in winter and float plane in summer until the airstrip was built, which was less than 20 years ago if memory serves. As I recall the airport matches a jump in the Barrow temperature record.

Thomas
September 22, 2010 8:50 am

It really is worse than I thought! Corrupted data wherever you look. Thanks for bringing this to our attention!

Editor
September 22, 2010 8:51 am

This is a great post. It is something that has been crying out to be written for some time. From mapping station data it was pretty clear to us that there are many stations in the Arctic where, for the period up to 1939 and the more recent period, warming is similar in both rate and extent. The ‘sine wave’ that follows the AMO shows up in the data for Arctic stations we included here:
http://diggingintheclay.wordpress.com/2010/09/01/in-search-of-cooling-trends/
The ‘cooling trends’ of the title are only due to the temperatures in the 1930s being higher than present.
Incidentally, in GHCNV3 our preliminary analyses show extended periods for many Arctic stations (among others) – i.e. data for longer periods than in GHCNV2. This is particularly noticeable in the Canadian Arctic. Blog post on it soon.

September 22, 2010 9:00 am

Excellent. This is important data. If places like CRU were not so distracted by having to make the case for AGW, they could be doing this sort of basic, decent work to add to our knowledge.

September 22, 2010 9:07 am

Ed Caryl: Thanks.

Maud Kipz
September 22, 2010 9:08 am

the R^2 value (the statistical significance for the trend) is very low, 0.023

This betrays a fundamental misunderstanding of statistics. The R^2 value is a measure of linear dependence between two (random) variables, and nothing more. A trend with extremely high significance may still have a low R^2 if the trend is non-linear or if the observations are noisy.
Can you clarify how you classified sites as “urban” and “rural”? The inclusion of Eureka and Nord Ads in “urban” seems problematic, as you mention that each site has a winter population of less than ten. If these sites are switched to “rural”, then the “rural” sites are on average at a higher latitude than the “urban” ones (Wilcoxon rank sum test, p = 0.014) and that alone may explain away the whole pattern.
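Maud Kipz’s point about noise can be demonstrated directly: the same underlying trend gives a far lower R² once scatter is added, while the fitted slope stays near the true value. A sketch using synthetic data (all numbers assumed, not station records):

```python
import random

def ols(xs, ys):
    """Least-squares slope and R^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return b, 1.0 - ss_res / ss_tot

random.seed(7)
xs = list(range(100))
true_slope = 0.01
clean = [true_slope * x for x in xs]                         # noiseless trend
noisy = [true_slope * x + random.gauss(0, 1.0) for x in xs]  # same trend + noise

b_clean, r2_clean = ols(xs, clean)   # R^2 is essentially 1.0
b_noisy, r2_noisy = ols(xs, noisy)   # R^2 collapses; the slope estimate survives
print(r2_clean, r2_noisy, b_noisy)
```

So a low R² on its own says the series is noisy relative to the trend, not that the trend is statistically insignificant.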

September 22, 2010 9:21 am

“and that alone may explain away the whole pattern.”
Not so. It is the trend that is being considered, not the absolute temperature.

September 22, 2010 9:25 am

Interesting. Here is what I do not like about his work, at least as I understand it — I would greatly prefer to see this work done on some sort of double-blind system. One group, without any knowledge of station temperature numbers, sorts the stations (e.g. urban vs. rural) while another works on the temperature trends. This way there is no danger of the sorting decisions being pre-biased by knowledge of their characteristics (something that arguably happens all the time in dendro-climatology).

Tim F
September 22, 2010 9:32 am

I’m sure these stations are not cheap, but they are not that expensive either (compared to, say, the cost of CO2 reduction). It should be simple to get a set of ~10 identical stations set at various locations within a few km of each other.
Place them varying distances and directions from a location like Barrow AK. Then see if there is a correlation for temperature with various factors like distance, direction, wind direction, wind speed, etc. For example, if the “downwind” thermometers tend to read high, then that would be a clear indication of the UHI effect.
The same would apply to ANY site. This seems like a simple step to quantify the significance of UHI effects. With matched thermometers, it should be easy to spot differences of 0.1 C from station to station.
One of the hallmarks of a good scientific study is repeatability. Before investing trillions in mitigation, we should invest a few million in studying the equipment and siting. I have never heard of such a study, but I admit I am not familiar with climatology studies. Does anyone know of such an experiment ever being done?
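The detectability claim above can be sketched quickly: with paired simultaneous readings, a 0.1 °C offset stands far above the standard error of the mean after a year of daily pairs. The offset and per-reading scatter below are assumed figures for illustration:

```python
import math
import random

random.seed(42)
n_days = 365
offset = 0.10     # hypothetical warm bias at the "downwind" station, °C
noise_sd = 0.05   # assumed per-reading instrument scatter, °C

# Daily paired differences: (downwind reading) - (upwind reading).
diffs = [offset + random.gauss(0, noise_sd) - random.gauss(0, noise_sd)
         for _ in range(n_days)]

mean_diff = sum(diffs) / n_days
var = sum((d - mean_diff) ** 2 for d in diffs) / (n_days - 1)
sem = math.sqrt(var) / math.sqrt(n_days)  # standard error of the mean difference
print(f"mean offset = {mean_diff:.3f} +/- {sem:.3f} degC")
```

With these assumptions the standard error is a few thousandths of a degree, so a 0.1 °C station-to-station difference would indeed be easy to spot.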

DaveJR
September 22, 2010 9:37 am

Maud Kipz wrote: “If these sites are switched to “rural”, then the “rural” sites are on average at a higher latitude than the “urban” ones and that alone may explain away the whole pattern.”
Can you volunteer any hypothesis as to why that may be? AFAIAA, the Arctic is postulated to be one place where the warming effect of CO2 will be most visible, i.e. high warming trends. Why then, as you travel further north, would the warming trend seem to completely disappear? These results need careful examination.

Ed Caryl
September 22, 2010 9:41 am

Yes, Nord has less than 10 people in the winter. But they have a lot of infrastructure to maintain. They keep the runway plowed 300 days a year. This requires the snowplows to be kept in heated garages; otherwise there is no starting them. They keep many sled dogs there because it is also a base for the Sirius Patrol, so the population is more than people. Fuel tanks and the power generator are outside and heated. Diesel fuel is a thick gel at -40°C, so it must be heated.
One of the problems with arctic stations is that no one wants to go very far outside in the winter, so the instruments tend to be too close to the heated buildings.
One of the problems with arctic stations is that no one wants to go very far outside in the winter, so the instruments tend to be too close to the heated buildings.

Ed Caryl
September 22, 2010 9:47 am

“double-blind system”?
A bit hard to do when there is just me!

Ed Caryl
September 22, 2010 9:51 am

Tim F
Yes, it was done for Barrow. See the link in the Barrow discussion.

GeoFlynx
September 22, 2010 9:57 am

Ice doesn’t melt by suggestion or supposition and the overall Arctic trend is undoubtedly downward. Have you looked at the most recent ice extent data (“ruh-roh!”)?

September 22, 2010 10:07 am

“Ice doesnā€™t melt by suggestion or supposition and the overall Arctic trend is undoubtedly downward”
Arctic sea ice melts primarily from warm water entering the Arctic Ocean past Spitzbergen. There has been lots of such water of late, following the effect of 30 years of strong El Niños working its way poleward.
Such ocean cycles are perfectly capable of reducing Arctic sea ice even whilst the Arctic air might be on a slow downward trend if UHI effects are properly excluded.

September 22, 2010 10:17 am

While publishing their article in English, authors should use correct English transliterations of Russian geographical names, rather than mixing German- and English-style transliterations (for example, should be “Verkhoyansk”, not “Verhojansk” — the latter is how Germans write it).

Wolley
September 22, 2010 10:18 am

Have you looked at the most recent ice extent data (ā€œruh-roh!ā€)?
Have You?
http://www.ijis.iarc.uaf.edu/seaice/extent/AMSRE_Sea_Ice_Extent_L.png

John F. Hultquist
September 22, 2010 10:24 am

Maud Kipz says: at 9:08 am
“This betrays a fundamental misunderstanding of statistics.”
That caught my attention also. Regarding “r”, or the Pearson product-moment correlation coefficient:
“It is widely used in the sciences as a measure of the strength of linear dependence between two variables.”
http://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient
The graphic with the above article might be helpful for some folks. The following is more directly related to the issue raised by Maud Kipz’s comment:
Regarding r^2, or the “Coefficient of Determination”, it
“is useful because it gives the proportion of the variance (fluctuation) of one variable that is predictable from the other variable.”
http://mathbits.com/mathbits/tisection/statistics2/correlation.htm
Note the word “proportion” in the quote.

P Gosselin
September 22, 2010 10:28 am

As I prepared Ed’s essay for WordPress, I also checked many other surface station plots of other locations.
I noticed a lot of boomerangs out there, and very few hockey sticks.

Paul Wescott
September 22, 2010 10:42 am

Tim, see the Hinkel study cited for Barrow UHI. Basically as you describe.
Murray, State archives show Barrow airport construction in the 60s. I suspect that there was an earlier crude strip in support of NPR-A and DEW line work. The latest temp trend jump takes place as pipeline work ramped up. The town has doubled in size and its housing and other facilities have been constructed or steadily improved, enlarged, etc., over the last 30 years.

Michael Larkin
September 22, 2010 10:47 am

I thought this was a terrific, painstaking article. I immediately recalled the posts here by Roy Spencer about UHI vs population density (identified with the help of Rick Wermer’s excellent guide, see large blue/purple button at top right of WUWT home page):
http://wattsupwiththat.com/2010/03/03/spencer-using-hourly-surface-dat-to-gauge-uhi-by-population-density/
http://wattsupwiththat.com/2010/03/04/spencers-uhi-vs-population-project-an-update/
http://wattsupwiththat.com/2010/03/10/spencer-global-urban-heat-island-effect-study-an-update/
What Dr. Spencer said at the first URL above was:
“This graph shows that the most rapid rate of warming with population increase is at the lowest population densities. The non-linear relationship is not a new discovery, as it has been noted by previous researchers who found an approximate logarithmic dependence of warming on population.
“Significantly, this means that monitoring long-term warming at more rural stations could have greater spurious warming than monitoring in the cities. For instance, a population increase from 0 to 20 people per sq. km gives a warming of +0.22 deg C, but for a densely populated location having 1,000 people per sq. km, it takes an additional 1,500 people (to 2,500 people per sq. km) to get the same 0.22 deg. C warming. (Of course, if one can find stations whose environment has not changed at all, that would be the preferred situation.)”
Even relatively small populations (20 people) can give rise to detectable UHI increases. And if there is an airstrip and special heating arrangements (external pipes) as in some of the arctic stations, the UHI might be even more marked than in non-arctic sites without these.

September 22, 2010 10:47 am

I co-authored a paper on just this subject several years back:
http://climate.gi.alaska.edu/ResearchProjects/pages/AKpaper2.html
The Urban Heat Island Effect at Fairbanks, Alaska
N. Magee1, J. Curtis2, and G. Wendler2
(1) The Pennsylvania State University, University Park, PA 16802
(2) Geophysical Institute, University of Alaska Fairbanks, Fairbanks AK 99775
Abstract
Using surface observation comparisons between Fairbanks and rurally situated Eielson Air Force Base in Interior Alaska, the growth of the Fairbanks heat island was studied for the time period 1949-1997. The climate records were examined to distinguish between a general warming trend and the changes due to an increasing heat island effect. Over the 49-year period, the population of Fairbanks grew by more than 500%, while the population of Eielson remained relatively constant. The mean annual heat island observed at the Fairbanks International Airport grew by 0.4°C, with the winter months experiencing a more significant value of 1.0°C. Primary focus was directed toward long-term heat island characterization based on season, wind speed, cloud cover, and time of day. In all cases, minimum temperatures were affected more than maxima and periods of calm or low wind speeds, winter clear sky conditions, and nighttime exhibited the largest heat island effects.
You can download a pdf version of the paper at the above link.

Al Tekhasski
September 22, 2010 10:51 am

Actually, this “sort of experiment” is ongoing everywhere, but not to the rigor Tim F. is suggesting. For example, in Texas there are several pairs of stations that are about 50 km apart, yet their 100-year trends are OPPOSITE. Mainstream climatology ignores this obvious sore spot by claiming “good spatial correlation” between stations in radii up to 1200 km. Yes, I checked the correlation function between the Ada and Pauls Valley stations, see
http://ourchangingclimate.wordpress.com/2010/05/21/scott-denning-to-iccc-heartland-%E2%80%98conference%E2%80%99-gathering-%E2%80%9Cbe-skeptical%E2%80%A6-be-very-skeptical%E2%80%9D/#comment-5761
It was about 0.77. However, what has escaped the attention of climate researchers (Hansen-Lebedeff) is that this correlation is made from high-amplitude inter-seasonal excursions, which are obviously well correlated, especially when stations are 50km apart. The parameter of main interest to us, the long-term trend, is however totally opposite. Similar pairs of stations with strictly opposite climatologically-long trends can be found everywhere; see more fun in the referenced thread.
Interestingly, this observation of an inconsistency between uniform forcing and opposite trends at nearby stations has not gained traction in the skeptical community either:
http://wattsupwiththat.com/2010/06/17/an-aggie-joke/#comment-414419
http://noconsensus.wordpress.com/2010/09/02/in-search-of-cooling-trends/#comment-35835
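Al's point, that a high correlation between a station pair can coexist with opposite long-term trends when the correlation is dominated by the shared seasonal cycle, is easy to illustrate with synthetic data. This is a hypothetical sketch, not the actual Ada/Pauls Valley records; it assumes NumPy:

```python
import numpy as np

# Two synthetic "nearby" stations: identical 15 C seasonal cycle, but
# opposite trends of about 6 C/century buried under weather noise.
rng = np.random.default_rng(0)
months = np.arange(100 * 12)                       # 100 years, monthly
seasonal = 15.0 * np.sin(2 * np.pi * months / 12)  # shared annual cycle

station_a = seasonal + 0.005 * months + rng.normal(0, 1, months.size)  # warming
station_b = seasonal - 0.005 * months + rng.normal(0, 1, months.size)  # cooling

r = np.corrcoef(station_a, station_b)[0, 1]
trend_a = np.polyfit(months, station_a, 1)[0] * 1200  # deg C per century
trend_b = np.polyfit(months, station_b, 1)[0] * 1200

print(f"r = {r:.2f}, trend A = {trend_a:+.1f}, trend B = {trend_b:+.1f} C/century")
```

The correlation comes out high because the seasonal swings dominate, exactly the situation described for the Hansen-Lebedeff correlations, yet the fitted trends point in opposite directions.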

Al Tekhasski
September 22, 2010 10:53 am

I have a reservation about the claim that the "UHI effect" can impose only an uptrend. There could also be a downtrend. It all depends on the location of a particular sensor relative to the center of the heat island. If the sensor is situated on the outskirts of the heat spot, the convective pattern of airflow will draw in surrounding air masses and bias the main wind pattern, likely causing some cooling trend if the heat island grows over historical time. So, without actual measurements from an oversampled grid of temperature sensors, all these "UHI corrections" are likely BS.

Rattus Norvegicus
September 22, 2010 10:58 am

A self-refuting post! All you have to do is look at the sat map to see that there is substantial high-latitude warming.

September 22, 2010 10:59 am

I had a quick look at Svalbard Luft data, comparing summer & winter anomaly.
http://www.vukcevic.talktalk.net/SL.htm
Conclusion:
– Most of the annual rise is due to the rise in winter temperatures.
– The summer anomaly is not the main contributing factor.
– The large winter oscillations are confirmation that AGW (CO2) cannot be the factor.
– The low summer-winter correlation again confirms it is not a systematic rise in warming.
– The Arctic's low ice levels are not due to a solar factor: summers, when the sun is the main factor, are more or less constant, while winters, when the sun is only a minor factor, show large oscillations.
– Low ice levels in recent years are not due to excessive melt in the summer, but to a failure of the winter build-up.
– Finally, it confirms my hypothesis that Arctic temps are controlled by the Arctic currents, since there is no alternative winter factor. For more details:
http://www.vukcevic.talktalk.net/NFC1.htm

AndyW
September 22, 2010 11:07 am

It seems the globe has been circumnavigated within the Northern Hemisphere this year for the first time ever.
What are the urban trends in the NE (Northeast) and NW passages?
Andy

September 22, 2010 11:10 am

The correlation of the AMO with the data is interesting. It suggests to me that the Gulf Stream is the primary mover of heat to the Arctic, and that when the Atlantic is cool, there isn't as much heat to move to the Arctic, so the Arctic becomes cooler (and vice versa).
Simple, yet potentially profound in undermining the AGW hypothesis. It may also explain why Antarctic ice volumes move independently of the Arctic, as the Gulf Stream would have no effect on the Antarctic.

nc
September 22, 2010 11:20 am

Read this about the Russians building floating nuclear power plants for arctic development. What will that do for UHI? http://www.bbc.co.uk/news/world-11381773

Stephan
September 22, 2010 11:25 am

Probably one of the best/most relevant posts on WUWT. Time for the people responsible for GISStemp to be replaced….

Nathan Schmidt
September 22, 2010 11:26 am

About the Cambridge Bay Environment Canada station: It is located at the airport, out of the frame (to the lower left) of the linked photograph. So, it’s away from the population centre, and also well away from any prop or jet wash, but there are other siting issues.
The station has been sited at at least two other locations over its lifetime, one of which was several kilometres to the east. The existing station is also located on a raised gravel pad a couple of metres thick, but the original station was on natural ground (tundra), introducing potential surface material / heat storage effects. Many northern stations appear to have been re-sited on these types of pads, but I haven’t had a chance to pursue any more details of when this may have occurred.
I’m not sure about practices at Cambridge Bay A, but at another northern station I’ve visited they seem to clear the snow from a similar pad, which could affect site temperatures during the spring snowmelt period (snow disappears earlier and low albedo gravel cover can absorb heat sooner).
There are some station photographs available on the web at http://commons.wikimedia.org/wiki/User:CambridgeBayWeather, and I also have some from a site visit several years ago.

AJB
September 22, 2010 11:30 am
Juho
September 22, 2010 11:38 am

Anthony, you have a station
Gmo Im.E. K. F (77.7 N, 104.3 E), Tamyr Peninsula, Russia
listed as both urban and rural. Which is it?

John F. Hultquist
September 22, 2010 11:41 am

AndyW says:
September 22, 2010 at 11:07 am
"Circumnavigation of the globe this year in the Northern Hemisphere for the first time ever it seems."
We used to say paper doesn't refuse ink. An update might be that the internet doesn't refuse electrons.
Modern ships and technology allow folks to do many things now that they would not have thought of, nor been able to do, not many years ago. It took Lewis and Clark a couple of years to cross to the Pacific and back, while today it is done in a day. What has "climate disruption" to do with any of this?

Steven mosher
September 22, 2010 11:51 am

Ed, you cannot compare two charts with different anomaly periods.
Let's just take a simple example.
You have one series that goes from year 1 to year 15. It looks like this:
1 1 1 2 2 2 3 3 3 4 4 4 5 5 5
Now pick the second three years (2 2 2) as the base and anomalize:
-1 -1 -1 0 0 0 1 1 1 2 2 2 3 3 3
SEE? 3, 3, 3 means that it's 3 degrees warmer than the BASE period.
Now take a second series, 9 years long, covering the last nine years of the first
period. And let's pick a series that matches the FIRST perfectly:
3 3 3 4 4 4 5 5 5
Now let's anomalize using its first three years:
0 0 0 1 1 1 2 2 2
GOSH, the second site didn't warm as much as the first!
Now let's do it the right way:
1 1 1 2 2 2 3 3 3 4 4 4 5 5 5
*********3 3 3 4 4 4 5 5 5
And anomalize both over the COMMON PERIOD (years 7-15, whose mean is 4):
-3 -3 -3 -2 -2 -2 -1 -1 -1 0 0 0 1 1 1
************* -1 -1 -1 0 0 0 1 1 1
That’s just basic. Thats why we use anomalies. I’ll finish reading, but its not off to a good start. You cannot compare two charts with different base periods. ANOMALY means anomaly from a period in time. different period different base, misleading answer

Ed Caryl
September 22, 2010 11:53 am

Rattus,
“self refuting”? Read the article again.
Yes, the high arctic has warmed in the satellite era. That only extends back to 1979, the beginning of the latest AMO upswing. In a few years that map will show cooling as the AMO swings back to cooling.

P Gosselin
September 22, 2010 11:53 am

Ratticus,
Why wouldn’t it show warming since 1979? Of course it has warmed in the Arctic since 1979 – that’s what the AMO shows too.

J. Knight
September 22, 2010 11:55 am

Al, Ada and Pauls Valley are in Oklahoma, not Texas.

John F. Hultquist
September 22, 2010 11:59 am

In all the posts and comments regarding the temperature of the northern Atlantic Ocean, I do not think I have ever seen mention of the contribution of the Mediterranean Sea outflow to the characteristics of the North Atlantic. But warm salty water does flow out and, partly as a result of the Coriolis force, seems mostly to curl north. Below is a link to a 1979 paper that has been available online since April 2003. [Note: An isopycnal is a surface of constant potential density of water.]
From the Abstract:
Title: On the contribution of the Mediterranean Sea outflow to the Norwegian-Greenland Sea — Joseph L. Reid
"The high salinity of the Mediterranean outflow extends along this isopycnal and contributes substantially to the salinity of the water passing northward into the Norwegian-Greenland Sea. It has been supposed previously that it is the upper waters of the Northeastern Atlantic Ocean that pass through this channel and contribute the high salinity of the Norwegian Current. From examination of the temperature, salinity, and oxygen of the various layers, it appears likely to be the Mediterranean core that contributes the characteristics of the Norwegian Current, . . ."
http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B757K-48BD1XV-5J&_user=10&_coverDate=11/30/1979&_rdoc=1&_fmt=high&_orig=search&_origin=search&_sort=d&_docanchor=&view=c&_searchStrId=1470375267&_rerunOrigin=google&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=a437f68612067fcfa690f35057a01f1e&searchtype=a

Evan Jones
Editor
September 22, 2010 12:09 pm

So, without actual measurements from an oversampled grid of temperature sensors, all these "UHI corrections" are likely BS.
Well, I did just that, using the entire USHCN network, looking at trends for raw data from 1979 – 2008, splitting it into warming (1979 – 98) and cooling (1998 – 2008) trends. Urban/suburban/rural designations according to NASA.
I expected an accelerated urban warming trend (during the warming trend), followed by an accelerated cooling (during the cooling trend).
Instead, what I found was:
1.) During the 1979 – 1998 period, urban warmed faster than suburban warmed faster than rural. (As expected.)
2.) During the 1998 – 2008 period, urban cooled slower than suburban cooled slower than rural. (Not as expected.)
Conclusion: Urban Heat Island effect is Worse Than We Thought (WTWT).
And definitely “not BS”.
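The kind of comparison Evan describes can be sketched as follows. This is synthetic data, not the real USHCN record; the class names and the per-class trend values are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1979, 2009)  # annual values, 1979-2008

def trend(temps, yrs, lo, hi):
    """OLS slope over the sub-period [lo, hi], in deg C per decade."""
    m = (yrs >= lo) & (yrs <= hi)
    return np.polyfit(yrs[m], temps[m], 1)[0] * 10

# Fake network: 30 stations per class, each with an assumed class-dependent trend.
for name, slope_per_decade in [("urban", 0.3), ("suburban", 0.2), ("rural", 0.1)]:
    stations = [slope_per_decade / 10 * (years - 1979) + rng.normal(0, 0.3, years.size)
                for _ in range(30)]
    warm = np.mean([trend(s, years, 1979, 1998) for s in stations])
    cool = np.mean([trend(s, years, 1998, 2008) for s in stations])
    print(f"{name:9s} 1979-98: {warm:+.2f}   1998-2008: {cool:+.2f}  C/decade")
```

With the real station data in place of the synthetic series, the two sub-period averages per class are what would reveal the warming/cooling asymmetry Evan reports.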

Phil Nizialek
September 22, 2010 12:11 pm

I’m not knowledgeable enough to assess this article from a statistical standpoint, but I agree that the “official” temperature data sets appear to be deeply flawed. Still, two aspects of this assessment leave me troubled. First, the satellite measurements indicate Arctic warming and correspond to a period of arctic ice decline. Secondly, I continue to see persistent, and apparently unrefuted, studies of Arctic tundra thaw during the same or similar periods. It seems that evidence of GISSTEMP measurement flaws throw no light on these real world observations of arctic temperature increase.
I realize that if there is a measurable increase in arctic temps over the past 30 years (and I think there is much evidence for this) that evidence is not necessarily the result of CO2 induced warming. Still, it seems a bit disingenuous to posit that what appears to be a very real increase is merely a measurement artifact when there is so much evidence to the contrary.

Steven mosher
September 22, 2010 12:17 pm

Your chart of the average of all isolated stations (the analysis segregating the stations was very weak) is done in absolute temperature. You can't do that unless all stations have data for the entire period; there has to be perfect overlap. Otherwise, if a cooler or warmer station drops out, you get spurious results.
Also, your regression done from the start of the record is misleading. Check out your hockey stick at the end. That, of course, is partially due to the "purple station":
see, it's the coldest. See, it's ONLY in the average for a short time in the middle.
THAT depresses your average. This is why we use anomalies.
There's more: the segregation of urban and rural must be done OBJECTIVELY, with criteria that others can apply. So nightlights: pixel on/pixel off. You can check my results. I looked at your pictures (some were not as described) and I could not tell what criteria you used. Not repeatable; subjective.
Finally, AMO: the last years look ominous for any kind of correlation analysis.
So, what you need to do with this nice start is the following:
1. Fix the math errors. You cannot compare images with different base periods;
use anomalies. Do not fit regression lines to time series with KNOWN
autocorrelation without correcting for it. Estimate the uncertainty of the slopes.
Don't ignore regime changes (around the mid-70s).
2. Establish objective criteria (like the CRN criteria, or nightlights, or building height, or population) BEFORE you do your categorization. The criteria have to be
objective: someone using only your criteria has to be able to duplicate your categorization.
3. When you argue that two time series are correlated, actually do the correlation study; don't eyeball it.
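The distortion from a cold station that reports only mid-record, as described above, is easy to reproduce. A toy sketch, assuming NumPy:

```python
import numpy as np

years = np.arange(1900, 2001)
warm = np.full(years.size, 10.0)        # flat at +10 C, reports the whole century
cold = np.full(years.size, np.nan)
cold[40:60] = -20.0                     # flat at -20 C, reports only 1940-1959

# Averaging ABSOLUTE temperatures over whatever stations happen to report:
abs_mean = np.nanmean(np.vstack([warm, cold]), axis=0)

# Neither station has any trend, yet the network mean dips by 15 C mid-record.
print(abs_mean[0], abs_mean[50], abs_mean[-1])  # 10.0 -5.0 10.0
```

Working in anomalies, each station referenced to its own base-period mean, removes the offset entirely, which is the point of item 1 above.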

MarkA
September 22, 2010 12:21 pm

I have looked at a composite of the 22 first order sites in Alaska (mostly airports) and have found no warming over the past 33 years.
Contrast this with the New York Times story from 2002 which told us Alaska has warmed 5-10 degrees since the 1970s.

In Alaska, rising temperatures, whether caused by greenhouse gas emissions or nature in a prolonged mood swing, are not a topic of debate or an abstraction. Mean temperatures have risen by 5 degrees in summer and 10 degrees in winter since the 1970’s, federal officials say.

Steven mosher
September 22, 2010 12:24 pm

Hmm, looks like my post on your mistake comparing GISS 1933-1963 with the satellite got eaten.
Basically, you cannot compare two charts with different anomaly periods. You will get the wrong answer.
The anomaly process sets the base period to ZERO. So by choosing 1933-1963 (31 years) you set that period to ZERO; your current figures are then the increase since that period. The satellite record has a different base.
If you set 1933-1963 to 0, then look at today, suppose you found today was +1.
That's +1 OVER the average of 1933-63. In the satellite chart the base period is the current period. So, OF COURSE, it will be lower, by definition.
Too many mistakes in this.

Steven mosher
September 22, 2010 12:26 pm

Rattus Norvegicus says:
September 22, 2010 at 10:58 am
A self refuting post! All you have to do look at the sat map to see that there is substantial high latitude warming.
Worse than that, Rattus: they are different base periods, which means the sat map, which shows warming, will show less warming BY DEFINITION, i.e. different base periods.
Sheesh.

Ed Caryl
September 22, 2010 12:33 pm

“Juho says:
September 22, 2010 at 11:38 am
Anthony, you have a station
Gmo Im.E. K. F (77.7 N, 104.3 E), Tamyr Peninsula, Russia
listed as both urban and rural. Which is it?"
No, the urban one is Gmo Im. E.T., a different station.

Al Tekhasski
September 22, 2010 12:40 pm

evanmjones wrote: “Well, I did just that, using the entire USHCN network”
Well, I believe I said "from an _oversampled_ grid". The fact that many pairs of stations have opposite climatological trends while being just 50km apart strongly suggests that the Nyquist-Shannon-Kotelnikov sampling conditions are not satisfied. Therefore the existing grid is _undersampled_ (by my estimation by at least 100X), and no re-analysis of the entire (or partial) existing network can fix this fundamental deficiency. This is what physics says, sorry.
Also, my post clearly implies that you have to differentiate suburban stations with regard to their actual location with respect to heat island and prevailing wind direction before making any determination and conclusions. I don’t see this information in the station descriptions.
So, my conclusion is: UHI is MWTWT (= MUCH Worse Than We Thought). And please note, I did not say "UHI is BS", I said "UHI corrections" likely are, with "likely" being on the very certain side 🙂

Navy Bob
September 22, 2010 1:05 pm

Fortunately, incandescent light bulbs will be a federal crime in the U.S. starting in 2012, thanks to the 2007 energy bill. So cozy warm Stevenson screen interiors, at least in Alaska, will soon be a thing of the past. Let's hope less progressive Arctic nations will follow in our reduced-carbon footprints. Who says there's nothing we can do about global warming?

Honest ABE
September 22, 2010 1:23 pm

Yeah, measurement error has always been a big concern. Couple that with arrogance and an activist mindset and you get the “science” of global warming.

Steven mosher
September 22, 2010 1:30 pm

Well, well, well.
Look what I found at NOAA:
ftp://ftp.ncdc.noaa.gov/pub/download/reliability-of-us-temp-record-central-region-meeting.ppt
Hmm, it's got today's date. I thought I saw this before.

P Gosselin
September 22, 2010 1:33 pm

Hey Mosh,
It isn’t like he was going to submit this to Nature. The essay is exploratory and is meant as to shine light on the issues of Arctic stations, and it brings up a host of legitimate points and observations that need to be looked into. It’s a launching pad.
Perhaps the professional way of doing this would be to send an e-mail, and not barge and shoot off as Mr. Peer Reviewer.
The real analysis is still a ways down the road.

Billy Liar
September 22, 2010 1:42 pm

Rattus Norvegicus says:
September 22, 2010 at 10:58 am
A self refuting post! All you have to do look at the sat map to see that there is substantial high latitude warming.
A self refuting comment!
All you have to do is look at the sat map to see that it is for 2009 only.

Al Tekhasski
September 22, 2010 1:46 pm

J. Knight says: “Al, Ada and Pauls Valley are in Oklahoma, not Texas.”
I guess, being a proud resident of the second largest state in the US, I must be considering all adjacent territories as Texas as well. My bad. 🙂 But I also have examples of opposite trends near the Canadian border, from when I tried to reproduce the Hansen-Lebedeff correlations.

Editor
September 22, 2010 1:55 pm

Ed,
Prof. Ole Humlum’s site http://www.climate4you.com/ has a page specifically on UHI and has a series of studies on Longyearbyen (Svalbard) that is worth looking at.
It shows very well the effect of proximity to the open ocean warming the temperatures in Winter, as well as the effect of weather, town and airport.

Ed Caryl
September 22, 2010 2:01 pm

Steven Mosher
Yes, the base periods are different. No argument. But they are analogous. There is no satellite data before 1979, as you know. So I looked for an earlier period where the surface temperatures were similar. Note that the cooling spots on both maps are quite similar.
The average for the isolated stations was done by first normalizing each data set to the average temperature for all. This makes the “ends” where not all stations have data, a bit “noisy”, but it allows averaging without distortion.

Roger Knights
September 22, 2010 2:01 pm

Phil Nizialek says:
September 22, 2010 at 12:11 pm
Secondly, I continue to see persistent, and apparently unrefuted, studies of Arctic tundra thaw during the same or similar periods.

I’ve read here on WUWT a couple of times — but I can’t remember the places — that soot has caused snow to melt, removing its insulating effect and exposing the tundra to the sun. FWIW, here are a couple of WUWT-post extracts about the effect of soot on Arctic ice:

Soot influence since 1979ā€Øhttp://www.nasa.gov/topics/earth/features/warming_aerosols_prt.htm
Jeremy (18:29:01) :
Springer (16:46:32) :ā€ØI think Pielke is right on the money in this article:
http://wattsupwiththat.com/2009/08/21/soot-and-the-arctic-ice-%E2%80%93-a-win-win-policy-based-on-chinese-coal-fired-power-plants%E2%80%9D/
No source of black soot can make it to the south pole which handily explains why the antarctic interior isnā€™t warming.

Billy Liar
September 22, 2010 2:02 pm

Steven mosher says:
September 22, 2010 at 12:17 pm
So, what you need to do with this nice start is the following: …
I’m sure Ed Caryl will be very grateful for your lecture.

P Gosselin
September 22, 2010 2:04 pm

Satellites do not go back to 1933. Ed didn't use the 1951-1980 GISS baseline because that was a cool AMO phase. So he took the 1933-63 (warm AMO) period as a rough comparison, to show that GISS probably runs too warm.
Of course the 1979-present satellite record shows it's warmer; that's right in line with the AMO trend. Who is disputing that?
Then he compared a set of rural stations with a set of “urban” stations and found there’s a difference. The methodology of selection is very important at later stages, but not so much when one is only in the process of forming a hypothesis – rough selection is where one starts out.
I don’t know anyone who forms a preliminary hypothesis using exact methodology. The first step is simple observation, which is what Ed has done. Are we not free to observe, and develop a hypothesis? And then look more closely, (which would be the next step)?

George E. Smith
September 22, 2010 2:08 pm

Let me guess,
Judging by the apparent color temperature (as rendered on my screen), I presume that the Stevenson Screen "Owl Box" contains the same kind of 2800 Kelvin long-wave infrared radiation source that is used in those lab and 4-H club demonstrations of how CO2 heats the atmosphere.
But why don’t they place it outside the box, so that it can warm the local atmosphere, instead of the thermocouple inside the Owl Box ?

Steven mosher
September 22, 2010 2:15 pm

P Gosselin says:
September 22, 2010 at 1:33 pm
Hey Mosh,
It isn't like he was going to submit this to Nature. The essay is exploratory and is meant to shine light on the issues of Arctic stations, and it brings up a host of legitimate points and observations that need to be looked into. It's a launching pad.
Perhaps the professional way of doing this would be to send an e-mail, and not barge in and shoot off as Mr. Peer Reviewer.
The real analysis is still a ways down the road.
##############
P.
Where possible I always choose to make my comments in public. All knowledge is a work in progress, and I don't hold prior mistakes against myself or against others. A mistake is just a mistake; everybody makes them. It's only when we pass into the realm of politics and religion and love affairs that people seem to want to hold onto them. I prefer things open.
In any case, I think the approach of looking at sites has merit.
HOWEVER, the method must be sound. There is a little tidbit I got from an Oke paper that might change the way people look at urban/rural:
basically, UHI is modulated by the surrounding rural environment.
Like so: city A has rural surroundings B; city X has rural surroundings ~B.
The UHI you see in A will differ from the UHI you see in X, because the UHI effect
is driven by the urban landscape AND the specific characteristics of the rural landscape
surrounding it.
Interested folks probably know the Oke paper I am talking about.

P Gosselin
September 22, 2010 2:26 pm

My last comment was directed at the Mr. Peer Reviewer wannabe.
It's not like Ed is funded, or has a scientific laboratory equipped with a super mainframe to get the right answer right off the bat. I think that, using just a PC and a lot of time on the internet, he did a pretty impressive job getting this started. That merits a lot of credit. I hope other readers here will not be deterred from investigating and forming hypotheses of their own.
I guess there are people out there who will always dump.

Steven mosher
September 22, 2010 2:27 pm

Ed Caryl says:
September 22, 2010 at 2:01 pm
Steven Mosher
Yes, the base periods are different. No argument. But they are analogous. There is no satellite data before 1979, as you know. So I looked for an earlier period where the surface temperatures were similar. Note that the cooling spots on both maps are quite similar.
###############
The point is, if a climate scientist tried to argue that way ("analogous") we would be all over them for being imprecise. Point being, you have to be more rigorous than those you criticize. Anyway, now you would have to show that they are in fact "similar": how similar? What is the RMS? Etc.
"The average for the isolated stations was done by first normalizing each data set to the average temperature for all. This makes the "ends", where not all stations have data, a bit "noisy", but it allows averaging without distortion."
To prove there is no distortion
you should take the anomaly approach. Further, you regressed a line through points
with different variances. I'm assuming your annual regression was done on the average for each year; you don't say. But if you have "noisy" ends and you suppress that noise by doing an average and then regress on the average, you are not exactly showing the trend in the data; you are fitting a line to an average yearly figure.

George E. Smith
September 22, 2010 2:28 pm

“”” Maud Kipz says:
September 22, 2010 at 9:08 am
the R^2 value (the statistical significance for the trend) is very low, 0.023
This betrays a fundamental misunderstanding of statistics. The R^2 value is a measure of linear dependence between two (random) variables, and nothing more. A trend with extremely high significance may still have a low R^2 if the trend is non-linear or if the observations are noisy. “””
All of which is very nice, as are the computations of any other (completely fictional) branch of mathematics.
But NONE of it establishes ANY physical cause-and-effect relationship.
You can carry out the very same statistical analysis on the numbers in your local telephone directory and derive the same quantities, and it still means nothing, unless the average telephone number in the book happens to be your telephone number.
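For what it's worth, the statistical point in the quoted passage, that a tiny R^2 can coexist with an overwhelmingly significant trend, is easy to verify numerically. A sketch with synthetic numbers, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3000
x = np.arange(n, dtype=float)
y = 0.001 * x + rng.normal(0, 5, n)      # weak true trend buried in noise

slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
r2 = 1.0 - resid.var() / y.var()                               # explained variance
se = np.sqrt(resid.var(ddof=2) / ((x - x.mean()) ** 2).sum())  # std error of slope
t = slope / se                                                 # t-statistic

print(f"R^2 = {r2:.3f}, t = {t:.1f}")  # R^2 is tiny, yet t is far from zero
```

In this synthetic case the R^2 comes out on the same order as the 0.023 quoted for the isolated-station trend, while the slope is many standard errors away from zero; R^2 measures explained variance, not significance.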

P Gosselin
September 22, 2010 2:34 pm

Steve Mosher 2.15
Yes, of course you're right. That's the next step. Ed's work should be viewed as a first step in the process. So admit it: his work merits a lot of credit, and it's a good start.
Of course, the next step is to do it scientifically. Can you fund him?

Evan Jones
Editor
September 22, 2010 2:34 pm

Therefore the existing grid is _undersampled_ (by my estimations at least 100X)
So we would need over 120,000 stations? That’s a lot of stations.

September 22, 2010 2:45 pm

Ed Caryl,
Appreciate your post. It was readable and logically constructed.
Thanks for helping my longish continuing education.
John

Steven mosher
September 22, 2010 2:56 pm

Ed
If you want to compare apples to apples, you need to set the GISS base period to
1979-2009.
Then compare with UAH.
The GISS average is .25C; that is, 2009 is .25C warmer than the average of all years from 1979-2009. UAH looks to match quite well if you do the analysis correctly.
Your Alaska hotspot vanishes.
The cool spot in Siberia matches.
The cool spot in Canada matches.
The hot spot in Africa matches.
The warm Antarctic matches.
The cool patch south of South America matches.
And you have to be careful because you are looking at TWO different measures:
SST+land
versus
tropospheric temperatures.
################
But still, you do see that the spatial field displays coherence (which is why people who think undersampling is an issue don't get it; that's been studied to death. In the US, you need 600 stations to capture a climate trend of .05C per decade). Folks can look up the station density studies conducted to size the CRN and the new modernized network if they think otherwise. No links; go look for yourself.
http://data.giss.nasa.gov/cgi-bin/gistemp/do_nmap.py?year_last=2010&month_last=8&sat=4&sst=1&type=anoms&mean_gen=0112&year1=2009&year2=2009&base1=1979&base2=2009&radius=1200&pol=reg

Steven mosher
September 22, 2010 3:06 pm

P>
“I donā€™t know anyone who forms a preliminary hypothesis using exact methodology. The first step is simple observation, which is what Ed has done. Are we not free to observe, and develop a hypothesis? And then look more closely, (which would be the next step)?”
The problem is, it's misleading. If you do the analysis the correct way, you would put GISS on the SAME base period as the satellite, 1979-2009. Then you would look at the two maps for 2009.
When you do this, as I just posted, you see there is no problem to explain in 2009.
Now, the UHI analysis has issues of its own, mathematical and methodological.
So does the AMO analysis.
I don't buy everything that comes out of climate science unless I can check it myself, and unless the analysis makes sense and is done in a clear, replicable way. Why would I suddenly change that ethic just because I happen to agree with the conclusions?
Is UHI a problem? Yup. Is this the best way to show it? Nope. Will we get better understandings by criticizing each other's work? Yep, unless people insist I wear kid gloves, which I won't, so don't bother asking. For me this is just about numbers: no hard feelings and no pulling of punches, even for friends or people I agree with. Sorry.

Glenn
September 22, 2010 3:51 pm

The Stevenson screen is shown in the picture on this website for Alert, Canada:
http://gaw.empa.ch/gawsis/reports.asp?StationID=61
http://gaw.empa.ch/gawsis/images/sites/61.jpg
But apparently the station operating from 1985 to present used thermistors:
http://gaw.empa.ch/gawsis/reports.asp
The Meteorological Service of Canada sponsors the temperature measurements; I don't know if GISS incorporates them.

Ed Caryl
September 22, 2010 3:57 pm

Steven mosher
If you look at the Isfjord Radio discussion in the article you will see an anomaly map using the base period 1989 – 2008. I could do the one you suggest, but I can’t post it here.

September 22, 2010 4:45 pm

Ed
Have you seen these YouTube videos of Russian temperature records, given at Heartland, which go some way towards quantifying UHI by comparing temperature rises for different population densities?

http://www.youtube.com/watch?v=Dfew9lgzz5o
One point these films make is how well just four temperature records serve as a proxy for all 400+ from the whole of Russia.

Al Tekhasski
September 22, 2010 4:45 pm

evanmjones wrote: “So we would need over 120,000 stations? Thatā€™s a lot of stations.”
Sure it is. But I am afraid you might need more. The example of stations 50km apart having opposite long-term trends means that we don't know what the trend is in between, or what it is elsewhere in the same proximity. It means that the sampling grid must be finer than half of 50km, i.e. 25km at most. Given the Earth's surface of 5.1E+8 km2, a 25x25km grid would need about 800,000 equally spaced stations to determine the average trend. Of course, you could try this oversampling on smaller regional areas, and assume that the oceans are more uniform (although we don't know this either). But for land alone you need about 220,000 stations to begin with.
More: we still have no idea whether 25x25km is enough to capture the complexities of local micro-climates, so be prepared for another halving of the scale, which would quadruple the number of necessary stations. Without such a uniform sampling grid of data it is not serious to discuss any mathematics of subsets. This is what physical science says. Sorry.
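Al's arithmetic checks out under his stated assumptions (one station per 25 km cell; the land-area figure below is my assumption, not his):

```python
# Back-of-envelope station counts for a uniform 25 km x 25 km sampling grid.
EARTH_SURFACE_KM2 = 5.1e8
LAND_SURFACE_KM2 = 1.49e8      # assumed total land area
CELL_KM2 = 25 * 25             # one station per cell

global_stations = round(EARTH_SURFACE_KM2 / CELL_KM2)
land_stations = round(LAND_SURFACE_KM2 / CELL_KM2)
print(global_stations, land_stations)  # 816000 238400
```

Halving the grid spacing again, as the comment warns, multiplies both counts by four.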

sky
September 22, 2010 5:48 pm

That different stations show very different temperature "trends" is a data inconsistency not confined to the Arctic. But in very cold climates, where thermal energy variations translate into much larger temperature variations than in the tropics, the problem shows up in spades. Anomalization of the data cannot begin to cure the corruptive effects of man-made heat sources, station moves, etc., or accurately compensate for uncertain datum levels in relatively short, often fragmented records. While the desire to determine the trend is understandable, the results obtained from fitting a linear trend to a few decades of the average "anomaly" should not be mistaken for a secular trend in any region.
Having done extensive determinations of the coherence between long records at neighboring stations throughout the globe, I don't buy the "spatial coherence" shown by GISS's global maps. It is largely an artifact of the 1200km smoothing radius. Until we get a better grip on both the spatial and temporal variability of climate from really long, uncorrupted records, no one can prescribe the "correct" method for determining the regional "trend."
BTW, the island of Jan Mayen isn’t a part of Canada, but belongs to Norway.

September 22, 2010 6:34 pm

Steven mosher, Ed Caryl, Rattus Norvegicus, Billy Liar, P Gosselin: Here's a gif animation with four north polar stereographic maps. You can download them through the KNMI Climate Explorer. The datasets are GISS LOTI (1200km smoothing), GISS with 250km smoothing, RSS TLT, and UAH TLT. The base years for all are 1979-80, the beginning of the satellite period. And the anomalies are for calendar year 2009. In effect, the four maps are showing the rises in surface temperature and TLT anomalies since 1979-80.
Do these maps confirm or disagree with your earlier comments?
http://i55.tinypic.com/2eq7sy1.jpg

Ben D.
September 22, 2010 6:34 pm

I will not say that the anomaly approach is incorrect, but there are issues with it as well. As far as I can tell, it's the best method known right now, but it is not perfect. To argue that it's the end-all ignores the issues that it also brings up.
To clarify, anomalies are referenced against a baseline average, and if that baseline is wrong for an area, it rather defeats the purpose of using the method in the first place. To put it simply, you can use this approach on the data above and it will probably change what you see, simply because of the transformation of the data. But that raises the very question this study/hypothesis/article brings up: namely, we see issues in the NON-ADJUSTED data based on hypothesized UHI effects. I think there are a number of issues brought up here, and this paper could go in about four different directions, but I think saying that anomalies must be used is wrong in itself.
Does this approach agree with reality? (Let's do a reality check.) We are not talking about a textbook case in statistics; this is a complex system, and there may be other methods that could out-perform the anomaly system. But without seeing any that do, I must agree it's the best system we know of for normalizing temperature data.
But I must also interject here and say one thing: if everyone uses the same method and that is ALL they use, how would we know this method is actually "correct"? I might be playing devil's advocate there, but at some point I will take a shot at adjusting the data myself, and the first thing I would do is NOT use the anomaly system.
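For readers following the debate over "the anomaly method" in this thread, here is a minimal sketch of the usual per-station baseline approach; the station series below is made up purely for illustration:

```python
import numpy as np

# Minimal sketch of the standard anomaly method: each station is
# referenced to its OWN baseline climatology. Station data are made up.
years = np.arange(1951, 1981)
rng = np.random.default_rng(0)
station = 15.0 + 0.02 * (years - 1951) + rng.normal(0.0, 0.5, years.size)

# Baseline climatology: the station's own 1951-1980 mean.
baseline_mask = (years >= 1951) & (years <= 1980)
baseline = station[baseline_mask].mean()

# The anomaly removes the station's absolute level (altitude, latitude,
# siting) while preserving its variations, so dissimilar stations can
# be averaged together.
anomaly = station - baseline

print(abs(anomaly[baseline_mask].mean()) < 1e-9)  # prints True
```

The point of contention in the thread is not this mechanism itself but whether it can compensate for station-specific contamination such as UHI, which shifts the trend rather than the level.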

u.k.(us)
September 22, 2010 7:29 pm

Ed Caryl,
Nice post.
It clearly shows, as you note:
“The temperature as measured at stations isolated from any UHI is simply tracking the AMO.
Looks like an awfully good fit. There is very little, if any, global warming. We need to wait until the bottom of the next AMO cycle to get a decent reading of global temperature change. That will be in about 2050 if the AMO cycles as it has since 1850.”
========
Now, let’s figure out the larger and longer term cycles, this cycle is embedded in.
Or, let’s figure out the shorter term cycles, affecting this cycle.
Just some suggestions, to add to your fun šŸ™‚
I’ll say it again, nice post.

Pamela Gray
September 22, 2010 7:42 pm

AndyW, only young wet-behind-the-ears whipper snappers would say that something has never been done before.

Glenn
September 22, 2010 9:06 pm

Navy Bob says:
September 22, 2010 at 1:05 pm
"Fortunately, incandescent light bulbs will be a federal crime in the U.S. starting in 2012, thanks to the 2007 energy bill. So cozy warm Stevenson screen interiors – at least in Alaska – will soon be a thing of the past. Let's hope less progressive Arctic nations will follow in our reduced-carbon footprints. Who says there's nothing we can do about global warming?"
Nah, incandescents for Stevenson screens are exempt. Scientific use and all, you understand.

September 22, 2010 9:11 pm

Mr. Mosher, you are highly critical of Mr. Caryl’s work in this post, and I take exception to that. What you fail to realize is that a plot of temperatures over time tells the story. Mr. Caryl’s graph labeled “Urban Stations” shows a definite upward trend in temperatures over the time period. In stark contrast, the graph labeled “Isolated Stations” shows zero or very little upward trend.
Mr. Caryl’s point is valid. UHI in the Arctic stations caused the upward temperature trends. No amount of fiddling with anomalies or gridding or other statistical methodology will invalidate that basic fact.
For those of us who work with vast amounts of data from manufacturing, often from different facilities around the world, this is nothing new. I would never perform the statistics you claim must be performed, simply to achieve an anomaly. If one temperature trend has a different average value than another, one can simply overlay the trend lines. My previous work was with data from oil refineries, usually several hundred refineries world-wide, each with thousands of data points for one year. The multi-refinery studies were conducted annually for several years, then every other year thereafter.
Kudos to you, Mr. Caryl. It will be quite interesting to watch the so-called experts try to manipulate the obvious in a manner that defends their status quo. This is devastating evidence that CO2 is innocent.

jose
September 22, 2010 9:23 pm

"Looks like an awfully good fit. There is very little, if any, global warming."
Thanks for the chuckle. "Awfully good fit" isn't very precise; perhaps you could supply us with something more meaningful (you seem to preferentially report R^2 values). And just because the mean annual temperature of your selected Arctic stations "fits" the – wait for it – AMO means no global warming? Wow. I bet the "urban" Arctic stations (if there truly is such a thing) also fit the AMO. But you didn't show that.
The sad part is this: amplified Arctic warming is a completely predictable phenomenon (check the models!) associated with increased greenhouse gas concentrations, and it's a function of decreasing sea-ice extent (less sea ice means warmer air temperatures). Yet nobody around here apparently trusts either the observations OR the models, which leads me to believe that it is close to hopeless trying to convince anyone otherwise. There is no UHI in the Arctic, and it's warming, and it's because of us.

September 22, 2010 9:36 pm

jose says:
"There is no UHI in the Arctic, and it's warming, and it's because of us."
So why is the Antarctic rapidly gaining ice?

jose
September 22, 2010 10:22 pm

Smokey: Way to change the topic. But since you asked, the simple answer is "it's complicated". The longer answer is: (1) stronger circumpolar circulation opens up more polynyas, leading to more ice growth (paper here); (2) warmer air temperatures produce warmer waters, increased precipitation, and a freshening (i.e. decreased salinity) of the surface waters. This freshening causes increased stratification in the southern oceans, and a corresponding decrease in the amount of heat transport as convection is suppressed. Less heat transport results in increased ice (paper here).

david
September 22, 2010 11:46 pm

jose says:
"There is no UHI in the Arctic, and it's warming, and it's because of us."
Well that settles it: no UHI in the Arctic, a consensus of one, the science is settled.

Steven mosher
September 23, 2010 12:34 am

Roger Sowell says:
September 22, 2010 at 9:11 pm
Mr. Mosher, you are highly critical of Mr. Caryl's work in this post, and I take exception to that.
##################
did you take exception when I was highly critical of Mann or Jones?
did you take exception when a bunch of us spent a couple of weeks on CA taking apart Parker's paper? or how about spending nearly 6 months now working through all the warts in GHCN? do you take exception to the last 2 weeks I've spent trying to put together a package so that people can look at ICOADS data in its raw form so they can rip that apart? How about the last 8 hours tying in 185 data descriptions so that other people can work on data to find errors?
"What you fail to realize is that a plot of temperatures over time tells the story. Mr. Caryl's graph labeled "Urban Stations" shows a definite upward trend in temperatures over the time period. In stark contrast, the graph labeled "Isolated Stations" shows zero or very little upward trend."
What you fail to realize is that there is no objective criterion for separating or categorizing those two "classes". Frankly, when I looked at the pictures I thought he got the categories backwards in some cases. In some cases there was NO EVIDENCE that the site fit the category. So, I question the very premise, namely
"here is a set of isolated stations" and "here is a set of urban stations".
Look. The biggest problem in UHI studies, the biggest FLAW in studies that say there is NO UHI, is lousy metadata. I've been saying that for 2-3 years. So, when I see sloppy, non-repeatable categorization I am going to say it. So, when I found the CRN guide on NOAA's site and pointed Anthony at it, the idea was this:
here is an objective standard, something you can state as "10 feet away" or "no trees within X feet". Ed's criteria are... well, nobody knows. If I gave you the pile of pictures and asked you to sort them, chances are you can't. To be usable, the criteria must be stated in advance. They must be based in the physics of UHI, and they must be objective. Not "look at how the town blends into the airport". Not "look at the trucks close to the Stevenson screen". THAT is not the kind of evidence that impresses me, or is applicable to other sites. Further, the mathematical methods were not robust or even correct. Sorry, they are not.
"Mr. Caryl's point is valid. UHI in the Arctic stations caused the upward temperature trends. No amount of fiddling with anomalies or gridding or other statistical methodology will invalidate that basic fact."
Point number 1: Ed said that he subtracted the entire average from each series. That is a form of doing an anomaly. Point number 2: he averaged stations together. That is a form of gridding. It's called unweighted gridding.
So, I guess you reject his work now. Please. You are out of your depth. It's OK that you don't understand. Keep quiet and no one will be the wiser.
"For those of us who work with vast amounts of data from manufacturing, often from different facilities around the world, this is nothing new. I would never perform the statistics you claim must be performed, simply to achieve an anomaly."
Sorry bud. "My experience with manufacturing data is way vaster than yours and more important." See how silly that sounds? I suggest you not try the game of who has worked with more data from more places for a longer period of time. The fact that you don't realize that an aggregate average is a form of gridding, and that Ed actually defended his work by saying he did standardize by subtracting the global mean, shows me that I should not trust what you say about your skills.
"If one temperature trend has a different average value than another, one can simply overlay the trend lines. My previous work was with data from oil refineries, usually several hundred refineries world-wide, each with thousands of data points for one year. The multi-refinery studies were conducted annually for several years, then every other year thereafter."
That is probably WHY you missed the point I made about the noisy end points he discussed in his methods, and probably WHY you missed the point about autocorrelation and the point about changing variance. I'll do a little toy problem to show you the problem.
Pipe A 10 10 10 10 10 10 10 10 10 10 10
Pipe B 5 5 5 5 5
Pipe C 1 1 1 1 1
Now take the average at each time step and fit a regression line to the averaged series. See the problem?
Now, it's not this obvious in Ed's case, but if you have sparse data, missing data,
you have to take care. It's not at ALL like the problems of hundreds of pipes with thousands of data points. AS ED NOTED, the series are noisy at the ends. That is what you get with sparse data. And guess what happens to a regression line when end points are noisy.
Now go ahead and apply anomaly methods to the problem above. See how you get the right answer? Of course you do. Do you know why you use anomaly methods? Well, if you have hundreds of pipes and thousands of data points you don't have to. Temperature series ain't like oil in pipes. But nice try.
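The toy pipe problem above takes only a few lines to reproduce; NumPy is used here purely for the regression fit:

```python
import numpy as np

# Mosher's toy problem: three flat series of different lengths and levels.
pipe_a = [10.0] * 11
pipe_b = [5.0] * 5
pipe_c = [1.0] * 5
series = [pipe_a, pipe_b, pipe_c]
n = len(pipe_a)
t = np.arange(n)

# Average the raw values at each time step, using whatever data exists.
raw_mean = [np.mean([s[i] for s in series if i < len(s)]) for i in range(n)]

# A spurious upward trend appears when the shorter, cooler series drop out,
# even though no individual pipe ever changed.
raw_slope = np.polyfit(t, raw_mean, 1)[0]

# Anomalize first: subtract each series' own mean, then average.
anom_mean = [np.mean([s[i] - np.mean(s) for s in series if i < len(s)])
             for i in range(n)]
anom_slope = np.polyfit(t, anom_mean, 1)[0]

print(raw_slope)   # clearly positive: a trend created by the averaging
print(anom_slope)  # essentially zero: the anomaly method fixes it
```

The jump happens at step 5, where pipes B and C end and the average leaps from 16/3 to 10; with anomalies every series is identically zero, so no trend can appear.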
"Kudos to you, Mr. Caryl. It will be quite interesting to watch the so-called experts try to manipulate the obvious in a manner that defends their status quo. This is devastating evidence that CO2 is innocent."

Ben D.
September 23, 2010 1:48 am

“Its complicated”. I see that and I always think that means you have no idea what you are talking about and do not understand it whatsoever. If you understood it, you could explain without having to use a research link as a crutch since you do not understand it enough to teach us “simpletons” without resorting to said link.
All of those theories you post are the same worth as any other computer model. If you assert a wrong assumption, the model turns to garbage. Or like most people who are knowledgeable about it: Garbage in….garbage back at ya.
Or we can do this:
Assert: Increased warming seen in arctic temperatures must be caused by man.
Computer output: True.
Scientists: It is because of these physical properties that I came up with while smoking a joint that MUST be causing the data, because we have already proven the case in our computer models.
Environmentalist: Polar bears are going to die!
And all along you never prove the important issue. You state that the assertion must be true in the first line, and that because the computer recycles what it is told to say, the assertion must be true. Here we have one possible interpretation of why the Arctic is warming faster than elsewhere, and it has nothing to do with warming temperatures but rather with warming thermometers.
Your posts on this subject are off-topic, Jose. Here is a quick synopsis of your belief system, which will one day fall when someone like me proves that you are wrong:
Like so:
Assert: Man is causing current warming.
Computer outputs: Thou programmed me to say it is, so it is.
Scientists: Man is causing warming because the computers says so.
Environmentalists: I will troll the internet and release more carbon into the atmosphere by using more bandwidth and more computer time because I am holier than thou! Even my poop does not stink! We shall apply carbon reduction only to non-believers because they belong in the lake of fire! Repent sinners!! The end of the internet is coming! (For you)

September 23, 2010 4:31 am

jose says:
“Smokey: Way to change the topic.”
Didn’t jose notice the three maps in the article showing the Antarctic? Or does his cognitive dissonance create a blind spot covering half the planet?
Jose’s provably wrong statements like “amplified Arctic warming is a completely predictable phenomena (check the models!) associated with increased greenhouse gas…” is a noob mistake here. Climate models are always inaccurate; if they could accurately predict the climate, there would be no debate. But as we know, they’re always wrong. The sensitivity numbers they’re programmed to come up with are preposterously high.
The Arctic is not ice free — but it has been ice free many times in the pre-industrial past. Arctic ice routinely melts, and it has nothing to do with human activity. If CO2 was the cause, Antarctic ice cover wouldn’t be at a record high.
Climate alarmists have bet on the wrong horse. They don’t understand that the real concern is extreme cold, which is the normal state of affairs on our usually frozen planet. A temperature decline of 6 – 10Ā°C will be a true climate catastrophe. It’s hard to grow crops under several hundred feet of glacier ice.

John Finn
September 23, 2010 4:47 am

1. UAH arctic trend is 0.47 deg per decade or ~1.5 deg warming since 1979.
2. A significant decline in Arctic ice extent has been observed over the past 30 odd years which has been particularly prevalent in the last decade.
Bearing these 2 facts in mind, it's worth noting that of the 9 'isolated' stations listed, only 4 have up-to-date records. Most of the others have little or no data after 1990. So we seem to be comparing 'isolated' trends up until 1990 with 'urban' trends up until 2009/10 and deciding that the 'urban' trends are greater. What a surprise!!
I haven't looked at the 4 up-to-date 'isolated' records in any detail, but a quick comparison does suggest that the Jan Mayen (isolated) and Eureka (urban) trends are quite similar over a common period (i.e. 1948-2009).
This highlights another problem with the analysis. You can't simply take the whole of one record and compare its trend to another record of different length. It'd be like taking the whole of the CET record (since 1659), comparing it with the shorter UK record, and deciding that Central England was warming at a much lower rate than the rest of the UK.
I think closer analysis might show the ‘isolated’ and ‘urban’ trends are much closer than one might think from this post.
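John Finn's point about records of different lengths can be illustrated with a small sketch; the `common_period_trends` helper and the toy records below are hypothetical, for illustration only:

```python
import numpy as np

def trend_per_decade(years, temps):
    """OLS slope, converted to degrees per decade."""
    return np.polyfit(years, temps, 1)[0] * 10.0

def common_period_trends(rec_a, rec_b):
    """Compare two station records only over their overlapping years.

    rec_a, rec_b: dicts mapping year -> annual mean temperature.
    """
    common = sorted(set(rec_a) & set(rec_b))
    years = np.array(common, dtype=float)
    ta = np.array([rec_a[y] for y in common])
    tb = np.array([rec_b[y] for y in common])
    return trend_per_decade(years, ta), trend_per_decade(years, tb)

# Toy illustration: two stations with the SAME underlying trend, one
# long record and one short one. Fitted over their full (unequal)
# lengths they could be made to look different; restricted to the
# common period, the trends agree.
long_rec = {y: 0.02 * (y - 1900) for y in range(1900, 2010)}
short_rec = {y: 0.02 * (y - 1900) for y in range(1979, 2010)}
a, b = common_period_trends(long_rec, short_rec)
print(a, b)  # identical once restricted to the common period
```

This is the discipline the comment calls for: trend comparisons are only meaningful over a shared interval.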

Editor
September 23, 2010 4:50 am

RE:
Smokey says:
September 23, 2010 at 4:31 am
Climate alarmists have bet on the wrong horse. They don't understand that the real concern is extreme cold, which is the normal state of affairs on our usually frozen planet. A temperature decline of 6-10°C will be a true climate catastrophe. It's hard to grow crops under several hundred feet of glacier ice.
I would say the good news is that we are likely 1,500 years or so from such conditions. The bad news is that while such a decline is thought, by some, to be a gradual process, it could happen, at least in part, quite rapidly. Several hundred feet of glacial ice would take thousands of years; however, the present green belt could migrate far south in short order.
The topic of this thread, Arctic isolated versus "urban" stations show differing trends, again illustrates that at present we really don't have a clue what the actual global climatic temperature is, let alone what it will be in 100 years. Anthony's surfacestations.org project substantiates the above statement.

Peter Plail
September 23, 2010 5:40 am

Thank you for a rational explanation of the apparently anomalous heating in Northern latitudes. It is outrageous that the so-called professional climatologists didn’t “engage brain” before ringing the alarm bells.

Dave in Delaware
September 23, 2010 5:43 am

Thoughts on Anomaly Temperatures
Temperature is a PROXY for Energy.
The Energy content and the Energy transfer are what you really need to track. Calculating an Anomaly of High Energy air averaged with Low Energy air is only a rough approximation, even if the statistics are pristine. It takes more energy to change the temperature of Humid air.
Three examples where temperature anomaly is not telling the full story
* humidity
* surface
* radiant energy transfer
Humidity
You have probably seen the example (my excerpt from Max Hugoson post at WUWT)
http://wattsupwiththat.com/2010/06/07/some-people-claim-that-theres-a-human-to-blame/#more-20260
Go to any online psychrometric calculator.
*Put in 105 F and 15% R.H. That's Phoenix on a typical June day.
*Then put in 85 F and 70% RH. That's MN on many spring/summer days.
What's the ENERGY CONTENT per cubic foot of air? 33 BTU for the PHX sample and 38 BTU for the MN sample. So the LOWER TEMPERATURE has the higher amount of energy. ... Thus, without knowledge of HUMIDITY we have NO CLUE as to atmospheric energy balances.
———————– (end of excerpt)
So might a better anomaly track temperature in similar humidity areas? Tracking Phoenix with itself might be OK, but maybe we shouldn’t track Minneapolis even with itself, since summer vs winter humidity is significantly different. It has been suggested that Dew Point might be a better indicator than Tmin averaged with Tmax.
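The psychrometric comparison quoted above can be reproduced approximately. This sketch uses the Magnus saturation-pressure approximation and a standard moist-air enthalpy formula, and reports energy per pound of dry air rather than per cubic foot, so the absolute numbers differ somewhat from the quote; the ordering is the point:

```python
import math

def saturation_vapor_pressure_kpa(t_c):
    # Magnus approximation for saturation vapor pressure over water.
    return 0.6112 * math.exp(17.62 * t_c / (243.12 + t_c))

def enthalpy_btu_per_lb(t_f, rh, pressure_kpa=101.325):
    """Moist-air enthalpy per lb of dry air (standard IP-unit formula)."""
    t_c = (t_f - 32.0) / 1.8
    p_v = rh * saturation_vapor_pressure_kpa(t_c)
    w = 0.622 * p_v / (pressure_kpa - p_v)      # humidity ratio, lb/lb
    return 0.240 * t_f + w * (1061.0 + 0.444 * t_f)

phoenix = enthalpy_btu_per_lb(105.0, 0.15)   # hot and dry
minnesota = enthalpy_btu_per_lb(85.0, 0.70)  # cooler but humid

print(round(phoenix, 1), round(minnesota, 1))
# The cooler, more humid air carries MORE energy per unit mass,
# which is exactly the point of the quoted example.
```

So a thermometer alone cannot rank two air masses by energy content; humidity has to come along for the ride.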
Surface
Surface temperatures on land are actually 'near surface' air temperatures 1 to 2 meters above ground. The energy flow has already started its trek toward space. Ocean temperatures, especially from the ARGO floats, are more truly surface or sub-surface measures (before the energy moves to the air). The Heat Capacity (used to determine energy content) of liquid water does not change much with temperature, so 'averaging' warm and cold water is a smaller error than for dry vs humid air. This is why OHC, Ocean Heat Content, has been suggested as a better measure of the Earth's warming or cooling. And finally, because liquid water has a much higher Heat Capacity than air, when energy moves from the ocean to the air (as in an El Nino) a temperature change in the liquid gives rise to a larger temperature change in the air. So again, an Anomaly that averages land surface with ocean temps is another 'apples to bananas' comparison – both fruit, but different texture.
Radiant Energy Transfer
Energy transfer from Earth toward space begins at the true surface, the dirt, grass, pavement, etc. On a clear sunny day, the surface temperature of an asphalt parking lot can be much higher than the air above it (the measured air temperature is then another proxy of the surface). Radiant energy transfer from the surface toward space is proportional to the absolute temperature to the 4th power (T^4). As the average anomaly temperature changes linearly, the energy transfer changes to the 4th power. An anomaly that averages a 5 degC change in the Sahara with a winter time change in Siberia isn’t telling the full energy story.
I have toyed with the idea of an 'anomaly correction' for the radiant effect, but have not actually worked it past the concept stage. The idea would be to take each location, adjust for Radiant Potential (temperature to the 4th power), then compute a Radiant Anomaly on the transformed temperatures. The Radiant Anomaly might then let us compare the Sahara to Siberia in terms of the surface's ability to shed heat. Sort of like the ACE energy metric for hurricanes, but applied to surface temperature.
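The 'Radiant Anomaly' concept above can be sketched numerically. The temperatures and the emissivity below are illustrative assumptions, not measurements:

```python
# Sketch of the 'Radiant Anomaly' idea: compare changes in radiative
# emission (proportional to T^4, T in kelvin) rather than changes in
# temperature alone. All values here are illustrative assumptions.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def emittance(t_k, emissivity=0.95):
    # Grey-body radiant emittance at absolute temperature t_k.
    return emissivity * SIGMA * t_k**4

# The same 5 C anomaly in two very different climates:
sahara_delta = emittance(318.0) - emittance(313.0)   # 40 C -> 45 C
siberia_delta = emittance(238.0) - emittance(233.0)  # -40 C -> -35 C

print(round(sahara_delta, 1), round(siberia_delta, 1))
# An identical temperature anomaly represents a far larger change in
# surface emission in the hot desert than in the Siberian winter.
```

This is why a plain temperature anomaly averaged across the Sahara and Siberia understates how unevenly the surface's ability to shed heat is changing.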

Paul Hildebrandt
September 23, 2010 6:00 am

Murray Duffin says:
September 22, 2010 at 8:42 am
Barrow was served by ski plane in winter and float plane in summer until the airstrip was built, which was less than 20 years ago if memory serves. As I recall the airport matches a jump in the Barrow temperature record.
I flew in and out of Barrow in 1982 in a Wien Air Alaska 737, which is more than 28 years ago.

Tim F
September 23, 2010 6:21 am

"I selected the stations that correspond to those warm grid squares, as well as other stations in the same latitudes."
I am curious as to how these stations were selected. I searched a bit (certainly not an exhaustive search) and found several other stations that were "in the same latitudes". Were there specific criteria used to select the other stations? Why not use all the "arctic" stations for comparison?
Ostrov Vize 79.5 N 77.0 E rural area 1951 – 2010
Danmarkshavn 76.8 N 18.7 W rural area 1951 – 2010
Clyde,N.W.T. 70.5 N 68.5 W rural area 2005 – 2010
Cambridge Bay 69.1 N 105.1 W rural area 1929 – 2010
Hall Beach,N. 68.8 N 81.2 W rural area 1957 – 2010
Cokurdah 70.6 N 147.9 E rural area 1939 – 2010
Hatanga 72.0 N 102.5 E rural area 1929 – 2010
Dudinka 69.4 N 86.2 E 20,000 1906 – 2010
Dzardzan 68.7 N 124.0 E rural area 1936 – 2010
Svalbard Luft 78.2 N 15.5 E rural area 1977 – 2010

John F. Hultquist
September 23, 2010 8:52 am

Lee Kington says: at 4:50 am
"The bad news is that while such a decline is thought, by some, to be a gradual process it actually could happen, or at least part of it, quite rapidly."
I agree and think this idea needs more exposure. The AGW hypothesis has to invoke "tipping points" to get a response from anyone, because otherwise the changes would be so slow in coming as to be unnoticeable – even if the claims were true.
Cold periods, except when caused by volcanic eruptions,
http://www.suite101.com/content/the-year-without-a-summer-1816-a54675
have triggers not yet explained. However, this past April we had two early-morning freezes. Cherries and apples lost all their fruit; walnuts lost all their leaves, re-leafed a month later, and will survive. Three miles south of us and 500 feet lower there was very little damage. But in the wine grape region that begins 30 miles south of us the harvest is two weeks late. Now in late September one can look around and "see that all is normal" – well, we still don't have apples, cherries, or walnuts in our trees. So the point is that not much has to change for local crop failures. Modern growing, shipping, and marketing practices in the USA compensate for such things, so not too many folks notice.
I expect crop failures on the margins to increase if the World cools. History tells us this could proceed quite rapidly. It doesn't tell us why.

George E. Smith
September 23, 2010 9:51 am

“”” Dave in Delaware says:
September 23, 2010 at 5:43 am
Thoughts on Anomaly Temperatures
Temperature is a PROXY for Energy. “””
Dave, I have for some considerable time pointed out that even if it were possible to measure the true average global (surface) Temperature (which it isn't), we still would know nothing about the energy transfers; and the roughly black-body, Stefan-Boltzmann-like fourth-power relationship is one part of that problem.
It is a straightforward problem in calculus and trigonometry to prove that if the Temperature goes through any arbitrary, single-valued, continuous cycle (as a function of time) whose average value is Tzero, then the average of the instantaneous fourth power of that temperature function ALWAYS corresponds to an effective radiating temperature of Tzero + deltaT, for some deltaT greater than zero.
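That claim is Jensen's inequality at work (x^4 is convex), and it checks out numerically; the sinusoidal diurnal cycle below is an illustrative assumption:

```python
import math

# Numerical check: for a temperature cycle with mean Tzero, the
# time-average of T^4 implies an effective radiating temperature
# strictly ABOVE Tzero, because x^4 is convex (Jensen's inequality).
t_zero = 288.0    # mean temperature, K (the oft-quoted global mean)
amplitude = 10.0  # assumed diurnal swing, K
n = 100_000       # fine time sampling over one full cycle

mean_t4 = sum(
    (t_zero + amplitude * math.sin(2 * math.pi * i / n)) ** 4
    for i in range(n)
) / n
t_effective = mean_t4 ** 0.25

print(t_effective)  # slightly above 288 K
```

Analytically the excess is 3*Tzero^2*A^2/2 + 3*A^4/8 inside the fourth power, so the bigger the swing, the bigger deltaT, which is the commenter's point about averages of temperature hiding the radiative picture.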
Now a lot of folks love to point out that the earth surface is NOT a “Black Body”, so they argue that the fourth power thing is not valid.
Well the black body assumption does set a maximum for the amount of radiant cooling that can occur; and many surfaces have a sufficiently constant spectral Radiant emissivity over the range of LWIR wavelengths that can be present in the thermal radiation from that surface at prevailing Temperatures; that simply applying some average emissivity to the BB calculated value from the S-B formula is a respectable value for the actual surface radiant emittance.
Actually the deep oceans behave like a fairly good black body absorber; well, a grey body to be pedantic, since the surface Fresnel reflectance is about 2% (normal) over a fairly wide spectral range, and certainly over the solar spectrum range, and perhaps 3% over the full range of incidence angles. So the deep oceans would be fairly well characterized as a Grey body with 0.97 total emissivity.
Actual LWIR reflectances at typical ocean surface temperatures; are not quite so easy to figure but I would expect the BB (with emissivity of 0.97 would be quite close to reality for the oceans; which after all are 70 % of the total surface.
Employing (with caution) BB radiation theory in the problem also gives us some other inputs to the Green House Gas absorption of surface-emitted LWIR thermal radiation.
If the surface emissions are in fact roughly black-body-like, then it is known that the spectral radiant emittance at the spectral peak of that emission varies as the FIFTH power of the Temperature, and NOT the FOURTH; and the Wien Displacement law moves that peak to shorter wavelengths (~3000/T microns). So the higher the surface Temperature, the further down the thermal radiation tail the CO2 absorption band (15 micron) sits. The total captured energy still goes up with Temperature, but the fraction of the emission spectrum energy that is captured goes down, and more of it escapes the atmosphere. The spectral peak, which is about 10.1 microns for the claimed global average Temperature of 288K, will also move further into the atmospheric window.
On the other hand for colder regions the Wien Displacement moves the thermal radiation peak closer to the CO2 15 micron band; but the surface Total radiant emittance goes down severely for the colder regions.
All of which supports my contention that it is the hottest driest mid day tropical desert regions, that do most of the real radiant cooling (land) . The polar snow and ice regions are quite ineffective in cooling the planet; but if the arctic ocean should become ice free, then the north polar region would become a better cooler for that part of the planet.
And of course although this note is all about radiant cooling; we never lose sight of the fact that the ocean regions do a heck of a lot of cooling via the evaporation/convection mechanism, transporting latent heat into the upper atmosphere.

Dave
September 23, 2010 10:13 am

Mosh>
I get what you’re doing here, and you’re spot on in your principles. I would suggest, though, that you’d have spent less time phrasing your initial post more politely – wearing kid gloves, in your words – than you have done since defending it from people who picked up on your tone and got (wrongly) defensive.
It is my impression that one of the drivers of the antagonism in climate science debates is that a lot of people on both sides think that the best way to tell someone they might be able to improve their work is to get right in their face and shout ‘YOU’RE WRONG’. It’s human nature to be defensive of our work, and whilst it would be nice if everyone was completely detached and unemotional, we have to acknowledge that they’re not. Wearing kid gloves is an entirely worthwhile thing on which to spend a small amount of effort.

George E. Smith
September 23, 2010 10:27 am

“”” Al Tekhasski says:
September 22, 2010 at 4:45 pm
evanmjones wrote: "So we would need over 120,000 stations? That's a lot of stations."
Sure it is. But I am afraid you might need more. “””
Well Al you must be new around here. If you had been visiting here more often, you would know that the general theory of sampled data systems is apparently quite unknown in “Climate Science” Institutions.
So your Nyquist Sampling Theorem is trumped by their Statistical Analysis, and probably the Central Limit Theorem as well. So long as they get the right r^2 value and proper trend line (with a slope error no more than +/-50%; or a 3:1 range) they don’t have to worry about undersampling.
But they are very good at what they call oversampling, which is creating on their computers a whole raft of phony values that nobody measured. They can make as many grid points as they like, limited only by the size of the supercomputer that the taxpayers bought for them. Well, they don't actually measure anything real at all those oversampled grid points. For some reason their computers are not able to go back and predict (excuse me, that's "project") the actual values that would have been read at the handful of real actual global measuring stations. But they can interpolate something fierce.
So in climate science it is legitimate to core-bore a single tree; and from those small sectors of that one-dimensional sample of the three-dimensional tree, in an even bigger forest, you can describe the complete climate history as to Temperature, wind, moisture, sunlight, humidity (maybe I already said that) and anything else you want to know; well, but only for the age of the tree. And you can determine the age of the tree by doing a radiocarbon (14C) assay on some of those pieces of the extracted core. There might be other ways to tell the age of the tree and date the climate conditions, but they probably aren't as reliable as 14C assays.
And due to the coherence of anomalies, it is ok to measure the temperature in downtown San Jose California; and apply that Temperature value to the small town of Loreto about 1/2 way down the Baja, on the Sea of Cortez.
So they used to monitor the weather and climate of the entire arctic (north of +60 degrees) with just 12 total weather stations; now they have some totally huge number like 70-80 .
So I doubt that anybody is going to heed your request for 100,000 sampling locations.
And by the way; just in case you haven't noticed, these climate reporting stations get their daily Temperature from a min-max temperature reading; which gives you two samples during each 24 hour cycle; but since the diurnal temperature variation is not a pure sinusoid, there must be at least a second harmonic 12 hour periodic component present, so they fail the Nyquist criterion for the time variable by at least a factor of two; which means that the aliasing noise makes even the daily Temperature average value unrecoverable. So the spatial aliasing noise is just superfluous; which is why they don't care about it.
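The min/max point above is easy to check numerically. A small sketch (Python; the diurnal shape and its harmonic amplitudes are invented purely for illustration):

```python
import numpy as np

# Synthetic diurnal cycle: a 24 h fundamental plus a 12 h second harmonic.
# The harmonic skews the shape, so (min + max) / 2 no longer equals the mean.
t = np.linspace(0.0, 24.0, 24 * 60, endpoint=False)   # one day, 1-minute steps
true_mean = 15.0
cycle = (true_mean
         + 5.0 * np.sin(2 * np.pi * t / 24.0)          # fundamental
         + 1.5 * np.sin(2 * np.pi * t / 12.0 + 0.7))   # second harmonic

minmax_mean = 0.5 * (cycle.min() + cycle.max())
bias = minmax_mean - true_mean
print(f"true mean {true_mean:.2f}, min/max 'mean' {minmax_mean:.2f}, bias {bias:+.2f}")
```

With these (arbitrary) amplitudes the min/max "daily mean" is off by several tenths of a degree; drop the second harmonic and the bias vanishes, which is exactly the Nyquist argument above.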
But it is good to see somebody else with some understanding of sampled data systems.

September 23, 2010 10:34 am

A little OT but for anyone interested in the new release of the NCDC GHCN v3 beta dataset pop over to Digging in the Clay and have a look at the following thread
http://diggingintheclay.wordpress.com/2010/09/23/ghcn-v3-beta-part-1-a-first-look-at-station-inventory-data/
Verity and I will shortly be publishing Part 2 and Part 3 in a series of threads on the subject of how the GHCN V3 dataset differs from the previous GHCN v2 dataset.
If you are interested in an ‘advanced’ look at the V3 dataset (in a much more user friendly normalised database format than the usual text files), why not pop over to Climate Applications and have a look at the TEKTemp implementation of the NCDC GHCN v3 beta dataset by clicking on the following link.
http://www.climateapplications.com/TEKTempNCDC.asp

E.M.Smith
Editor
September 23, 2010 10:44 am

vukcevic says:
I had a quick look at Svalbard Luft data, comparing summer & winter anomaly.
http://www.vukcevic.talktalk.net/SL.htm
Conclusion:
– Most of annual rise is due to rise in winter's temperatures.
– Summer's anomaly is not main contributing factor.
– Large winter oscillations are confirmation that AGW (CO2) cannot be factor.
– Low S-W correlation again confirms it is not systematic rise in warming

Nicely done!
I'd suggest going one further and looking at it by month. When I did that for stations around the world, I found different months have different trends (rising vs falling) all over the place. Often the most 'warming' comes at the 'shoulder' months between seasons. Just the places where more affluence would let folks turn on the heater a bit earlier in the season or run it a bit later (or clear the snow off the runway more …)
Oddly, stations near each other often had the same months going in opposite directions.
For me, this wide divergence of monthly trend data was a lethal thing to the CO2 thesis as there is no way for it to selectively act by month, and by slight geographical shifts.
Lots of nice graphs here:
http://chiefio.wordpress.com/2010/04/22/dmtdt-an-improved-version/
Two of my favorite graphs from it are Hobart and Darwin (where Darwin is cooling)
http://chiefio.files.wordpress.com/2010/04/hobart.png
http://chiefio.files.wordpress.com/2010/04/darwin.png
The original work:
http://chiefio.wordpress.com/2010/04/15/dmtdt-climate-change-by-the-monthly-anomaly/
Australia got it’s own posting:
http://chiefio.wordpress.com/2010/04/18/australian-anomaly-walkabout/
Though one of my favorite graphs is Japan, from this posting:
http://chiefio.wordpress.com/2010/04/26/dmtdm-a-northern-view/
that looks at northern hemisphere countries and regions.
http://chiefio.files.wordpress.com/2010/04/210_japan.png
Where it looks like the trends alternate rising vs falling on alternating months… but with February and July nearly dead flat trendless.
Magical stuff this CO2 …

Maud Kipz
September 23, 2010 11:13 am

George E. Smith says:
September 22, 2010 at 2:28 pm
ā€œā€” Maud Kipz says:
September 22, 2010 at 9:08 am
the R^2 value (the statistical significance for the trend) is very low, 0.023
This betrays a fundamental misunderstanding of statistics. The R^2 value is a measure of linear dependence between two (random) variables, and nothing more. A trend with extremely high significance may still have a low R^2 if the trend is non-linear or if the observations are noisy. ā€œā€”
All of which is very nice; as are the computations of any other (completely fictional) branch of Mathematics.
But NONE of it, establishes ANY Physical cause and effect relationship.
You can carry out the very same statistical analysis on the numbers in your local telephone directory; and derive the same quantities; and it still means nothing; unless the average telephone number in the book, happens to be your telephone number.

I don’t understand your joke about branches of mathematics. But unless you’re using Karl Popper as a prescription not to think, you’d realize that an event having low probability under the assumption of no causal effect is (potentially) evidence of some causal effect.
Invoking Shannon-Nyquist, I think, is distracting. We’re trying to recover the first moment from spatial data, not perfectly reconstruct a temporal signal. At least please explain why the hypotheses of the theorem are satisfied in this case.
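Maud's R^2 point can be demonstrated directly. A sketch (Python with NumPy; the trend and noise levels are made-up numbers chosen to mimic the 0.023 figure from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = np.arange(n, dtype=float)
y = 0.0005 * x + rng.normal(0.0, 10.0, n)   # tiny real trend buried in big noise

r = np.corrcoef(x, y)[0, 1]
r2 = r * r                                   # small: the fit "explains" little variance
t = r * np.sqrt((n - 2) / (1.0 - r2))        # yet the slope's t-statistic is huge
print(f"R^2 = {r2:.3f}, t-statistic = {t:.1f}")
```

Here R^2 comes out around 0.02, yet the t-statistic is in the double digits: the trend is real and highly significant even though R^2 is tiny, which is exactly the distinction being argued.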

E.M.Smith
Editor
September 23, 2010 11:43 am

Al Tekhasski says:
evanmjones wrote: “So we would need over 120,000 stations? That's a lot of stations.”
Sure it is. But I am afraid you might need more. The example of stations 50 km apart having opposite long-term trends means that we don't know what the trend is in between, and what is around in the same proximity. […]
More, we still have no idea if the 25x25 km is enough to capture the complexities of local micro-climates, so be prepared for another halving of the scale, which would quadruple the number of necessary stations. Without this uniform sampling grid of data it is not serious to discuss any mathematics of subsets or else. This is what physical science says. Sorry.

Al, you may like this article where I look at some of the mathematical issues of sampling surface temperature. As topography is fractal (mountains, coastlines, etc.), the temperatures over it ought to also be fractal. A black pebble next to a snow melt stream will have quite divergent temperatures… Measuring a fractal gives different answers based on the 'size ruler' you use. And we're measuring 'climate change' with a ruler whose size constantly changes over time.
http://chiefio.wordpress.com/2010/07/17/derivative-of-integral-chaos-is-agw/
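The "ruler size" point can be sketched numerically (Python; a seeded random walk stands in for a rough temperature or terrain profile, which is an assumption, not real data):

```python
import numpy as np

rng = np.random.default_rng(1)
path = np.cumsum(rng.normal(0.0, 1.0, 2**14))  # Brownian-like profile: rough at every scale

def measured_length(series, stride):
    """Total variation seen when the 'ruler' only samples every `stride` points."""
    coarse = series[::stride]
    return float(np.abs(np.diff(coarse)).sum())

lengths = {s: measured_length(path, s) for s in (1, 4, 16, 64)}
for s, length in lengths.items():
    print(f"ruler stride {s:>2}: measured length {length:,.0f}")
```

Each 4x coarsening of the ruler roughly halves the measured length; there is no single "true" length, which is the defining property of measuring a fractal.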
Ben D. says: I will not say that the anomaly approach is incorrect, but there are issues with it as well. As far as I can tell, it's the best method known right now, but it is not perfect. To argue that it's the end-all is kind of ignoring the issues that it also brings up.
Very well put. Also, there are many kinds of anomaly method and they have many different modes of failure. One of the simplest to ‘get’ is that of the splice artifact.
It doesn't much matter if you use anomalies or not: when you take a station that warms 1 C as it grows, then in a later decade add a new station that warms 1 C as it grows, then in the final decade swap to a third station that warms 1 C as it grows, and tack them all together, you get a 3 C "warming trend". It matters not if this is done via averaging, direct splicing, or "homogenizing" using each to "adjust" neighbors.
The temperature series codes like GIStemp are FULL of that. And anomalies make it easier to have happen rather than harder. (No station has to reach unheard-of record highs and call attention to itself…) I saw this effect in the region near Marble Bar Australia, where an all-time record was set back near the '30s and never exceeded, yet the 'region' has a 'warming trend'.
So that's why I periodically anchor myself back in 'real temperatures' and why I start by looking at the profile of 'real temperatures'. I've taken a great deal of flak from folks asserting that it is sheer stupidity to do that, and they are wrong. It is only stupid to think that they accurately show the temperature trend. Just as it is stupid to think that averaged anomalies show the accurate temperature trend, if for no other reason than 'splice artifacts'. IMHO, the 'splice artifact rich' nature of the 'homogenizing' makes the temperature series codes a target-rich environment.
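The splice artifact described above is easy to reproduce with toy numbers (Python; the three "stations" and their 1 C local warmings are invented for illustration):

```python
import numpy as np

# Three hypothetical stations, each warming 1 C over its own 40-year record.
seg = np.linspace(0.0, 1.0, 40)
stations = [10.0 + seg, 12.5 + seg, 9.0 + seg]  # arbitrary absolute levels

# Naive anomaly splice: express each record relative to its own start,
# then chain it onto the end of the combined series.
joined, level = [], 0.0
for st in stations:
    joined.extend(level + (st - st[0]))
    level = joined[-1]
joined = np.array(joined)

rise = joined[-1] - joined[0]
print(f"each station warms 1.0 C, but the spliced series warms {rise:.1f} C")
```

No individual station ever warmed more than 1 C, yet the chained record shows 3 C: a pure artifact of the joining, independent of the stations' absolute levels.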
But wait, there is more…
To clarify, anomalies are based on the area-weighted global average,
For one kind of anomaly…
The climate codes use a "grid / box to grid / box" anomaly. They have one set of thermometers in the box at the baseline and a different set now. This is horridly broken. It would be like me saying cars have gotten faster as my home "grid / box" had an average VW fastback in 1970 and has an average Mercedes SL now, and the "max speed anomaly" has risen by 55 mph.
Yes, they take steps to mitigate the problem. But mitigation is not perfection. We are basically betting the global economy on the perfection of their mitigation and coding (and their coding is fairly sloppy.)
So I started by looking at plain temperatures, and found things that were not in keeping with AGW and CO2 theories. Then moved on to anomalies. But I wanted a more controlled beast. So I do anomalies only “self to self” for a given thermometer.
This brings up the issue of ‘baseline’. But you don’t need a baseline to do anomalies. ONE kind of anomaly is based on a common period of time, the baseline. But as Steve Mosher points out, you need a common time period in your baseline. And many thermometers don’t have it. So a whole load of ‘homogenizing’ and ‘splicing’ (that GISS calls joining) and box to box and infill and… well, junk… gets done to try to make a complete enough record to use a ‘baseline’ method. And IMHO it adds too much error to be usable to 1/10 C. But you can use non-baseline anomalies, such as First Differences. And I use one like that (but fixing an issue in First Differences that makes it fail on data with lots of gaps in it.)
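A minimal sketch of a First Differences style combination (Python; the gap handling here is my own simplification, not the author's actual fix):

```python
import numpy as np

def first_diff_combine(records):
    """Combine station records (stations x years, NaN = missing) by averaging
    each year's available year-to-year changes, then cumulating them.
    No common baseline period is required."""
    diffs = np.diff(records, axis=1)        # NaN wherever a year is missing
    mean_diff = np.nanmean(diffs, axis=0)   # average the changes that do exist
    mean_diff = np.nan_to_num(mean_diff)    # years with no data at all: hold flat
    return np.concatenate([[0.0], np.cumsum(mean_diff)])

years = np.arange(50, dtype=float)
a = 10.0 + 0.02 * years                     # same 0.02 C/yr trend,
b = 25.0 + 0.02 * years                     # very different absolute level
b[15:20] = np.nan                           # and a gap in one record
combined = first_diff_combine(np.vstack([a, b]))
trend = (combined[-1] - combined[0]) / (len(years) - 1)
print(f"recovered trend: {trend:.3f} C/yr")
```

The recovered trend matches the stations' common 0.02 C/yr despite the gap and the 15 C offset between absolute levels; no baseline period was ever defined.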
To put it simply, you can use this approach in the data above and it will probably change what you see simply because of the transformation of the data so to speak.
BINGO! And that is what the temperature data codes like GIStemp do. They change the data. So we end up with the past cooling by whole degrees…

But I must also interject here and say one thing: If everyone uses the same method and that is ALL they use, how would we know this method is actually "correct"? I might be playing devil's advocate there, but at some point I will take a shot at adjusting the data myself and the first thing I would do is NOT use the anomaly system.

You have it exactly right. The three major labs all use the same basic approach, data, and methods with all the same flaws. Any attempt to look at it from a different angle gets rocks thrown at you (though it does point up their flaws…). And yes, starting from the basic temperature data gives you the context to know when something is straying from reality.
http://chiefio.wordpress.com/2010/04/03/mysterious-marble-bar/
And a whole lot more in:
http://chiefio.wordpress.com/category/dtdt/

Briney Eye
September 23, 2010 12:15 pm

Re: “the warm glow inside the Stevenson Screen is just the color temperature of an incandescent light bulb”
I am a skeptic because I see lots of problems with the data used by the "climate disruption" camp, but in the interest of complete honesty I feel compelled to point out an alternative source for the illumination.
The gentleman on the step appears to be taking a picture of the inside of the enclosure, and the light could very well be coming from the focus-assist LED. They can be quite bright. I took a very charming picture of my granddaughter last Christmas in which she is squinting because of it, and is well illuminated by that characteristic "warm glow".

Paul Nevins
September 23, 2010 12:41 pm

Thanks for bringing back one of my favorite pictures ever, the Stevenson screen at Verhojansk. I just love the light bulb inside. No spurious warming from that…

Paul Nevins
September 23, 2010 1:12 pm

Briney
I believe I have seen a picture from a different angle and it is in fact a light bulb.

George E. Smith
September 23, 2010 2:01 pm

“”” Briney Eye says:
September 23, 2010 at 12:15 pm
Re: "the warm glow inside the Stevenson Screen is just the color temperature of an incandescent light bulb"
I am a skeptic because I see lots of problems with the data used by the "climate disruption" camp, but in the interest of complete honesty I feel compelled to point out an alternative source for the illumination.
The gentleman on the step appears to be taking a picture of the inside of the enclosure, and the light could very well be coming from the focus-assist LED. They can be quite bright. I took a very charming picture of my granddaughter last Christmas in which she is squinting because of it, and is well illuminated by that characteristic "warm glow". """
Well I have never seen a digital camera on which the "focus assist" LED was anything but a RED LED; and if you look more closely you will see there isn't any light on the open door outside the owl box. That certainly isn't a white LED or even a warm-white LED, and it is definitely NOT the color of an AlInGaP yellow/amber LED; which although extremely bright, is visually very close to a sodium yellow (589.0/589.6 nm).
And hitting that exact yellow color is extremely difficult, since the human eye sees only about a 5 nm range of wavelength as yellow. Outside of that it would be gold on the long wavelength end, and grellow on the short wavelength end; so making yellow LEDs with a standard color is quite a chore, so manufacturers aren't going to mess with the color; unless somebody wants to pay for several tons of LEDs.
Long and the short is, it isn’t a camera LED.

George E. Smith
September 23, 2010 2:39 pm

“”” Maud Kipz says:
September 23, 2010 at 11:13 am
George E. Smith says:
September 22, 2010 at 2:28 pm
ā€œā€ā€ Maud Kipz says:
September 22, 2010 at 9:08 am
the R^2 value (the statistical significance for the trend) is very low, 0.023
This betrays a fundamental misunderstanding of statistics. The R^2 value is a measure of linear dependence between two (random) variables, and nothing more. A trend with extremely high significance may still have a low R^2 if the trend is non-linear or if the observations are noisy. ā€œā€ā€
All of which is very nice; as are the computations of any other (completely fictional) branch of Mathematics.
But NONE of it, establishes ANY Physical cause and effect relationship.
You can carry out the very same statistical analysis on the numbers in your local telephone directory; and derive the same quantities; and it still means nothing; unless the average telephone number in the book, happens to be your telephone number.
I don't understand your joke about branches of mathematics. But unless you're using Karl Popper as a prescription not to think, you'd realize that an event having low probability under the assumption of no causal effect is (potentially) evidence of some causal effect.
Invoking Shannon-Nyquist, I think, is distracting. We're trying to recover the first moment from spatial data, not perfectly reconstruct a temporal signal. At least please explain why the hypotheses of the theorem are satisfied in this case. """
Item #1 Maud. ALL of mathematics is pure fiction; we made it all up in our heads out of whole cloth. To put it another way:
There is absolutely nothing in any branch of mathematics which exists in the real universe. There are no points, or lines, or planes, etc.; those things are a figment of our imagination.
The formula x^2 + y^2 + z^2 = a^2 does not provide for anything like 8 km high mountains on its surface.
So I wasn’t picking on any branch of mathematics; merely pointing out that mathematics has the properties that we build into it. It is useful for describing the behavior of MODELS; which it can do with great exactitude if we choose; but that does not mean those models actually replicate reality. Of course we hope they do to some extent; but it is the models that our mathematics describes; not the real universe. So no matter how close the correlation it still doesn’t prove causality.
I have described on several occasions an incident in the late 1960s – early 1970s where a mathematical model was presented which produced the exact value of a fundamental physical constant (the fine structure constant) to less than 2/3 of the standard deviation of the very best experimental measurement of the fine structure constant (at that time); a value that is known to parts in 10^8.
So if the mathematical derivation agrees with experiment to a part in 10^8, it surely must be correct; most people would think.
Well it wasn't; the model had absolutely no input data or parameters from the physical universe; it was purely mathematics; and subsequently it was shown that a slightly different version of the same quite fictitious model gave a value that was more than twice as accurate; and every bit as phony.
But as to the invocation of Nyquist-Shannon; which you dismiss as “Distracting”; you’re darn right it is distracting. Perish the thought that we should deal with any realities.
You say you aren't interested in a reconstruction; just a trend. I'll grant you that; I wasn't too interested in a reconstruction either; in fact the only thing I was interested in was the average value of the sampled function; I didn't even care about the trend; just the average value.
And as you know from your Nyquist-Shannon, the average value of the function is simply the DC component of the signal spectrum; and the Nyquist criterion tells us that if we undersample a band-limited signal by just a factor of two, then the reconstructed spectrum folds back all the way, to produce aliasing noise at zero frequency; which of course is the average value that was being sought.
So if you do a min-max twice daily sampling of the Temperature at a point; and the diurnal temperature cycle is non-sinusoidal but contains at least a 12 hour period second harmonic component or higher; then the min-max average daily Temperature computation is erroneous because of Nyquist.
So even without the need for a reconstructed original continuous signal; you can't extract even the average value of a sufficiently undersampled signal. And of course the global spatial undersampling is just a joke; orders of magnitude under what is needed simply to get a correct average; not to reconstruct the original signal.
And no the Central Limit theorem cannot buy you a reprieve from a Nyquist violation.
By the way; in ordinary Euclidean Geometry a circle is simply a special case of an ellipse. There are alternative Geometries and in one of them a circle is not an ellipse; it is a special case of a Hyperbola; and like all hyperbolas it is an infinite sized object. And every possible circle happens to pass through the same two fixed points.
But every one of the historical geometry theorems of Euclid can be proved in this alternative geometry; which doesn't even have the capability of proving that any more than seven "points" exist.
So yes mathematics is fictional as are all of our scientific models; they only approximate the real world; but we can explain their behavior very well using our equally fictional mathematics.
Only Mother Gaia has a good enough model to be able to mimic reality; and she does it all the time; she always gets the right answer.

Walter Orr
September 23, 2010 5:17 pm

Ed,
I am a northern resident who has spent recent time in all of the Canadian arctic communities which you mention in this post. As such, your categorization of Inuvik, Cambridge Bay and Eureka as ‘Urban Stations’ beggars belief.
Inuvik's weather station is located at the airport, more than 10 km from the town itself, and more than 250 m from the nearest heated building. There is no significant heat effect from the town on that weather station.
Cambridge Bay’s airport and weather station are more than 2 km from the Town itself, with the weather station located more than 80 m from the nearest heated building.
Eureka is a site established as a weather station, and primarily manned for that purpose. As you state, there are virtually never more than 20 people there, and normally fewer than 10. To argue that it is 'urban' is so ludicrous that it casts doubt on anything else you say, however much merit it may have.
Please don't 'sex up' your point by using such poor examples. Your analysis will stand or fall on its own merits, and referring to these stations as urban does your argument a disservice.

Al Tekhasski
September 23, 2010 5:58 pm

George E. Smith wrote:
“And as you know from your Nyquiest Shannon, the average value of the function is simply the DC component of the signal spectrum; and the Nyquist Criterion tells us that it we indersample a band limited singal by just a factor of two, then the reconstructed spectrum folds back all the way, to produce aliassing noise at zero frequency; which of course is the average value that was being sought.”
Oh man, I am afraid you have touched the spot of knowledge that is completely foreign to climatologists 🙁 (BTW, sorry to ask, is the term "climatard" acceptable on this board?). But we need to cut some slack here. I have seen professional engineers who were designing a temperature measurement system with a 10 Hz sampling rate, while the input had a 50% noise level with spectrum up to several GHz and was completely unfiltered. No wonder they frequently saw some "unexplained" jumps in readings of tens of degrees C (!!!) (when the system would switch to a different program).
“So if you do a min-max twice daily sampling of the Temperature at a point”
The daily sampling of data could be OK. It could be interpreted as taking a half-turn Poincaré map of the quasi-periodic trajectory (of weather). (Oh, again, this gobbledygook will clearly be unforgiven!). The trouble is that (a) they do not take the data precisely at the same time, and (b) most thermometers are of min-max type; they "remember" min and max, but do not remember when these min-max happened. My sympathy to the glorious climate mathematicians who embark on such a mess. 🙂
BTW, among all digicams I had, all had a yellow focus assist light. Go figure…
Cheers,
– Al Tekhasski

Al Tekhasski
September 23, 2010 8:01 pm

E.M.Smith says:
“Measuring a fractal gives different answers based on the ā€˜size rulerā€™ you use. And weā€™re measuring ā€˜climate changeā€™ with a ruler whoā€™s size constantly changes over time.”
Yep, exactly. Except it is not a ruler; it's more likely a "wet finger". Measuring fractals in practice is a non-trivial task; it is called the "Hausdorff dimension" in certain scientific circles far from climatology. Been there, tried that. Given the substantial level of unrelated noise from instruments, it is nearly impossible. At least an attempt would require many "halvings" of the ruler scale, a thing that is hardly feasible in climatology.
It is well known that models with a progressively finer grid scale can approach the level of detail of reality, while crude sampling schemes are totally off. Precipitation patterns are one example. Here is another good example, about "disappearing glaciers":
http://www.nichols.edu/departments/Glacier/impact_of_sampling_density_on_gl.htm
The initial impression was that the glacier's annual balance was decreasing on the scale of a decade, when they used their normal sampling density of 12 stakes/km2. But when they re-calculated the balance using a sampling density of 388 points/km2, the deficit in ice balance disappeared! What if the sampling density were 1,200 points/km2? Maybe the glacier was advancing, not retreating, after all? This heretical thought was apparently rejected at the stage of peer review…
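The sampling-density effect has a simple spatial-aliasing analogue (Python; the "mass balance field" below is a contrived toy, not the actual glacier data):

```python
import numpy as np

# Toy mass-balance field along a 1 km transect: a mild overall deficit plus
# strong small-scale structure at 8 cycles/km.
def balance(x_km):
    return -0.1 + 0.5 * np.cos(2 * np.pi * 8 * x_km)

fine = balance(np.linspace(0.0, 1.0, 4000, endpoint=False)).mean()
coarse = balance(np.arange(8) / 8.0).mean()  # 8 stakes/km, phase-locked to the bumps

print(f"dense sampling: {fine:+.2f} m/yr   sparse sampling: {coarse:+.2f} m/yr")
```

The sparse stake network happens to sit on the crests of the small-scale pattern and reports a healthy +0.40 m/yr, while dense sampling recovers the true -0.10 m/yr deficit: undersampling does not merely add noise, it can flip the sign of the answer.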
Also, do they monitor all 350 glaciers in the North Cascades with a 388 pt/km2 sampling density? Hell no. If anything, only a handful of glaciers are monitored:
http://www.nichols.edu/departments/Glacier/quick_map_edited.jpg
Yet they arrived at the unconditionally certain conclusion that the global glaciosphere is shrinking like never before, all thanks to man.

Maud Kipz
September 23, 2010 8:52 pm

E. Smith:
My apologies. Reading your response I see I confused Al Tekhasski’s comments about spatial sampling with yours about temporal sampling. Temporal sampling is where Shannon-Nyquist most naturally applies. But consider that reporting the minimum and maximum is not equivalent to twice-a-day sampling. Sampling is occurring at a higher frequency and it is just the lowest and highest order statistics that are reported. In a similar (toy) situation, given iid samples from a uniform distribution on interval [a,b] and trying to estimate a and b, it turns out that the lowest and highest order statistics taken together are complete sufficient statistics. In other words, they’re all you need to know if you want to know a and b.
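Maud's uniform-distribution example can be checked directly (Python; the endpoints 3 and 7 and the sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
a_true, b_true = 3.0, 7.0
x = rng.uniform(a_true, b_true, 10_000)

# For U(a, b) the sample min and max are jointly sufficient for (a, b); the
# standard bias correction pushes each outward by (max - min) / (n - 1).
n = len(x)
lo, hi = x.min(), x.max()
a_hat = lo - (hi - lo) / (n - 1)
b_hat = hi + (hi - lo) / (n - 1)
print(f"a_hat = {a_hat:.4f}, b_hat = {b_hat:.4f}")
```

Only two numbers out of ten thousand were kept, yet both endpoints come back to within a few thousandths, illustrating why reporting just min and max is not the same thing as having only two samples.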

Al Tekhasski
September 23, 2010 10:24 pm

Maud Kipz wrote:
“Temporal sampling is where Shannon-Nyquist most naturally applies.”
Duh. The Shannon-Nyquist-Kotelnikov-Whittaker Theorem, a.k.a. the Cardinal Theorem of Interpolation Theory, or simply the Sampling Theorem naturally applies to any attempted interpolation of any smooth bandwidth-limited function regardless of particular physical meaning of its domain, and regardless of its dimensionality. It is a mathematical fact. Even if no lines or differentiable manifolds exist in Nature, Mathematics is a precise language that allows us to express relationships between objects and derive conclusions with absolute certainty.
“it turns out that the lowest and highest order statistics taken together are complete sufficient statistics. In other words, theyā€™re all you need to know if you want to know a and b”
Sufficient for what? Given that (a+b)/2 has no physical meaning, and therefore no physical law could be used or applied to "explain" its behavior, it is sufficient for nothing. That's why we are here.

Al Tekhasski
September 23, 2010 11:26 pm

Dave in Delaware wrote:
“Temperature is a PROXY for Energy.”
Yes, the temperature is. However, the "Global (average) Temperature" of a non-uniformly heated globe in outer space is a proxy for NOTHING.
Example:
Let a planet have only two climate zones: a 50% equatorial area with flat temperature T1, and a 50% polar area with T2. Consider the following sequence of temperatures ("climate change"):
(A) T1=295K, T2=172.8K
(B) T1=280K, T2=219.4K
(C) T1=270K, T2=236.9K
(D) T1=260K, T2=249.8K
The zonal temperatures in this sequence of "climates" are such that they give the same global OLR of 240 W/m2; no change in any energy flux has happened. Yet the "global average temperature" (T1+T2)/2 grew from 234 K (case A) to 255 K (case D), an increase of 21 K, all without ANY change in radiative balance. So is your global average a proxy for energy? No.
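Al's four cases are easy to verify with the Stefan-Boltzmann law (Python; this just replays the numbers given in the comment):

```python
import numpy as np

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

cases = {"A": (295.0, 172.8), "B": (280.0, 219.4),
         "C": (270.0, 236.9), "D": (260.0, 249.8)}

for name, (t1, t2) in cases.items():
    olr = 0.5 * SIGMA * (t1**4 + t2**4)  # area-weighted outgoing longwave, 50/50 zones
    mean_t = 0.5 * (t1 + t2)             # the "global average temperature"
    print(f"case {name}: OLR = {olr:.1f} W/m^2, mean T = {mean_t:.1f} K")
```

All four cases radiate essentially the same 240 W/m^2, while the arithmetic mean temperature climbs about 21 K from A to D: averaging T rather than T^4 is what breaks the link to energy.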

George E. Smith
September 24, 2010 11:04 am

“”” Maud Kipz says:
September 23, 2010 at 8:52 pm
E. Smith:
My apologies. Reading your response I see I confused Al Tekhasski's comments about spatial sampling with yours about temporal sampling. Temporal sampling is where Shannon-Nyquist most naturally applies. But consider that reporting the minimum and maximum is not equivalent to twice-a-day sampling. """
Well Maud perhaps you should unconfuse yourself; because I made NO RESTRICTION of my comments to just temporal sampling; I'm in total agreement with Al Tekhasski that the Nyquist sampling theorem applies to any multi-variable system of mathematics; whether it is connected to any real physical system or not. It is a property of mathematics; not of Physics.
In the case of Climate or weather science the Temperature (of the earth) can be considered to be a map diagramming in space and time what the Temperature is (everywhere) and at any time; well we typically limit it to surface or near surface temperature. That Temperature is a single valued continuous function of time and space; giving a unique value for any point on the surface for any instant of time. Now we can’t practically keep track of such a double infinity of information; so of necessity we have to sample it; and we have to sample it both in time and in space; and mathematically it doesn’t matter in what order we treat the two (in this case) variables.
In principle we can take a snapshot of the entire surface at any instant, and get a Temperature for any point. Now it is a simple integration problem to multiply each sampled Temperature by the elemental area for which it is valid (say one square km, or metre, or even mm) and then add them all up, and divide by the total area of the surface; and voilà, we have the average Temperature for the surface for that instant of time. So we repeat that one second later and get a new average, and so on, and then add those all up and divide by the total elapsed time.
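That area-weighted snapshot average can be sketched on a regular lat-lon grid, where each cell's area scales as the cosine of latitude (Python; the temperature field is a made-up warm-equator/cold-poles toy):

```python
import numpy as np

lats = np.linspace(-87.5, 87.5, 36)               # 5-degree cell centers
lat2d = np.broadcast_to(lats[:, None], (36, 72))  # 36 x 72 lat-lon grid

temps = 30.0 - 50.0 * np.sin(np.deg2rad(lat2d))**2  # warm equator, cold poles

w = np.cos(np.deg2rad(lat2d))                     # elemental area ~ cos(latitude)
plain_mean = temps.mean()
area_mean = float((temps * w).sum() / w.sum())
print(f"plain mean {plain_mean:.2f} C, area-weighted mean {area_mean:.2f} C")
```

Skipping the area weighting over-counts the tiny polar cells; here the unweighted mean comes out about 8 C colder than the properly weighted one, purely from the bookkeeping.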
Well because of known quite cyclic variations over time; diurnally and over the course of a year; we would likely want to average a whole year of observations; to take into consideration expected variations due to orbital position and daily rotation and the like.
And of course the order doesn't matter. We choose to average at least daily for each and every sampling station; so we actually do the time averaging first. That doesn't change the result if we do it properly; but we don't, because a min-max average temperature is only valid (in the Nyquist sense) for certain kinds of diurnal cyclic variation; and Nyquist itself says only for a single-frequency sinusoidal cycle of Temperature, since twice-a-day sampling is the minimum required for equidistant samples of a single frequency. Now generally a fixed regimen of 12-hourly sampling is still inadequate to reconstruct the complete cycle; that is a degenerate case; so we can't recover the amplitude of the cycle; just the average. Min-max sampling of a sinusoid has the added benefit that it does recover the amplitude as well; even though we just want the average.
But perish the thought that it should warm faster in the morning than it cools after sundown; which will introduce at least a second harmonic component; and don't even talk about clouds passing by.
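The fixed-clock-time version of that failure can be sketched numerically (Python; the two-harmonic diurnal cycle is invented, matching the shape the discussion assumes):

```python
import numpy as np

def cycle(t_hours):
    """Asymmetric diurnal cycle: 24 h fundamental plus a 12 h second harmonic."""
    return (15.0 + 5.0 * np.sin(2 * np.pi * t_hours / 24.0)
                 + 1.5 * np.sin(2 * np.pi * t_hours / 12.0))

# Two fixed readings per day, 12 hours apart, at various station clock times.
recovered = {h: 0.5 * (cycle(h) + cycle(h + 12.0)) for h in (0.0, 3.0, 6.0, 9.0)}
for h, m in recovered.items():
    print(f"readings at {h:02.0f}:00 and {h + 12:02.0f}:00 -> 'daily mean' {m:.2f}")
```

The 24 h fundamental cancels between readings 12 hours apart, but the 12 h harmonic aliases straight into the answer: the recovered "daily mean" ranges from 13.5 to 16.5 around a true mean of 15, depending only on which clock times the station happens to use.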
But I am on exactly the same page as Al here. NO! We don't need to be able to reconstruct the original continuous function; although in signal processing, which is the most common application, we would do that in effect just to get the average.
But you see in the case of the global temperature, other than the diurnal cycle, we do in fact reconstruct the continuous function.
That is, the samples measured at each weather station (for the daily average), when plotted on a map and smoothly interpolated, are exactly what happens in any real communication network where sampled signals are transmitted and processed for recovery; they basically do an interpolation that conforms to rules well understood by signal processing engineers.
But the problem for climatology is that the violation of the sampling theorem with globally sampled temperature data makes even the global average unrecoverable.
So we don’t have any idea what the global mean surface temperature is; and that is a failing of the sampling stratagem; it is not a limitation of our instrumentation or of mathematics (including statistics).
And note Al’s remonstration that the rules for sampled data processing are purely mathematical and involve no need for any connection to any real world phenomenon; it is strictly a (fictional) mathematical discipline, that has practical applicability to many real world systems.
And your Statistics likewise is a purely (fictional) mathematical discipline; one that can be applied to any set of data, even totally fictitious made-up sets of data, or real-world numbers from some physical system (like the climate).
And that is why I threw out the telephone directory example.
I can make up a totally arbitrary set of data (numbers) and give it to you to do your statistical wizardry on; and the results you get are no less valid (or meaningful) than a set of experimentally measured data values that might be obtained in the most well behaved, well understood physical experiment. And that is so BECAUSE your statistics is a property of a set of mathematical axioms on which statistical theory is built. Statistics is NOT a property of the Physical universe; but we can use it usefully to describe certain properties of that physical universe.
But always remember Gödel's incompleteness theorem: no formal mathematical system is complete, in the sense that there will always be problems which can be meaningfully stated within the axioms of that mathematics, but which cannot be answered rigorously within those same axiomatic constraints. I believe this is his result on "undecidability."
The mathematics called "Projective Geometry," which is a plane geometry, has three very simple axioms:
#1 Two Points define a line (joining them)
#2 Two lines define a point (their intersection)
#3 There are at least 4 points.
That’s it. The four points are usually drawn at the corners of a kite, or the southern cross.
Using #1 you can construct the lines joining those four points; and because of #2, you will find that the three pairs of lines so drawn will locate three more points. In fact that is the first theorem of projective geometry: that there are at least seven points.
Constrained by those axioms and the definitions of some procedures, it is impossible to prove there are any more points. There may be, probably are, but you can't prove it within projective geometry.
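In homogeneous coordinates both construction axioms reduce to a single cross product (the line through two points, and the meet of two lines), so the seven-point theorem can be checked directly. A minimal sketch, with the four base points chosen arbitrarily as the corners of a unit square:

```python
import numpy as np

# Homogeneous coordinates (x, y, w): the line through two points, and the
# point where two lines meet, are both given by the cross product.
A, B, C, D = map(np.array, [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)])

# Axiom #1: the six lines joining the four base points, grouped in opposite pairs.
pairs = [(np.cross(A, B), np.cross(C, D)),
         (np.cross(A, C), np.cross(B, D)),
         (np.cross(A, D), np.cross(B, C))]

# Axiom #2: each pair of opposite lines meets in a "diagonal" point,
# giving the three extra points of the first theorem (seven in all).
diagonals = [np.cross(l1, l2) for l1, l2 in pairs]
for p in diagonals:
    # A last coordinate of 0 marks a point on the line at infinity.
    print(p, "(at infinity)" if p[2] == 0 else "")
```

With this choice of base points, two of the three new diagonal points have last coordinate 0, i.e. they sit on the line at infinity where the square's "parallel" sides meet; the third is the center of the square.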
Some will immediately object that parallel lines do not intersect at a point. Axiom #2 says they do; and adds that parallel lines meet at a point on "The Line at Infinity," which conveniently one can always draw on the page. Is it not true that two railway tracks merge to a single point on the horizon? The line at infinity is important in other ways: parabolas touch the line at infinity at two coincident points, but ellipses never reach the line at infinity. Hyperbolas, on the other hand, intersect the line at infinity at two non-coincident points.
There are two special points on the line at infinity; they are called the "Circular Points at Infinity." So you can consider the line at infinity as defined to be that unique line that joins the two circular points at infinity. All circles (every one) pass through the circular points at infinity; so that means that circles are hyperbolas, not ellipses.
Within projective geometry, this doesn't introduce any inconsistencies. But Gödel still says there are undecidable problems within that confinement.

George E. Smith
September 24, 2010 11:47 am

""" Al Tekhasski says:
September 23, 2010 at 5:58 pm
George E. Smith wrote:
"And as you know from your Nyquist-Shannon, the average value of the function is simply the DC component of the signal spectrum; and the Nyquist criterion tells us that if we undersample a band-limited signal by just a factor of two, then the reconstructed spectrum folds back all the way, to produce aliasing noise at zero frequency; which of course is the average value that was being sought."
Oh man, I am afraid you have touched the spot of knowledge that is completely foreign to climatologists 🙁 (BTW, sorry to ask, is the term "climatard" acceptable on this board?). But we need to cut some slack here. I have seen professional engineers who were designing a temperature measurement system with a 10 Hz sampling rate, while the input had a 50% noise level with spectrum up to several GHz and was completely unfiltered. No wonder they frequently saw some "unexplained" jumps in readings of tens of degrees C (!!!) (when the system would switch to a different program).
"So if you do a min-max twice daily sampling of the Temperature at a point"
The daily sampling of data could be OK. It could be interpreted as taking a half-turn Poincaré map of the quasi-periodical trajectory (of weather). (Oh, again, this gobbledygook will clearly be unforgiven!) The trouble is that (a) they do not take the data precisely at the same time, and (b) most thermometers are of min-max type; they "remember" min and max, but do not remember when these min-max happened. My sympathy to glorious climate mathematicians who embark on such a mess. 🙂
BTW, among all digicams I had, all had a yellow focus-assist light. Go figure…
Cheers,
– Al Tekhasski """
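Al's min-max point is easy to illustrate: whenever the daily temperature cycle is asymmetric, the midpoint a min-max thermometer yields differs from the true (time-integrated) daily mean. A sketch with a made-up daily cycle, for illustration only:

```python
import numpy as np

# A hypothetical, asymmetric daily temperature cycle in degrees C.
t = np.linspace(0, 1, 10_000, endpoint=False)          # one day, fine grid
temp = 10 + 5 * np.sin(2 * np.pi * t) + 2 * np.cos(4 * np.pi * t)

true_mean = temp.mean()                        # integrated daily mean: 10.0 C
minmax_mean = (temp.min() + temp.max()) / 2    # what a min-max thermometer gives

print(f"true mean {true_mean:.2f}, min-max mean {minmax_mean:.2f}, "
      f"bias {minmax_mean - true_mean:+.2f}")
```

For this particular cycle the min-max midpoint runs about 1.7 C cold of the true mean; a perfectly symmetric sinusoidal day would show no bias at all, which is exactly why the asymmetry matters.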
The hell you say!! A yellow focus light? Well, you did say digicams; I must just be looking at the cheap stuff.
Well, yellow LEDs are really something else if they are the newest materials technology. The earliest yellow LEDs were high-phosphorus-composition GaAs(1-x)Px material doped with tellurium plus the isoelectronic trap, nitrogen. The effect of the nitrogen was to localise the carrier recombination at the site of the nitrogen atom in the lattice; and that localisation, as a result of Heisenberg, made the momentum uncertain enough to spread across the band structure diagram far enough to reach the energy gap minimum and follow an allowed transition. Since GaP is an indirect band-gap material (like silicon) while GaAs is direct-gap, GaP is an inefficient light source without the proper doping. It's the best everyday demonstration of the Heisenberg uncertainty principle I know of.
But today they have AlInGaP, which makes an extremely bright yellow (and red) lamp possible; some diodes have more than 50% external quantum efficiency, so half the input energy comes out as light, and only half or less is wasted as heat; kinda spooky really.
So I'll drop my objection to yellow focus lights, but note they were observed on digicams. I guess I will have to look at the focus light on my Panasonic digicam; never noticed it.
Never given "climatard" much thought; but I have thought that "climatism" should be a real word.
Anthony has relatively few “do it my way” rules. Keep it clean; mean ain’t clean. Try to keep within some reasonable boundaries of thread topic. Charles the Moderator (Chasmod) doesn’t like to have to wake up every now and then reach for another encyclopedia; but he’s a good chap.
We don’t do typo reconstructive surgery, since everybody has that disease.
Other than that this is THE place to be; the girls are all very pretty; lots of trolls you can make pets out of. The garden of Eden must have been like WUWT.
George

Al Tekhasski
September 24, 2010 1:54 pm

George, "So I'll drop my objection to yellow focus lights but notice they were observed on digicams. I guess I will have to look at the focus light on my Panasonic digicam; never noticed it."
I just checked my Casio. The light is more like "orange," although it is hard to conduct full spectral analysis by eye and assign a single moniker to a continuous function. I guess we can settle for "orange". 🙂

Ed Caryl
September 25, 2010 8:57 am

Walter Orr
Thank you for the on-site information. That confirms the location for Inuvik. The airport there is the source of the heat, and the situation is similar to that at Barrow, where the UHI has been studied. Cambridge Bay is another airport location, again like Barrow. Eureka has its own problems: every year the infrastructure is expanded and improved.

DirkH
September 25, 2010 10:20 am

jose says:
September 22, 2010 at 10:22 pm
"Smokey: Way to change the topic. But since you asked, the simple answer is "it's complicated". […]"
Beautiful! Pure Gavin Schmidt!

September 29, 2010 3:29 am

E.M.Smith says:
September 23, 2010 at 10:44 am
"Often the most 'warming' comes at the 'shoulder' months between seasons."
I looked at monthly trends on CET during decades where there was a yearly warming trend, and the months with the strongest warming trends are around the equinoxes. What came to mind is the increased connection with the solar wind at this time of year.

September 30, 2010 5:29 am

Interesting post, but you are mistaken when you write:

By GISS criteria, all the stations in the high Arctic are rural; there are no corrections for UHI.

although GISS should share responsibility for this mistake. When the GISS station data page returns “rural” for a station, this is based only on the original GHCN metadata, and does not take the nightlight radiance into account.
Barrow/W.Pos (radiance value 40) and Zyrjanka (radiance value 16) are both above the rural limit of radiance value 10, and so will also return a data set “after cleaning/homogeneity adjustment”. Kotzebue, Ral and Mys Smidta have radiance values of 10, just failing to make it out of the rural classification. Other high latitude stations with urban radiance values which are not included in your subset are:

Barter Island          ( 15, 70.13N, 143.63W)
Norman Wells           ( 16, 65.28N, 126.80W)
Igloolik,NW            ( 16, 69.38N,  81.80W)
Stykkisholmur          ( 23, 65.08N,  22.73W)
Akureyri               ( 23, 65.68N,  18.08W)
Bodo Vi                ( 25, 67.27N,  14.37E)
Stensele               ( 12, 65.10N,  17.20E)
Lulea Flygplats Sweden ( 71, 65.60N,  22.10E)
Karesuando             ( 12, 68.45N,  22.50E)
Alta Lufthavn          ( 47, 69.98N,  23.37E)
Hammerfest Norway      ( 28, 70.70N,  23.70E)
Haparanda              ( 41, 65.83N,  24.15E)
Karasjok               ( 18, 69.47N,  25.50E)
Rovaniemi              ( 29, 66.57N,  25.83E)
Kuusamo                ( 33, 65.97N,  29.18E)
Kandalaksa             ( 15, 67.15N,  32.35E)
Murmansk               (107, 68.97N,  33.05E)
Nar'Jan-Mar            ( 14, 67.63N,  53.03E)
Pecora                 ( 20, 65.12N,  57.10E)
Salehard               ( 13, 66.53N,  66.67E)
Mys Kamennyj           ( 11, 68.47N,  73.58E)
Dudinka                ( 65, 69.40N,  86.17E)
Hatanga                ( 12, 71.98N, 102.47E)
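The rural/urban screen described above (radiance value 10 as the GISS rural limit) is mechanical enough to sketch in code; the station/radiance pairs below are copied from this comment's own list, as an illustration:

```python
# Nightlight radiance screening, per the rural limit of 10 described above.
# Radiance values copied from the station list in this comment.
stations = {
    "Barrow/W.Pos": 40, "Zyrjanka": 16, "Kotzebue": 10, "Mys Smidta": 10,
    "Murmansk": 107, "Hatanga": 12, "Stensele": 12, "Dudinka": 65,
}

RURAL_LIMIT = 10  # radiance <= 10 keeps a station classified as rural

# Stations above the limit get the "cleaning/homogeneity adjustment" data set.
urban = sorted(name for name, r in stations.items() if r > RURAL_LIMIT)
rural = sorted(name for name, r in stations.items() if r <= RURAL_LIMIT)

print("urban (adjusted):", urban)
print("rural:", rural)
```

Note how Kotzebue and Mys Smidta, at exactly 10, just fail to leave the rural classification, matching the comment's description.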
October 8, 2010 9:21 am

And here I have plotted the raw and adjusted records for Barrow:
Barrow (raw and adjusted). Trend over full record slightly reduced by adjustment. Trend over last 30 years considerably reduced by adjustment.

October 8, 2010 12:51 pm

The image (which appeared in the preview) seems to have been lost. A link:
http://oneillp.files.wordpress.com/2010/10/barrow2.jpeg