@NOAA data demonstrates that 2016 was not the ‘hottest year ever’ in the USA

Today, there’s all sorts of caterwauling over the NYT headline by Justin Gillis that made it above the fold in all caps, no less:  FOR THIRD YEAR, THE EARTH IN 2016 HIT RECORD HEAT.

nyt-record-heat-2016

I’m truly surprised they didn’t add an exclamation point too. (h/t to Ken Caldeira for the photo)

Much of that “record heat” is based on interpolation of data in the Arctic, such as BEST has done. For example:

But in reality, there’s just not much data at the poles: there are no permanent thermometers at the North Pole, since the sea ice drifts, is unstable, and melts in the summer as it has for millennia. Weather stations can’t be permanent in the Arctic Ocean, so the data are often interpolated from the nearest land-based thermometers.

To show this, look at how NASA GISS shows the data with and without interpolation to the North Pole:

WITH 1200 kilometer interpolation:
2016-giss-1200km-interpolation

WITHOUT 1200 kilometer interpolation:

2016-giss-250km-interpolation

Here is the polar view:

WITH 1200 kilometer interpolation:

2016-polar-giss-1200km-interpolation

WITHOUT 1200 kilometer interpolation:

2016-polar-giss-250km-interpolation

Source: https://data.giss.nasa.gov/gistemp/maps/

Grey areas in the maps indicate missing data.

What a difference that interpolation makes.
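The published GISTEMP approach weights each station linearly, from 1 at zero distance down to 0 at the smoothing radius. A minimal sketch of that kind of distance-weighted interpolation (the station coordinates and anomaly values below are made up purely for illustration):

```python
import math

def distance_km(lat1, lon1, lat2, lon2):
    # great-circle (haversine) distance in kilometres
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def interpolate_anomaly(target, stations, radius_km=1200.0):
    """Distance-weighted anomaly at `target` (lat, lon); each station's
    weight falls linearly from 1 at zero distance to 0 at `radius_km`.
    Returns None when no station is in range (the grey map areas)."""
    num = den = 0.0
    for lat, lon, anom in stations:
        w = max(0.0, 1.0 - distance_km(target[0], target[1], lat, lon) / radius_km)
        num += w * anom
        den += w
    return num / den if den > 0 else None

# made-up coastal stations with made-up anomalies, reaching for the North Pole
stations = [(71.3, -156.8, 1.9), (80.0, 15.0, 2.4), (80.6, 58.0, 3.1)]
print(interpolate_anomaly((90.0, 0.0), stations, radius_km=1200.0))  # a value
print(interpolate_anomaly((90.0, 0.0), stations, radius_km=250.0))   # None
```

With a 1200 km radius the pole gets a value extrapolated from distant coastal stations; with a 250 km radius it simply stays grey, which is the difference the two maps above show.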

So you can see that many of the claims of “global record heat” hinge on interpolating Arctic temperature data where there is none. For example, look at this map of Global Historical Climatology Network (GHCN) coverage:

GHCN-paucity-stations-poles

As for the Continental USA, which has fantastically dense thermometer coverage as seen above, we were not even close to a record year according to NOAA’s own data. Annotations mine on the NOAA generated image:

 

2016-2012-conus-temperature

Source: https://www.ncdc.noaa.gov/cag/time-series/us/110/0/tavg/ytd/12/1996-2016?base_prd=true&firstbaseyear=1901&lastbaseyear=2000

  • NOAA National Centers for Environmental information, Climate at a Glance: U.S. Time Series, Average Temperature, published January 2017, retrieved on January 19, 2017 from http://www.ncdc.noaa.gov/cag/

That plot was done using NOAA’s own plotter, which you can replicate using the link above. Note that 2012 was warmer than 2016, when we had the last big El Niño. That’s using all of the thermometers in the USA that NOAA manages and utilizes, both good and bad.

What happens if we select the state-of-the-art pristine U.S. Climate Reference Network data?

Same answer – 2016 was not a record warm year in the USA, 2012 was:

2016-uscrn-annual-temperature

Source: https://www.ncdc.noaa.gov/temp-and-precip/national-temperature-index/time-series?datasets%5B%5D=uscrn&parameter=anom-tavg&time_scale=p12&begyear=2004&endyear=2016&month=12

Interestingly enough, if we plot the monthly USCRN data, we see a sharp cooling in the last data point, which dips below the zero anomaly line:

2016-uscrn-temperature

Source: https://www.ncdc.noaa.gov/temp-and-precip/national-temperature-index/time-series?datasets%5B%5D=uscrn&parameter=anom-tavg&time_scale=ann&begyear=2004&endyear=2016&month=12

Cool times ahead!

Added: In the USCRN annual (January to December) graph above, note that the last three years in the USA were not record high temperature years either.

Added2: AndyG55 adds this analysis and graph in comments

When we graph USCRN with RSS and UAH over the US, we see that USCRN responds slightly more to warming surges.

As it is, the trends for all are basically identical and basically ZERO. (The USCRN trend was exactly parallel with RSS and UAH (all zero trend) before the slight El Niño surge starting mid-2015.)

2016-uscrn-rss-uah

 

 


292 thoughts on “@NOAA data demonstrates that 2016 was not the ‘hottest year ever’ in the USA”

  1. One thing that I am confused on — I thought that NOAA added approximations of the Arctic temperatures based on land station data at the Arctic Circle. My assumption comes from the Thomas Karl report which changed the data adjustments. Most people focused on the difference between ocean intakes and bucket samples, but I thought that report also added the Arctic Ocean temperature assessments based on the land station data.

    Is this correct?

    • Could someone explain the Eemian interglacial period please, where CO2 was half today’s level and temperature was 8°C warmer than today? This lasted for 30,000 years. Makes this rubbish insignificant.

      • 8°C is an exaggeration. Parts of central and northern Europe were probably 2°C warmer, but modelling suggests it was actually cooler than present at lower latitudes (Kaspar et al. 2005, doi:10.1029/2005GL022456). And the cause? Not CO2 – the model used in that study included CO2 but assumed pre-industrial levels. But greenhouse gas levels are only one of the influences on climate: Over time scales of 1000s to 10000s of years, changes in earth’s orbital parameters – the Milankovitch cycles – induce oscillations in climate on at least the same scale as the projected warming from GHGs. Do climate scientists even suggest otherwise? During the Eemian, the orbit showed greater obliquity and eccentricity, hence the difference in influence on high versus low latitudes.

        But as you noted, these cycles are rather slow – that period of relatively warm climate in northern Europe probably lasted 10-15000 years. The reason why the current ‘rubbish’ is far from insignificant is that we are talking about current global warming trends over land (particularly at high northern latitudes) greater than 0.2 degrees per decade. These seem like such small numbers on a time scale of a few decades, but even over a century they add up to the same sort of temperatures as seen at the peak of the Eemian. Sea level during the Eemian was 6-9 metres higher than present. Such a large change is something we might gradually adapt to if it takes a few thousand years to creep up on us, but over a few centuries? We had better *hope* the climate scientists have it wrong!

      • Dave,

        Over time scales of 1000s to 10000s of years, changes in earth’s orbital parameters – the Milankovitch cycles – induce oscillations in climate on at least the same scale as the projected warming from GHGs. Do climate scientists even suggest otherwise? During the Eemian, the orbit showed greater obliquity and eccentricity, hence the difference in influence on high versus low latitudes.

        But as you noted, these cycles are rather slow – that period of relatively warm climate in northern Europe probably lasted 10-15000 years.
        ______________________________________

        At these timescales you and your descendants will be dead and gone.
        ______________________________________

    • I’m not sure just where those arctic circle stations would be positioned. I spent one summer in the late 1950s at a small Indian village [Beaver, AK] right on the Arctic Circle (well three mi S). The village was also on the banks of the Yukon River right in the middle of Alaska. In the middle of June there was 24 hr daylight and most days were very warm – mid 70s or above. It was also very dry and dusty for lack of rain.
      My point is that any temperature data from there would reflect the local topography (flat!) and weather patterns and would likely be very different from those conditions on the Arctic Ocean or on the arctic ice itself.

      • Here’s one of the articles in the study:
        http://www.nature.com/news/climate-change-hiatus-disappears-with-new-data-1.17700

        The resulting integration improves spatial coverage over many areas, including the Arctic, where temperatures have increased rapidly in recent decades (1). We applied the same methods used in our old analysis for quality control, time-dependent bias corrections, and other data processing steps (21) to the ISTI databank in order to address artificial shifts in the data caused by changes in, for example, station location, temperature instrumentation, observing practice, urbanization, and siting conditions.

        The point being that I was under the impression that the Arctic was now part of the temperature data set.

  2. I use FollowThatPage to track changes to http://data.giss.nasa.gov/gistemp/graphs_v3/Fig.D.txt (GISS’s U.S. surface temperature record). Just this morning I got an automatic email, alerting me to the fact that they’ve just changed it again.

    http://sealevel.info/2017-01-19_GISS_Fig.D.txt_change_alert.html

    From just looking at 1880-1882 vs. 2013-2015, it appears that GISS has cooled the past and warmed the present again, in the U.S. 48-State temperature record.

    • Other people are watching changes in GISS data. Every month, approximately 10 percent of the entire GISS historical station record are changed. This has been going on since 2008 at least. That is, out of a random sample of 40 stations, about 3 or 4 stations will show a complete revision of historical data, with various amounts over various years. Not all data is changed, and not by large amounts. Some missing data is backfilled, and some old data are deleted. The next month will have a different 3 or 4 stations with data that are altered from the previous month. Around December of 2012 there was a much larger revision, with many stations showing many more than the usual adjustments, with some recent year data being lost entirely. About 3 months later, some of the lost data was restored, and some other changes were reversed.
      Generally, the older data is cooled by some random amount, between 0.1 to 1 degree. The remaining stations show no changes in any old data, just the added monthly update.

      The GISS station monthly data were taken from the GISS page, saved in the original space separated text format.
      List of stations in the sample.
      Akureyri, Amundsen-Scott, Bartow, Beaver City, Byrd, Concordia, Crete, Davis, Ellsworth, Franklin, Geneva, Gothenburg, Halley, Hanford, Hilo, Honolulu, Jan Mayen, Kodiak, Kwajalein, La Serena, Lakin, Loup City, Lebanon MO, Marysville, Mina, Minden, Mukteshwar, Nantes, Nome, Norfolk Island, Nuuk, Red Cloud, St. Helena, St. Cloud, Steffenville, Talkeetna, Thiruvanantha, Truk, Vostok, Wakeeny, Yakutat, Yamba. Most station data were saved in June 2012. Only the 4 Antarctic stations are saved from 2008.

      • Every month, approximately 10 percent of the entire GISS historical station record are changed.

        I think they have a batch process that makes the adjustments but updates based on other stations; every time they add new stations they’d need to rerun it, and it would make slightly different calculations. I think Mosh mentioned BEST does something like this.

      • Zeke has said that all stations are constantly updated and historically readjusted for the data sets he works with, possibly on a daily basis?
        Not sure how this fits in with only 10% of GISS on a monthly basis. Will get back when I have time.

      • And not a single one of them will explain the algorithm they use to adjust for UHI in tiny little towns of 6,000 people. These guys pretend a town of 6,000 is rural and doesn’t receive any noticeable UHI. But I live in a 6,000-person town surrounded by farmland, and in the middle of winter we see from 3 to 12 degrees Fahrenheit of UHI in the middle of town compared to just 1 mile away outside of town. So the temp can be 0 degrees in the middle of town and it’s -12 just outside of town. Then just three days later the UHI might only be 3 degrees different. How in the world do Mosher and the boys adjust for this correctly? If they say they adjust by nearby measurements, well, aren’t those also affected by UHI? My town of 6,000 is considered rural, yet UHI is very noticeable on cold mornings.

      • BW said: “Generally, the older data is cooled by some random amount, between 0.1 to 1 degree. The remaining stations show no changes in any old data, just the added monthly update.”

        So then, every month the warming trend is adjusted to show a higher trend? It’s always up? Do you have a revision history to show this… since 2008? That would be around 120 consecutive adjustments always in the same (warming) direction.

    • Dave, you and AW – “we were not even close to a record year according to NOAA’s own data.” Changed and misreported. Not even close using NOAA’s own reports. Yearly update time.

      (1) The Climate of 1997 – Annual Global Temperature Index: “The global average temperature of 62.45 degrees Fahrenheit for 1997” = 16.92°C.
      http://www.ncdc.noaa.gov/sotc/global/1997/13

      (2) http://www.ncdc.noaa.gov/sotc/global/199813
      Global Analysis – Annual 1998 – does not give any “Annual Temperature”, but the 2015 report does state: “The annual temperature anomalies for 1997 and 1998 were 0.51°C (0.92°F) and 0.63°C (1.13°F), respectively, above the 20th century average.” So 1998 was 0.63°C - 0.51°C = 0.12°C warmer than 1997, giving 16.92°C + 0.12°C = 17.04°C for 1998.

      (3) For 2010, the combined global land and ocean surface temperature tied with 2005 as the warmest such period on record, at 0.62°C (1.12°F) above the 20th century average of 13.9°C (57.0°F). 0.62°C + 13.9°C = 14.52°C
      http://www.ncdc.noaa.gov/sotc/global/201013

      (4) 2013 ties with 2003 as the fourth warmest year globally since records began in 1880. The annual global combined land and ocean surface temperature was 0.62°C (1.12°F) above the 20th century average of 13.9°C (57.0°F). Only one year during the 20th century—1998—was warmer than 2013.
      0.62°C + 13.9°C = 14.52°C
      http://www.ncdc.noaa.gov/sotc/global/201313

      (5) 2014 annual global land and ocean surfaces temperature “The annually-averaged temperature was 0.69°C (1.24°F) above the 20th century average of 13.9°C (57.0°F)= 0.69°C above 13.9°C => 0.69°C + 13.9°C = 14.59°C
      http://www.ncdc.noaa.gov/sotc/global/2014/13

      (6) 2015 – the average global temperature across land and ocean surface areas for 2015 was 0.90°C (1.62°F) above the 20th century average of 13.9°C (57.0°F)
      => 0.90°C + 13.9°C = 14.80°C
      http://www.ncdc.noaa.gov/sotc/global/201513

      Now for 2016, and they report the average temperature across the world’s land and ocean surfaces was 58.69°F (14.83°C).
      https://www.washingtonpost.com/news/energy-environment/wp/2017/01/18/u-s-scientists-officially-declare-2016-the-hottest-year-on-record-that-makes-three-in-a-row/?utm_term=.31f17d68fcf5#comments

      So the results are 16.92 or 17.04 versus 14.52 or 14.52 or 14.59 or 14.80 or 14.83, using data written at the time.

      Thanks to Nick at WUWT for the original find.

      Which number do you think NCDC/NOAA thinks is the record high? Failure at 3rd-grade math, or failure to scrub all of the past? (See the ‘Ministry of Truth’ in 1984.)

      • Regarding the numbers posted by DD More on January 19, 2017 at 1:22 pm:

        1997 temperature 16.92°C, anomaly .51 C means the 20th century average as of then was 16.41 C. The 20th century average would also have to be 16.41 C in order for the 1998 anomaly to be .63 C and the temperature to be 17.04 C.

        But for 2010, 2013 and 2014 the 20th century average was stated as 13.9 C.

        So, I did a bit of fact-checking. I looked at the first link mentioned by DD More above, http://www.ncdc.noaa.gov/sotc/global/1997/13
        One thing I saw there was: “Please note: the estimate for the baseline global temperature used in this study differed, and was warmer than, the baseline estimate (Jones et al., 1999) used currently. This report has been superseded by subsequent analyses. However, as with all climate monitoring reports, it is left online as it was written at the time.”

        Anomalies are easier to determine than absolute global temperature, as mentioned in https://wattsupwiththat.com/2014/01/26/why-arent-global-surface-temperature-data-produced-in-absolute-form/
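The figures traded in this exchange are just anomaly-plus-baseline sums, so they are easy to check directly (values as quoted from the NOAA reports linked above):

```python
# (year, anomaly above the 20th-century average in °C,
#  stated baseline in °C, absolute temperature in °C as reported)
reports = [
    (2010, 0.62, 13.9, 14.52),
    (2013, 0.62, 13.9, 14.52),
    (2014, 0.69, 13.9, 14.59),
    (2015, 0.90, 13.9, 14.80),
]
for year, anom, base, reported in reports:
    assert round(anom + base, 2) == reported  # each report is internally consistent

# 1997: reported as an absolute 62.45 °F alongside a 0.51 °C anomaly
abs_1997_c = round((62.45 - 32) / 1.8, 2)       # °F → °C conversion
implied_baseline = round(abs_1997_c - 0.51, 2)  # baseline the 1997 report implies
print(abs_1997_c, implied_baseline)
```

The arithmetic shows the 1997 report implies a much warmer baseline than the 13.9°C used in the later reports, which is the discrepancy being discussed here.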

  3. Using USCRN, 2012 was an anomaly, not part of any multiyear trend.
    The USCRN data are reported in degrees Fahrenheit.
    Now, plot the entire monthly USCRN temperature record in kelvins, with statistical confidence intervals (error bars) of +/- 0.1 on a Y-scale with a range of 20 kelvins. Certainly no climate change in the entire USCRN record. Current temps are no different from the zero centered mean. The slope of the entire least squares trend line is not significantly different from a line of zero slope. Not that those facts are significant. USCRN stations came online over the period of 2002 to 2006, so only 10 years of full data.
    Also, Alaska and Hawaii should be excluded from the lower-48 average since they belong to different climate zones. Why fold polar and tropical climates into a larger continental temperate zone? It’s deceptive.

    When the USCRN reaches 40 full years of data, it will become a reasonable proxy for Northern Hemisphere temperate zone surface temperature trends.
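The least-squares trend line and "slope not significantly different from zero" test described above can be computed without any library. This sketch uses synthetic zero-trend "monthly anomalies" rather than real USCRN data:

```python
import random

def ols_slope(y):
    """Least-squares slope per time step, with its standard error."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    sxx = sum((x - xbar) ** 2 for x in range(n))
    sxy = sum((x - xbar) * (yi - ybar) for x, yi in enumerate(y))
    slope = sxy / sxx
    resid = [yi - ybar - slope * (x - xbar) for x, yi in enumerate(y)]
    se = (sum(r * r for r in resid) / (n - 2) / sxx) ** 0.5
    return slope, se

random.seed(0)
# 12 years of synthetic monthly anomalies: zero true trend plus weather noise
y = [random.gauss(0.0, 1.0) for _ in range(144)]
slope, se = ols_slope(y)
print(f"trend {slope * 120:+.3f} ± {2 * se * 120:.3f} °C/decade")
```

Note that real monthly temperature data are autocorrelated, so this naive standard error understates the true uncertainty of a short trend.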

    • When we graph USCRN with RSS and UAH over the US, we see that USCRN responds slightly more to warming surges.

      As it is, the trends for all are basically identical and basically ZERO. (The USCRN trend was exactly parallel with RSS and UAH (all zero trend) before the slight El Niño surge starting mid-2015.)

      • Degree C per decade trends if I am eyeballing the linear trends in AndyG55’s graph correctly, and if these linear trends are correct:

        USCRN: .46 degree C / decade
        UAH USA48: .18 degree C / decade
        UAH USA49: .3 degree C / decade
        RSSUSA: .12 degree C / decade

        This sounds like supporting a contention that satellite-measured lower troposphere temperature trend underreports surface warming.

        Something else notable: USCRN spikes more than the satellite-measured lower troposphere over the US does, and the biggest two spikes in USCRN are not from El Ninos. And in early 2010, US temperature was running below or close to the trend lines, while global temperature was at its second highest during the period covered due to an El Nino. Smoothing this graph by 5 months will improve correlation with global temperature and ENSO activity somewhat, but it will still show USA as not correlating very well with the world over a 12 year period. Also notable is that the US covers 1.93% of the world’s surface, which can explain why US temperatures do not correlate well with global temperatures over a period of only 12 years.

      • “Degree C per decade trends if I am eyeballing the linear trends”
        Yes, the trend for USCRN of 0.0468 °C/year = 4.68 °C/Cen is marked on the graph, and is very fast warming indeed. And a lot more than the satellites.

      • Only a TOTAL MATHEMATICAL ILLITERATE extrapolates a short term small trend out to a century trend.

        But it is Nick, so that is to be expected.

      • “Only a TOTAL MATHEMATICAL ILLITERATE extrapolates a short term small trend out to a century trend.”
        Only a mathematical illiterate would fail to understand that 0.0468 °C/year and 4.68 °C/Cen are simply two exactly equivalent ways of expressing one number, and it is the number written on your graph. The latter merely uses more familiar units.

        If you are sprinting at 20 miles per hour, it doesn’t mean you are going to run 20 miles in the next hour.
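The unit point in this exchange is trivial to verify; the number marked on the graph is the same rate whichever period it is quoted over:

```python
trend = 0.0468              # °C per year, the figure marked on the graph
per_decade = trend * 10     # the same rate expressed in °C per decade
per_century = trend * 100   # the same rate expressed in °C per century
print(per_decade, per_century)
```

Whether a 12-year rate says anything about the next century is, as the exchange shows, a separate question from the unit conversion itself.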

  4. ONLY in climate science do we have made-up data. If this were a new drug, the researchers would have their licenses to practice permanently revoked. I’ve done research and published. Making up data, even for my small study, would have been grounds for dismissal and revocation of three licenses to practice, plus being barred from public employment for life.

    • So, Pamela, if you think the data is “made up” … you will easily be able to point me to a study that agrees with you. Till then you are just blowing hot air, and we have enough of that.

      • Smudging and smearing a measurement taken in one area and assigning it to an unmeasured area is making data up. I took direct measurements along the 7th cranial nerve. So let’s say that because I only used 6 subjects, I decided to make it look like I had more. Since in general the brain stem portion of the 7th cranial nerve is similar from one human to another, I decided to assign an actual measurement to a “similar” human subject but didn’t actually measure that similar human. That would have been all I needed to do to end my career over and out. So why do climate scientists get a pass with the Arctic?

      • ” So let’s say that because I only used 6 subjects”
        Already you are sampling, and that is what is happening here. You actually want to know what effect your drug will have on the entire target population. CS want to know what the temperature is of the whole earth. You can’t test everyone, so you choose a sample. You propose that will tell you about the population, with known uncertainty. CS look at a sample of Earth temperatures, and propose that will tell them about the places unsampled, again with uncertainty. The processes are exactly analogous. You have sampling so built in to your science that you don’t even think about it. But you think it is improper in CS. Why?

        The scientists aren’t inflating their sample. They are doing what you do routinely – working out a population average from a sample average.

      • Pamela is correct. They are imputing data. Contrary to what Stokes claims, it is not sampling at all. Imputing data is serious business, with textbooks written on the subject. Spatial smoothing as a means to filter away high-frequency noise (and signal) is not necessarily wrong, but in this case smoothing for the purpose of imputing data is laughably effed up. Sorry, but given the kind of complexity evident in the Earth’s temperature in the samples, and the gigantic swathes of missing data at the poles, data is not something you can just in-fill with some half-baked Frankensteinian spatial smoothing kernel to give you some uniformly roasting anomalies. Total junk product.

      • Nick Stokes, I live in a 6,000 person town surrounded by farmland, and in the middle of winter we see from 3 to 12 degrees Fahrenheit UHI in the middle of town compared to just 1 mile away outside of town. So the temp can be 0 degrees in the middle of town and it’s -12 just outside of town. Then just three days later the UHI might only be 3 degrees different. How in the world do you get an accurate measurement, AND NOT MAKE UP DATA and get the correct temperature, (0 or -12) and 3 days later was it (6 or 9)? If you pick it by nearby measurements then you are making up data. Pamela is 100% right. And you might want to know UHI is pretty big in tiny rural towns even though you and your buddies want to pretend rural towns really don’t get UHI.

      • “How in the world do you get an accurate measurement”
        Who is claiming to create an accurate measure in your town? What they do want is an estimate of the average anomaly in your region, which will then go into the global average. Not the temperature, the anomaly. If your town is having warm weather, the countryside probably is too. UHI doesn’t change that.

      • 100% wrong, Nick Stokes. How are you supposed to know the anomaly when you don’t even know whether it was -12 degrees or 0 degrees? That is a huge difference. My town went from 1,000 people in 1900 to 6,000 today. Most likely the anomaly you think you are measuring is actually UHI. Maybe you should try doing some field work, taking actual measurements, and figuring out what UHI is doing to the actual record. So what is your algorithm to correct for UHI in towns of only 6,000 people? I hope you are correcting for that 12 degrees of UHI you are measuring.

      • “How are you supposed to know the anomaly when you don’t even know if it was -12 degrees or 0 degrees.”
        The anomaly is the difference between what you measured that month, and some average of the history of monthly readings at that site. Both of which you know. The temperature may be affected by UHI, altitude, hillside placement, whatever. The anomaly calculation is the same.

      • Yes, and in 1900 my town had 1,000 people and today it has 6,000. So you are comparing 2017, with an extra 12 degrees due to UHI, against 1900, which may have had an extra 3 degrees due to UHI, and saying the anomaly shows it’s warmer by 9 degrees even though the temperature was actually the same and UHI caused all the difference. How in the world are you possibly correctly adjusting for that?
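The anomaly calculation being argued over in this exchange is simple to sketch (the station history values below are hypothetical):

```python
def monthly_anomaly(history, month, value):
    """Anomaly: this month's reading minus this station's own long-term
    average for the same calendar month. (Toy baseline; real datasets
    use a fixed base period such as 1951-1980.)"""
    base = [t for m, t in history if m == month]
    return value - sum(base) / len(base)

# hypothetical station history: (calendar month, monthly mean temperature °C)
history = [(1, -8.0), (1, -11.0), (1, -9.5), (2, -4.0), (2, -6.0)]
print(monthly_anomaly(history, 1, -7.0))  # a warmer-than-usual January: 2.5
```

A constant site offset (altitude, or a town's fixed heat-island effect) cancels in this subtraction; an offset that grows over time does not, which is exactly the point of contention in the comments above.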

    • Nick Stokes January 19, 2017 at 9:46 pm
      You can’t test everyone, so you choose a sample. You propose that will tell you about the population, with known uncertainty. CS look at a sample of Earth temperatures, and propose that will tell them about the places unsampled, again with uncertainty. The processes are exactly analogous.

      No, you could tell us the average temperature of the earth based only on sampled sites. This is much different than extending the measurements from sampled sites to unsampled sites and then claiming that the average temperature is a weighted average of sampled sites + unsampled sites. Worse yet, the result is widely reported as if it is the actual average from sampling.

      The unsampled result is an “inferred” average temperature. You are using inference and assumptions to change the average of your sample into something other than the average of the sample.

      • “No, you could tell us the average temperature of the earth based only on sampled sites.”
        That is exactly what they do. They don’t actually create “unsampled sites”, though it wouldn’t matter if they did. GISS has a rather complicated way of averaging over regions (boxes) and then combining those averages. But the end effect, however you do it, is an area-weighted average of the original data.
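A minimal sketch of an area-weighted average — not GISS's actual box scheme, just cos(latitude) weighting of hypothetical zonal cells — shows how unsampled cells drop out:

```python
import math

def area_weighted_mean(cells):
    """Average gridded anomalies with cos(latitude) weights, since
    equal-angle grid cells shrink toward the poles. Cells with no
    data (None) are skipped, which silently assumes they behave
    like the sampled ones."""
    num = den = 0.0
    for lat, anom in cells:
        if anom is None:
            continue
        w = math.cos(math.radians(lat))
        num += w * anom
        den += w
    return num / den if den > 0 else None

# hypothetical zonal-mean anomalies; the 85°N cell has no data
cells = [(-45.0, 0.3), (0.0, 0.5), (45.0, 0.7), (85.0, None)]
print(area_weighted_mean(cells))
```

Skipping the empty cell is numerically identical to filling it with the average of the sampled ones — which is precisely the in-fill assumption this thread is arguing about.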

  5. The map of stations makes me wonder: is the distribution close enough to that of people for a simple arithmetic average of station temps to approximate a population density weighted average? Even if it doesn’t, the latter is surely an interesting statistic: the temperature people are actually feeling.

    • Which is precisely what happens every time the PDO goes into a warm phase. That goes double for the year or so following a major El Nino.
      As always, the trolls try to take a perfectly normal event and try to use it to prove their pet religion.

      • The Central American Seaway closed several million years ago, giving us the Isthmus of Panama (say about 3 M to 2.7 M years). Prior to that closure, with our 2 great oceans connected, might we expect the Arctic Ocean to have been under some different influences?

        The full age of Earth is thought to be 4.54 ± 0.05 billion years.
        That ± 0.05 billion is a wide range.
        None of these numbers is of much interest when writing about the ice floating on the Arctic Ocean.

      • You mean, not including those who were harmed by the ‘mitigation’ for AGW by those who think they know better?

      • Yes. Those poor people and other living things suffering the tyranny of the anti-life Green Meanies would surely vote for more fossil fuel pipelines and drilling if allowed, to include caribou and bears.

      • Great question, Chimp. Suffering can take on many forms ……

        My perusal of the internet identifies that between 140,000 and 300,000 birds are killed annually by wind farm turbine blade strikes in the United States. Based on the fines imposed on the Syncrude Oil Sands project in 2010, C$3 million (US$2.25 million – current exchange rate) for the very unfortunate death of 1,606 ducks and geese that landed in a tailing pond when the deterrent system was inoperative, by extrapolation wind farms should be paying between US$195 million to US$420 million in fines.

        ANNUALLY!!! I am sure that the majority of these birds are sparrows and swallows, however birds of prey who are on the “watch” or “endangered” list are killed trying to capture and eat the smaller birds.

        This is the cost of PERCEIVED climate change, killing wildlife with no consequence to the power generation companies (merely swept under the carpet), aside from forcing unreliable, overly-expensive renewable energy onto the consumer.

    • Everybody implies that the Arctic is melting. When you look at this animation, it becomes very clear that the older ice is not ‘melting’; rather it is flowing out of the Arctic between Greenland and Spitsbergen. This is due to Arctic Ocean currents, not some massive climate change.

      I don’t know what else you can “prove” with this animation, but it seems patently obvious that it is not supporting the claims.

      • Basically, it is only the Kara Sea that is struggling to build ice this year.

        There is a perfectly ordinary jet stream wobble with a high pressure system over the Kara sea causing a slightly “less cold” region.

        But that means there is a “more cold” anomaly drifting about elsewhere, as many in the USA and Europe have discovered.

        ie ITS WEATHER !!!!!

    • All the troll cares about is that there is less ice now; why that may be isn’t relevant.
      In its mind, less ice is proof that CO2 is going to kill us. Any other explanations just show that you reject science.

    • It must have been awfully disappointing for the boy who cried wolf when there really was a wolf and nobody believed him. Not to mention how disappointing it must have been for the sheep.

    • McClod.. Did you know that during the first 3/4 of the Holocene, Arctic sea ice was often zero in summer?

      McClod… Did you know that 1979 was up there with the EXTREMES of the LIA?

      McClod… do you have any historic perspective whatsoever?

    • Age of ice is utterly irrelevant to polar flow conditions because it’s a flow. THIS IS POLAR UNDERSTANDING 101: POLAR ICE CYCLES OUT and NEW ICE CYCLES IN to the point that the NORTH POLE is NEARLY ICE FREE at TIMES.

      You’re f****g INSANE if you thought that was important. Again: THIS is POLAR UNDERSTANDING 101: SPIN of the EARTH and total conditions often suck warm air up and over the ENTIRE NORTH POLE AREA and there has never been a time known when men were there, that this hasn’t been the case.

      Your own graphic PROVES the POINT: it’s a GIANT SPINNING BOWL with an OUTLET, and from year to year it’s simply a matter of W.I.N.D.S. whether ANY ice survives. Entire regions of the polar north are often seen either very lightly iced or not iced at all, only to fill in and never be seen that low for a decade or more.

      The notable thing about almost all climate freakers is the stunning lack of grasp on even fundamentals of scientific reality.

    • Must I continue to point out the obvious? At the peak of our current interstadial period I have no choice but to accept the null hypothesis: current warming patterns are to be expected under natural conditions. The low ice conditions are naturally occurring phenomena in an interstadial peak period. Unfortunately the lack of Arctic ice, even ice-free conditions, indicates imminent (likely within the next few thousand years) stadial slide. It is up to you to tell us why, based on previous paleodata, something has changed. And don’t go on about CO2. It is always high at the peak and always low at the trough. One more thing: don’t bring up minute changes in CO2 parts per million unless you can show that there is enough energy in the anthropogenic portion to stem the enormous cyclical periods of the past 800,000 years.

      • What has changed is the rapidity of the change. The probability that the warming of the past few decades is just natural variation is not zero and it’s not 100%, but it is lessening with each warmer-than-average year.
        I don’t think anyone would be too concerned if the Arctic took a few thousand years to thaw. It’s happening a bit quicker than that.

    • The Arctic Ocean was NOT 6C above normal in 2016.

      Do you know what would have happened had the Arctic Ocean been 6.0C above normal? The sea ice would have melted out completely in early June and would not have frozen back at all until November.

      These numbers are completely physically impossible and somebody needs to pay a price for such BS. I propose a 3 month salary package. Somebody should explain this to Trump.

    • This episode reminds me of the big Crack in the Larsen C ice-shelf which has been in the news recently.

      Do you know that this crack has been there for probably 100 years? The ice-shelf expands out over a mountain chain that runs below sea level halfway out on the shelf. As the shelf moves out, big pressure-ridge cracks form every 5 km or so along the mountain chain.

      As the ice-shelf moves out to sea, the cracks go along with it. There are at least 6 very similar cracks coming in right behind the big crack that is the subject of the current news.

      This big crack has moved 20 kms out to sea since 1984 and going by the way it moves out to sea, it probably formed at least 100 years ago.

      The shelf has actually grown by about 20 km in the last 32 years. That is the opposite of “melting”. It is growing. Yeah, it is going to produce an iceberg soon, since it has expanded so far out to sea. Why did no story produced by the “climate scientists” mention this?

      See for yourself, the area in question: the far-right crack is the one they are talking about, but there are at least 6 more coming behind it, and going by where they first form, it has taken at least 100 years for the crack in question to get to where it is.

      https://earthengine.google.com/timelapse/#v=-68.2413,-62.12917,6.717,latLng&t=0.14

  6. As for the Continental USA, which has fantastically dense thermometer coverage as seen above

    Until you zoom in, and then you see many are 100 – 200 miles apart.

  7. At first look the Arctic interpolation looks like pure BS. As do ALL their other data manipulations, which in almost every case conveniently cool the past and warm the present. Worse for them, in a private email the Chicken Littles were exposed as willfully desiring to manipulate the temperature record to further their leftist cause:

    ClimateGate email: Warmist Tom Wigley proposes fudging temperature data:

    2009: “If you look at the attached plot you will see that the land also shows the 1940s blip.. So, if we could reduce the ocean blip by, say, 0.15 degC, then this would be significant for the global mean … [Tom Wigley, to Phil Jones and Ben Santer] http://tomnelson.blogspot.com/2011/12/climategate-email-warmist-tom-wigley.html

    Lo and behold, after that email … THE “BLIP” WAS REMOVED!

    These ideologically driven doomsayers are guilty as sin. It was warmer in the 1930s. There’s no global warming.

    Their temperature record is pure garbage. So is their endless fear-mongering. Over and over again the top alarmists have been caught telling their acolytes that they all should offer pure lies in shameless fear-mongering:

    “We have to offer up scary scenarios… each of us has to decide the right balance between being effective and being honest.” -Stephen Schneider, lead IPCC author, 1989

    Their data is lies. Their scare-mongering about the future is lies.

    • Tom Wigley the Englishman is another fraud who very rapidly went to ground and hid out, awaiting the day when his grant scams and infrastructure-document falsification were uncovered. He and Jones and Hansen and Mann were among the inner 14 to 18 government employees who were enriching themselves using the scam when Al Gore’s movie came out. Their bluff was called, and the world began to examine the FRAUD THEY HAD BEEN PERPETRATING for over TWO DECADES related to COMPUTER FRAUD:
      getting GRANTS
      to use GOVERNMENT SUPERCOMPUTERS under their OWN OVERSIGHT during BUSINESS HOURS
      paying the RENT (electricity to run the computers) with the GRANTS

      POCKETING the REST.

  8. So what about HadCruT? They don’t do any infilling of areas where there are no stations, they just take an average based on places there are measures. What does that say?

    • HADCRUT uses CRUTEM data on land. But arithmetically, it is always interpolated. You have to calculate the spatial integral of a function defined by a finite set of values (which is all you’ll ever have). You can interpolate and integrate that, or do something else. It makes no difference: it always ends up as an area-weighted average.
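The point that any global mean ultimately reduces to an area-weighted average of a finite set of values can be sketched in a few lines. This is a toy illustration with an invented four-cell “globe”, not any index’s actual code:

```python
import math

def area_weighted_mean(grid):
    """Cosine-latitude weighted mean of a set of grid-cell anomalies.

    grid: dict mapping (lat, lon) cell centers in degrees to anomaly
    values. Cells with no data (None) are skipped, which is equivalent
    to assigning them the weighted mean of the remaining cells.
    """
    num = den = 0.0
    for (lat, _lon), anom in grid.items():
        if anom is None:
            continue
        w = math.cos(math.radians(lat))  # cell area shrinks toward the poles
        num += w * anom
        den += w
    return num / den

# Invented four-cell globe: equatorial cells carry twice the weight of 60N cells.
g = {(0.0, 0.0): 1.0, (0.0, 180.0): 0.0, (60.0, 0.0): 2.0, (60.0, 180.0): None}
print(round(area_weighted_mean(g), 3))  # prints 0.8
```

The real products differ in how they treat the `None` cells (infill from nearest stations, from the latitude band, or leave them out), but every one of them ends in a weighted sum like this.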

      • Nick
        Ever considered the elementary and scientifically sound option of acknowledging that we have no reliable data for the polar regions, and working only with what verifiably trustworthy data we actually have?

      • “Ever considered the elementary and scientifically sound option of acknowledging that we have no reliable data for the polar regions”
        You never have perfect data. And in practical continuum science, you have only a finite number, often small, of measurement points, and everywhere else, everywhere, has to be estimated. Usually interpolated. So the elementary requirements are:
        1. Use the best estimate you can, using nearby data where possible.
        2. Figure out how much uncertainty arises in the integral (average) from that estimation.
        And that is what they do. When GISS etc quote a 0.1C error, or whatever, that is almost entirely “coverage”. Which is the uncertainty of interpolation.

        The elementary and scientifically sound option is to read what they actually do, and why.

      • 1. Use the best estimate you can, using nearby data where possible.
        2. Figure out how much uncertainty arises in the integral (average) from that estimation.

        But we don’t have any nearby data at the poles, and as a result we cannot even estimate the uncertainties that arise! Neither 1 nor 2 applies for Arctic/Antarctic areas, and those are where most of the warming is found.
        Why not exclude them completely? Or at least provide 2 products, one of them with pole areas excluded?

      • Udar,
        “Why not exclude them completely?”
        You can’t exclude them and claim a global average. If you just leave numbers (eg grid cells) out of an averaging process, that is equivalent to assigning to them the value of the average of the others. You can see that that would give the same answer, and it becomes your estimate. And then you can see that it isn’t the best estimate, by far. I go into this in some detail in this post and its predecessor. It’s better to infill with the average of the hemisphere, or somewhat better, the latitude band. But best of all is an estimate from the nearest points, even if not so near. Then you have to calculate the uncertainty so that you, and your readers, can decide if it is good enough. But it’s your best.

      • Udar pointed out that the in-filling in the Arctic was made up entirely of large positive anomalies. That just doesn’t pass the eye test when you look at the high spatial frequencies (rapid changes in temperature across short distances) evident in the ‘raw’ data. Too few real data points to make infilling on that scale reliable.

      • And when the majority of your “trend” lies (in all meanings of the word) in that made up data, to claim it’s the “best estimate” doesn’t even pass the laugh test.

      • “You can’t exclude them and claim a global average.”
        Nick, I take your point that HADCRUT still always uses an interpolation function within the grid cells it actually includes data for. But it treats cells for which there is no data differently from BEST, GISS etc., in that it leaves these out altogether. Many of the missing cells are in land areas with low population, particularly at high latitudes. They don’t claim that their average is truly global. The satellite data confirm that it is these very areas that show the most warming, and that bias probably explains both why CRUTEM shows less warming than the other land surface data sets and why its trend creeps upwards as new stations do get added from those sparse regions. For me, the telling thing is that even with the missing cells, their data still shows 2016 to be another record.

      • You can’t exclude them and claim a global average.
        Well, I don’t really see how you can claim global coverage by including made-up data.
        I think that the answer to your dilemma of “global coverage” is very simple and pretty obvious.
        You just can’t claim global coverage period. Anything else is being dishonest.

        I understand your argument about why infilling is good, but you are missing a very big and very important point: infilling one unknown point among many known ones is not the same as infilling many unknown points from very few known ones.
        The problem is essentially undersampling: near the poles you are not getting enough data to sample above the Nyquist rate.
        In this case your results could be completely wrong, and you have no way of knowing if they are. I am not sure how you can claim that making up aliased data is better than not; as an EE with a lot of DSP experience, this goes against everything I’ve learned and done in my 30 years of work.
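The Nyquist point can be made concrete: sampled too sparsely, a short-wavelength pattern becomes indistinguishable from a much longer one. The wavelengths below are invented, chosen so the aliasing works out exactly:

```python
import math

def fast(lon):
    """Short-wavelength 'field': 36-degree wavelength."""
    return math.sin(2 * math.pi * lon / 36.0)

def slow(lon):
    """Long-wavelength 'field': 180-degree wavelength."""
    return math.sin(2 * math.pi * lon / 180.0)

# Sampling every 45 degrees is far coarser than the 18-degree spacing the
# fast wave would require, so its samples alias exactly onto the slow wave.
lons = range(0, 360, 45)
fast_samples = [round(fast(l), 9) for l in lons]
slow_samples = [round(slow(l), 9) for l in lons]
print(fast_samples == slow_samples)  # prints True
```

From the eight samples alone there is no way to tell which field was measured, which is the sense in which an undersampled infill can be completely wrong with no way of knowing.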

      • A global average is a made up figure anyway. So if you leave off the poles, does that mean it is a less or more made up average?

        Here is my take: 1) leave the poles off, 2) if the remainder of the world is warming, I’ll bet the poles are too, 3) if the rest of the world is cooling, then I’ll bet the poles are too.

      • Nick,
        You still don’t get it. If there is no data don’t extrapolate/interpolate or otherwise fabricate it – simply admit there is no reliable data. The problem with that approach is of course that one loses the ability to claim exceptionally high rates of warming.

        Even with the sophisticated plug-ins available, interpolating data from a complex picture file in Photoshop will give you a shitty result. And if the scientists in bioengineering I’m associated with were to extrapolate “results” from incomplete data sets and tell me they’re “hard data” they would be fired.
        Mind you, that’s the real world with peoples’ lives at stake, not climate “science”…

      • To all those taking a strangely skeptical attitude toward infilling data by interpolation (using e.g. kriging), I’d like to say you are all far from reality.

        Thousands of engineers working in several different disciplines (e.g. mining, road construction, or graphics support using Bézier splines) are confronted with the problem of having far less data than needed. And for all these engineers (me included), lacking the ability to interpolate would (have) mean(t) the inability to solve their daily problems.

        One of many hints to the stuff:
        http://library.wolfram.com/infocenter/Courseware/4907/

        *

        Moreover, many of you seem to think that we have no valuable temperature data in the polar regions to serve as infilling sources, just because there are so few of them.

        This opinion is imho valid solely for the Antarctic. For the Arctic I do not accept it, because we have on land about 250 GHCN stations within 60N-70N, about 50 within 70N-80N, and – yes – only 3 above 80N.

        Before you start laughing out loud at me and the opinion you might consider stupid, you should first take a close look at UAH6.0’s 2.5° tropospheric grid data in these Grand North latitudes.

        You would discover that the average trend for the satellite era (1979-2016) in the latitude stripe 80N-82.5N is 0.42 °C / decade, and that its ratio to the average trend for the three 80N-82.5N GHCN V3 stations (0.70 °C / decade) is about the same as at many places on Earth.

        *

        Last but not least, let me show you a graph depicting (again in the UAH context) how little data one sometimes needs for an amazing fit to what was constructed out of much more (I was inspired by a post written by commenter “OR” alongside another guest post here at WUWT).

        UAH’s gridware consists of monthly arrays of 144 x 66 grid cells (there is no data for the latitudes 82.5S-90S and 82.5N-90N). OR had the idea of comparing the trend computed for n evenly distributed, equidistant grid cells with the trend computed for all cells.

        Here you see plots for two selections of 16 and 512 cells, respectively, compared with that for all 9,504 cells:

        It is simply incredible:
        – with no more than a laughable 16 cells you perfectly reproduce many ENSO peaks and drops;
        – with 512 cells (about 5% of the set) you get a linear trend differing from the original by 0.04 °C per decade, and the 5-year running means overlap amazingly well.
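The subsampling experiment described above is easy to reproduce in spirit with synthetic data. The sketch below invents a 144 x 66 grid whose cells all share a common signal plus independent noise; it illustrates why a handful of well-spread cells can track the full-grid trend, and is not a rerun of the actual UAH analysis:

```python
import math
import random

random.seed(42)

MONTHS, NCELLS = 240, 144 * 66   # 20 years of a UAH-sized grid (9,504 cells)

# Shared "global" signal: a 0.15 C/decade trend plus an ENSO-like oscillation.
signal = [0.15 * m / 120 + 0.25 * math.sin(2 * math.pi * m / 45)
          for m in range(MONTHS)]

# Each cell sees the common signal plus its own independent monthly noise.
grid = [[signal[m] + random.gauss(0, 0.6) for m in range(MONTHS)]
        for _ in range(NCELLS)]

def grid_mean(step):
    """Monthly mean series using every `step`-th cell."""
    cells = range(0, NCELLS, step)
    return [sum(grid[c][m] for c in cells) / len(cells) for m in range(MONTHS)]

def trend_per_decade(series):
    """Ordinary least-squares slope, converted from per-month to per-decade."""
    n = len(series)
    xbar, ybar = (n - 1) / 2, sum(series) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(series))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den * 120

full = grid_mean(1)               # all 9,504 cells
sparse = grid_mean(NCELLS // 16)  # 16 evenly spaced cells

# Both estimates should land near the built-in 0.15 C/decade,
# the sparse one a little noisier.
print(round(trend_per_decade(full), 3))
print(round(trend_per_decade(sparse), 3))
```

The catch, per the earlier objections in this thread, is the built-in assumption that every cell shares the common signal: the fewer the cells, the more the estimate depends on that assumption holding.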

  9. Not sure it’s going to cool.
    But it does look more and more as though the current warming is a regional effect around the North Atlantic and the adjacent landmasses.
    Global warming it is not.

  10. Well, it is cold indeed in Paris these days (-3 to -4° at night), though note that it is not really abnormal. A few years ago, it tended to be a lot worse during the winter. Actually, olive trees bloom in Paris nowadays; they would never have survived fifty years ago. Somebody must have changed the thermometers.

    • Or perhaps the urban landscape of Paris has “changed” to include more release of waste heat from buildings and doorways and Metro vents….
      It couldn’t possibly be the most simple and obvious cause….it requires somebody to sneak around the city at night surreptitiously replacing all the thermometers.
      Perhaps all the miscreants did was secretly push the glass bulb up a bit higher on the scale.

    • Paris was indeed colder 50 years ago, despite over 20 years of increasing CO2. The air was dirtier, too, further adding to the natural cooling cycle.

      Warmer temperatures today have little if anything to do with even more plant food in the air. Olives too benefit from more CO2.

    • 50 years ago was near the end of the last cold phase of the PDO/AMO, so it’s hardly surprising that it was colder then.
      How much has the population of Paris increased over the last 50 years?
      A couple million new people is worth a degree or two all by itself. Not to mention all the heat given off by all of the contraptions that didn’t exist 50 years ago.

    • Francois
      “A few years ago, it tended to be a lot worse during the winter. Actually, olive trees bloom in Paris nowadays, they would never have survived fifty years ago. Somebody must have changed the thermometers.”

      Evidence please. Olive trees can survive down to 15 F (-9.4C) for a limited amount of time. The great frost of 1956 killed off a lot of the olive trees in France (-14C), but that was one year. The average low temperatures in Paris for the past 70 years (that I checked), according to the Paris/Orly Airport records, were perfectly fine for olive trees… with the exception that your summers could be warmer and longer for them to really thrive.

  11. Why is it so hard to figure out that during recovery from a cold period (LIA), there will be a continuing series of years with “record heat.” This is even more obviously true when the measurement record begins just at the end of the cold period.

    Dick Lindzen pointed that out years ago, and it’s been studiously ignored by the Justin Gillises of the world ever since. Ignorance of something that obvious can only be willful.

    These people are beyond stupid, and well into reckless.

    • The year is 2017; the “little ice age” ended circa 1860, roughly a century and a half ago. Do read the papers, stay informed…

      • It took hundreds of years for the warmth of the MWP to decline to the depths of the LIA in the 19th century, known as the coldest period since the ice age.

        How long do you think it will take to warm back up again?

      • Paul,

        Your point is valid, but the depths of the LIA were in the 17th and early 18th centuries, during the Maunder Minimum.

        The Modern Warm Period is still cooler so far than the Medieval WP. We are well within normal limits.

    • I have wondered if somebody has done an analysis of the rate of record years. With a steadily climbing temperature from the LIA one would expect that record high years would occur reasonably frequently. So has the frequency of record high years gone up???

      • BCBill asked:

        I have wondered if somebody has done an analysis of the rate of record years. With a steadily climbing temperature from the LIA one would expect that record high years would occur reasonably frequently. So has the frequency of record high years gone up???

        Well, over at … http://www.climatecentral.org/gallery/graphics/record-highs-vs-record-lows … , we find this:

        … which seems to indicate that the ratio of record highs to record lows has, in fact, gone up. I think this RATIO might be what you are asking about.

        Even so, alarmists who attribute the cause to humans are perverted wishful thinkers. Humans are merely seeing, during their meager life-spans, a shift in an upward trend that started long before human record-keeping of such things began.

        Sorry, alarmists, I know you want it to be OUR fault, so that we can have some hope of FIXING it and CONTROLLING it, but we cannot take credit for the cause, cannot take responsibility for the fix, … must face the reality that nature is controlling US, and we better adapt, live life as intelligently as we can, do our good deeds for the RIGHT reasons, and stop all this hysteria over what has been going on long before our little brains (with big egos) came into the picture.

      • @ R Kernodle…What my eyes see when looking at that graph is the clear sign of warming as the dominant trend up to the mid 1940s. Then a cooling trend sets in and lasts to around 1976/77. Lastly, the next warm trend takes off at the end of the 1970s with the warm temp ratio dominating up until close to the end. That looks cyclical. Note that 2008/09 dropped close to a cool temp ratio. I would expect the next several decades to favor the cool temp ratio, if the pattern is cyclical.

      • Robert, thanks for getting back to me. That isn’t quite what I was looking for. If you imagine temperature rising in a straight line, then each year is a record. Since yearly temperature bounces around, some years are records and some are not. There is much ado about 2015 being a record and 2016 being a record and every year being a record. But my question is more along the lines of: are we having more record-high years now than we had in the past? Given that the temperature has been rising pretty steadily since the LIA, I imagine there are quite a number of record years out there. Has the frequency of record years increased? I would like to know.

      • I left this note for Anthony but re-posted it here in case anybody can shed any light. This is just more clarification of what I mean about the frequency of “hottest years ever”. With all the hoopla about the hottest year ever and the three hottest years in a row, or whatever, I have been wondering how often we normally get the hottest year evah! For example, 1854 might have been the hottest year evah, and 1855 might have been the hottest evah also, and then 1856 might not have been. Since the temperature of the earth has been going up more or less constantly since the LIA, there must have been quite a few hottest years evahs. Is there some way to determine this? It doesn’t seem like it has been done as searching for it is turning up nothing. This could be a very nice bit of observational data to release about now. People like to say things like the 10 hottest years recorded happened in the last 15 years (or whatever the actual numbers are), but what if we could say the same thing of the 1930s? Did the 10 hottest years ever up to 1940 happen in the 1930s? If the earths temperature has been constantly rising for a while, it would be fairly normal to have a series of hottest years ever in close proximity.

      • BCBill January 19, 2017 at 4:43 pm wrote: “Robert, thanks for getting back to me. That isn’t quite what I was looking for. If you imagine a straight line of temperature rising then each year is a record. Since yearly temperature bounces around, some years are records and some are not. There is much to do about 2015 was a record and 2016 was a record and every year is a record. But my question is more along the lines of are we having more record high years now than we had in the past?”

        No, we are not having more record highs now than in the past. Here’s a chart for you (thanks to Eric Simpson):

      • Forgive my density, BCBill, I’m still not quite clear on what specifically you might be asking for, but just to attempt to shed more light, I was looking at the following graph:

        … and what I see is a roughly cyclical progression, from warm to cooler, to warm, where each warm stretch seems to be longer and hotter, … which seems to imply that there IS an overall heat build up that is happening faster.

      • … and if this IS a cycle, then it looks like we might be teetering on the edge of another drop, and this particular drop could possibly be one of the biggest drops we have seen.

        If that big drop DID become apparent over the next few years, then I think that this would drive the final nail into the coffin of the warming claim pushed over the past few decades.

        I think that this would remove human activity, at its present or foreseeable level, from the equation. I also think that this would vindicate carbon dioxide, not only as a forcing agent, but as any significant agent at all.

    • Not to mention a basic fact from sampling statistics, which all but promises you ‘record’ measurements the more measurements you make: any individual extreme value is improbable, so it takes many, many measurements before you finally hit it, and the longer the series runs, the more records accumulate along the way.
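BCBill’s question upthread has a clean statistical baseline: for independent, trend-free years, the expected number of records in n years is the harmonic number 1 + 1/2 + … + 1/n (about 5.6 for 150 years), and any steady warming pushes the count above that. A quick simulation sketch; the trend and noise figures are invented for illustration:

```python
import random

random.seed(1)

def count_records(series):
    """Count the 'hottest year evah' entries in a series."""
    best, records = float('-inf'), 0
    for x in series:
        if x > best:
            best, records = x, records + 1
    return records

def simulate(trend_per_year, years=150, runs=2000):
    """Mean number of record years over many simulated histories."""
    total = 0
    for _ in range(runs):
        series = [trend_per_year * t + random.gauss(0, 0.15)
                  for t in range(years)]
        total += count_records(series)
    return total / runs

print(round(simulate(0.0), 2))   # no trend: expected count ≈ 5.6 (H_150)
print(round(simulate(0.01), 2))  # steady 0.01 C/yr warming: far more records
```

So a rising frequency of record years is expected under any sustained warming, whatever its cause; the counting exercise by itself does not attribute anything.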

  12. Neither satellite record supports the NYT. And the surface GAST anomaly isn’t fit for purpose no matter who produced it: on land, UHI and the microsite issues revealed by the surface stations project, plus the global lack of coverage discussed in this post. Dodgy SST pre-ARGO, hence Karlization. Past cooled and present warmed repeatedly since 2000, provable by simple comparisons over time. And the changes are greater than the previous or present supposed error bars. NYT propaganda.

    • ristvan
      “Neither satellite record supports NYT”
      Plenty of satellite records do. UAH V5.6 says hottest by 0.17°C. RSS 4 TLT version has it hottest by 0.17°C. Their news release starts out:

      Analysis of mid to upper tropospheric temperature by Remote Sensing Systems shows record global warmth in 2016 by a large margin

      • So, refer to some obsolete stuff. Then emphasize RSS, whose CTO Mears disavowed it before he changed it, as predicted by Roy Spencer. Not good form, Nick. And you know it.
        What about the stunningly sharp now 10 month temp decline since the El Nino 2015-16 peak? Got a reply for that natural variation?

      • Rud,
        A bit of cherry-picking there. You like UAH6 because it’s shiny and new. And you like RSS V3.3 because, well, because. Even though it has carried a “use with care” label for much of the year, and RSS’s latest report reiterates why they think it is wrong.

        The fact is that the strength of satellite data is thin. It rests now on one index which is in more or less complete disagreement with the still produced prior version.

      • Well, yeah, Nick. It’s an El Nino. It goes up, and then it comes down. Why are you investing anything in it for the long term?

      • Nick, are you suggesting the satellite data are not valid? If they are unreliable then they are not valid. So is that your point? We should assign a low credibility weight to the satellite data? If so, can you elaborate?

      • Bart,
        “But, when it comes down, the pause becomes apparent again”
        The dive in Dec brings it to the pause mean. To bring the trend down it would have to spend time below that equivalent to the amount it has spent above in the last year. About a year around zero should do it. But historically, that isn’t seen. I think that dip won’t last.

        RW,
        “We should assign a low credibility weight to the satellite data?”
        If you really need to know the temperature in the troposphere, they are probably the best we have. But beyond that, yes. You need only look at UAH’s release post to see what goes into the sausage. Or listen to Carl Mears, the man behind RSS:
        “A similar, but stronger case can be made using surface temperature datasets, which I consider to be more reliable than satellite datasets (they certainly agree with each other better than the various satellite datasets do!).”

        But you can quantify this just by looking at the changes to GISS vs UAH, which I do here. You have to down-weight a measure if it says very different things at different times.

      • “To bring the trend down it would have to spend time below that equivalent to the amount it has spent above in the last year.”

        That is why trend lines are not the be-all-end-all of analysis tools. In actual fact, if the temperature goes back to what it was before the El Nino and stays there, then the El Nino blip should not figure in your conclusions.

        Trend lines are primarily a method of data compression, not a method of divining truth – it is more compact to provide an offset and slope than it is to present a chart. Your brain can determine patterns that trend analysis does not convey. Don’t let the machinery do your thinking for you.

      • Bartemis January 20, 2017 at 5:58 am
        “To bring the trend down it would have to spend time below that equivalent to the amount it has spent above in the last year.”

        That is why trend lines are not the be-all-end-all of analysis tools. In actual fact, if the temperature goes back to what it was before the El Nino and stays there, then the El Nino blip should not figure in your conclusions.

        Using that logic you should also eliminate the 1998 El Niño which would make the claim of a ‘pause’ difficult to sustain.

      • To be consistent, you would have to eliminate both El Ninos and La Ninas.
        This has been pointed out to you before.

      • Nick, thanks for the link. So UAH makes adjustments which result in slower (recent) depicted warming.

        Considering, though, the apparently high sensitivity (spikes) for the past two super El Niños, 97/98 and 15/16, I doubt that the satellite measurements are invalid. Compare the percent increase from truncated baselines for those two spikes to the percent increases in other measures sensitive to the El Niños. That would help test validity: are UAH and RSS appropriately sensitive to big El Niños? If yes, then A+; if no, well, then perhaps they are picking up something in addition to temperature that is highly correlated with big El Niños.

    • In Canada we once had a politician who asserted that unreported crime was skyrocketing. He had a little trouble coming up with the data to support his claim. Surely the unreported temperatures are skyrocketing?

  13. “2016 was not a record warm year in the USA, 2012 was”

    You do realise that the USA is only 1.9% of the world’s surface area, don’t you? I fully agree that there are imperfections in the global temperature data sets. The surface sets are inhomogeneous and in some places poorly sampled, either in space or over time, and thus require various assumptions: either extrapolation (e.g. BEST, NOAA, GISS) or simply omitting areas that aren’t covered from the averages (HadCRUT). The satellite sets require large adjustments, cross-calibration and careful modeling to account for drift, orbital decay and the limited life span of individual instruments. But there is still a reason why each of these datasets at least attempts (unlike the statement above) to take a global view of climate.

    • “assumptions”, meaning no data is available so some is created to fill in. I.e., not really data.

      The statement clearly refers only to the USA. What part of “IN THE USA” do you not understand? Global was not part of the statement. Contrary to popular belief, everything does not always have to be “global”. Except to a few true believers, it seems.

      • Reminds me of one of my favorite Dilbert strips. Dilbert explains that one of the QA tests is flawed and asks if he can fake the data, to which the pointy-haired boss replies, “I didn’t even know data could be real.”

  14. Major headache, (minor head injury…slipped on some damn global warming that accumulated on the street) so could someone explain/clarify/validate:

    The L-OTI Anomaly with interpolation is 0.82 and the L-OTI Anomaly without is 0.73. That’s a difference of only 0.09.
    Can they actually measure to that accuracy?
    What is the error margin on these “estimates”?

    And if those two numbers (0.73 and 0.82) are the 2016 “average anomaly” with and without interpolation, relative to the 1951-1980 average, then the implied rise works out to 0.020-0.022 per year, or 0.20-0.22 per decade… so where the crap does Robert Rohde get a 1.5C “trend”??? Do these people know the difference between temperature trends and temperature anomalies, and between trends in anomalies and trends in temperatures?

    • Aphan January 19, 2017 at 12:43 pm
      My sympathies; mine was the week before Christmas. Sunday morning, no snow but a very cold windstorm (n.w. AZ). I did not go to the emergency room at first; when I did, they took a bunch of gravel out of my forehead. It hurt for a few weeks. Nice scar in the making.

      michael

  15. This was also in the article…
    … Scientists have calculated that the heat accumulating throughout the Earth because of human emissions is roughly equal to the energy that would be released by 400,000 Hiroshima atomic bombs exploding across the planet every day…

    How on earth can these people distort the facts, spin & mislead the reader so blatantly?

  16. Really appreciate the work done, but you only show data from December. What about every other month of the year, or is December the outlier with the “highest overestimation” of the anomaly?
    A dataset for every month would better prove your point.
    Another question: what would the global mean show with no interpolation at all?

    [try reading better – mod]

  17. Once again the Arctic Ocean shows a hot spot over the Gakkel Ridge, south of the Nansen Basin, where in recent history there have been submarine lava flows creating a vast new volcanic province similar to the Deccan Traps and the Siberian Traps. Once again submarine volcanism is totally ignored (disregarded?) as a significant source of heat input to the bottom of the Arctic Ocean.

  18. I keep hearing on the news that the last 3 years have each set a new record for “hottest year evah”, but that just doesn’t make sense to me. We know that 2016 was warmer due to the El Niño, but it was only barely hotter than 1998 according to every article I have read so far. Looking at all the charts, 2015 and 2014 don’t even seem to come close to setting records above 1998, even with all of the manipulations to cool the past. Am I missing something here?

    • “Am I missing something here?”

      You won’t miss anything if you look at a proper chart like the UAH satellite chart which shows 2016 as barely hotter than 1998.

      The surface temperature charts have been manipulated to remove 1998 as the hottest year, and to make it appear that it is getting hotter and hotter every year so NOAA/NASA can claim it is the “hottest year ever” each year, like they are again doing this year.

      They are actually correct with regard to 2016, but incorrect with regard to any other year between 1998 and 2016. 2016 was one-tenth of a degree hotter than the hottest point in 1998, but 1998 still holds second place (actually a tie with 2016, if you want to get technical).

      Anyway, your instincts are correct, 1998 is hotter than every subsequent year but 2016.

  19. As for US temperatures, we got a double top from 2006/2012, then a declining trend until now. As mentioned, the US station coverage is way better than most of the rest of the world’s, so the decline in US temperatures is probably also happening worldwide, notwithstanding the pure garbage on the Arctic.

    Further, the idea that the US data should take precedence is even more compelling when you look at the 1930s, when the rest of the world had *very* sparse station coverage.

    In 1999, before radical data manipulations, NASA data made it clear that it was hotter in the USA in the 1930s:

  20. And still the focus is on a single manufactured figure. I think this is entirely backwards. Start by looking at what the temperature record shows about trends at each location. Then show the distribution of trends.

    The manufacture of the single figure throws away almost all information. It is no good then looking at it through a magnifying glass trying to discern meaning.

    • Forrest Gardener nailed it.
      A global average temperature is about as useful as a global average telephone number.

      Separate the locations – and further, separate the TOB (time of observation).
      Chart like with like unless you want to make mud.

    • You got it. If the globe is warming then it shouldn’t take but a few thermometers around the globe to show it. Using fake, interpolated, imputed data in order to obtain a “global” figure is overdoing it.

  21. So we proved that 2014 wasn’t the hottest year. And then, no, it wasn’t 2015, and now, no it wasn’t 2016. So what was the hottest year?
    I guess I’ll be told 1934.

    • Nick, you let yourself down when you pre-empted the answer you might receive.

      Where I live some years are wetter or drier but the temperature pattern doesn’t seem to change all that much and the quest for a hottest year is pretty meaningless.

      What makes you think that a single manufactured figure can reasonably be representative of the entire earth’s surface? And what makes you think it can be given to two or three decimal places? And what makes you think that trends in that manufactured figure have any physical significance?

      Weather is local!

      • Forrest,
        “What makes you think that a single manufactured figure can reasonably be representative of the entire earth’s surface?”
        This is the third year that a record is announced, and suddenly all sorts of reasons why this year’s number can’t be believed. The reasons (like station gaps here) would apply any year, but suddenly become pressing when the temperature is up.

        But yours is the most comprehensive – we shouldn’t talk about global temperature at all! Yes, that has been popular. There is no way of knowing whether the Earth is warming, so it isn’t a problem. But then, WUWT has for ten years been talking about global temperatures. Grumbling about untidy stations, showing solar effects, predicting cooling. What would WUWT have to talk about if there was no global temperature?

      • Nick-
        “This is the third year that a record is announced, and suddenly all sorts of reasons why this year’s number can’t be believed. The reasons (like station gaps here) would apply any year, but suddenly become pressing when the temperature is up.”

        We’ve pretty much talked about the “records” and all the reasons why the numbers can’t be believed for a very long time. (pppppsssttt…you know Anthony has spent many years studying the station gaps and siting issues) But then YOU contradict yourself beautifully in the next statement: “But then, WUWT has for ten years been talking about global temperatures. Grumbling about untidy stations, showing solar effects, predicting cooling.” (so see…..all sorts of things have been pressing here at WUWT)

        “What would WUWT have to talk about if there was no global temperature?”

        Maybe recipes, or politics, or what Nick Stokes is doing for a job these days, since there’s no global temperature and climate science is an exacting, professional field devoid of conflict and filled with logic and reason?

      • Nick, you seem to be having trouble focusing at the moment and drift off into exceptionally weak rhetoric at the drop of a hat.

        Just for you, here are my questions again:
        1. What makes you think that a single manufactured figure can reasonably be representative of the entire earth’s surface?
        2. And what makes you think it can be given to two or three decimal places?
        3. And what makes you think that trends in that manufactured figure have any physical significance?

        Have a go at those. Should be easy!

      • Nick,

        Hardly all of a sudden. Skeptics have said that the antiscience, antihuman works of fantasy by NOAA, GISS, HADCRU and BEST were packs of lies for decades.

      • Gloateus,
        “were packs of lies for decades”
        Actually, they haven’t. It’s become a lot more shrill recently. But what we never see is any attempt by sceptics to calculate an average themselves. Unadjusted data is readily available. It isn’t hard.

        Actually, I shouldn’t say never. Back in 2010, Jeff Id and Romanm made a valiant effort. I’ve continued using some of their methods. And then, they just ended up getting the same results as everyone else.

        And of course the recent sceptic scientific audit of the indices just disintegrated. Nothing to report.

    • Nick,
      For the sake of argument, let’s assume that the global temperature anomaly as currently calculated is a valid, useful metric. Why should the average person really care what the hottest year was? What does that, by itself, tell us, other than satisfy idle curiosity? I’m serious. What is the justification about all the hand-wringing over record this and record that when talking about the natural world?

      • Paul,
        Individual years don’t mean much. But our future is made up of yars. It’s a reminder of where we are going. 0.1C here, 0.1C there and soon enough you’re talking about real warmth.

      • Individual years don’t mean much. But our future is made up of yars. It’s a reminder of where we are going. 0.1C here, 0.1C there and soon enough you’re talking about real warmth.

        Except there’s evidence showing CO2 has little effect on minimum temperatures, so we’re experiencing almost entirely natural climate.

      • Nick said:
        “Paul,
        Individual years don’t mean much. But our future is made up of yars. It’s a reminder of where we are going. 0.1C here, 0.1C there and soon enough you’re talking about real warmth.”

        Our past is made up of “yars” too. 4.5 billion yars (according to scientists). And all of the empirical evidence screams that Earth’s repeated patterns of behavior have been…0.1C here, 0.1C there, and soon enough an ICE AGE ends (like the one we are currently still living in) and there is a nice, warm, thriving planet for everyone (well…except nowadays just for people stupid enough to build on coastlines that have repeatedly been submerged by this planet). And then the cooling returns. Surely you aren’t an empirical evidence science denier…..???

        But let’s suppose that humanity CAN generate some “real warmth” with emissions from fossil fuels. That might come in handy if Earth decides it’s time to glaciate again, ya think?

      • “Individual years don’t mean much.”
        On that much we can agree. I would go further, though, and say they don’t really mean anything. Which is why all the focus and headlines about “record” heat are just pure PR nonsense.

        “It’s a reminder of where we are going.”
        No, individual years tell us nothing about where we are going, record or not. It is entirely possible to have a record cold year even during a warming trend, and vice versa.

        “[+]0.1C here, [+]0.1C there and soon enough you’re talking about real warmth”
        Only assuming that all the other non-record years are neutral or positive. But they aren’t. So even if three or four of the last 10 years were record highs, it still does not mean anything. The others could all offset them for a trend of zero.

        So all this “n of m years had record high temperatures” is just sensationalist nonsense. And every scientist should denounce it as such. I can excuse the media, but not the climate science community that promotes this crap.
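The offsetting point lends itself to a quick numerical illustration. Below is a minimal, purely hypothetical sketch (the anomaly values are invented, not climate data): a series can set five record highs in a row and still have a least-squares trend of zero over the full period.

```python
# Hypothetical anomaly series: a symmetric "hump". The first half sets
# five successive record highs; the second half offsets them exactly.
series = [0.0, 0.1, 0.2, 0.3, 0.4, 0.3, 0.2, 0.1, 0.0]

def count_records(y):
    """Number of new record highs in the sequence."""
    best, records = float("-inf"), 0
    for v in y:
        if v > best:
            best, records = v, records + 1
    return records

def ols_slope(y):
    """Ordinary least-squares trend per time step."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

print(count_records(series))  # 5 record years
print(ols_slope(series))      # trend is (numerically) zero
```

So a run of record years, taken by itself, says nothing about the trend over the period.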

      • Nick,
        Yes, 0.1C per decade here, 0.1C per decade there. If we follow the current trend we will experience some warmth, maybe 1-1.5C, in a hundred years. Oh my!

        Of course, when the 95% confidence level is +/- 5C then it is just as likely we will experience cooling. My bet is the null hypothesis, more of the same cyclical natural variation.

    • Why don’t you satisfy me you’re capable of even solving for the temperature of air by telling me the name of the law of thermodynamics written for solving temperature of atmospheric air.

      Face it: your gurus got caught processing fraud. Mann/Jones/Hansen with their ”it’s a whole new form of math” that turns out over and over to be utterly worthless spam designed to steal grant money.

      Have you ever worked in gas chemistry in any way, at any time? I think everybody on this board knows the answer to that. You don’t have any school in atmospheric chemistry. You don’t have any school or work in atmospheric radiation or for that matter, radiation physics of any kind.

      How do I know that? You continue to claim you think the basic science of AGW is real science.

      If you think it’s real, then show us all here, – atmospheric science professionals and amateurs alike – that you’re atmospheric chemistry and radiation physics competent.

      Tell us the name of the law of thermodynamics to solve the temperature of a volume of atmospheric air, gas, vapor, etc.

      Tell us the equation and tell us what each factor means.

      Till you can competently do that you’re another fake on the internet claiming to understand something there’s no way you can,

      when you can’t even name the law of thermodynamics governing the field.

      And oh yes govern it, it does. It’s what gives the world enforceable legal and scientific certification standards that make the entire modern internal combustion, refrigeration/furnace and many other fields even possible.

      I’ll wait, you think up some lie.

      • “Why don’t you satisfy me you’re capable of even solving for the temperature of air by telling me the name of the law of thermodynamics written for solving temperature of atmospheric air. “
        Because this marks you as a crank, and I’m not interested.

    • As per my post above, when the earths temperature has been rising more or less steadily for more than a hundred years, warmest years ever are pretty commonplace. Even multiple warmest years ever in a row are probably commonplace. I have taken to measuring the snow depth in my yard as the snow is falling. I get a new record every few seconds. Warmistas are always trying to re-frame the debate from whether or not there is a large human influence in the current warming to there is warming so it must be caused by humans. How many warmest years in a row happened in the 1930’s? Anybody know??
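The question of how commonplace records should be is actually quantifiable: for a trendless series of independent random years, the expected number of record highs in n years is the harmonic number 1 + 1/2 + … + 1/n (about 5.2 for a century), and any warming trend inflates that count. A seeded, purely synthetic simulation sketch (the 0.02-per-year drift is an arbitrary choice, not a claimed climate rate):

```python
import random

def count_records(y):
    """Number of new record highs in the sequence."""
    best, records = float("-inf"), 0
    for v in y:
        if v > best:
            best, records = v, records + 1
    return records

random.seed(0)
n, trials = 100, 2000

# Trendless case: independent standard-normal "years".
no_trend = sum(count_records([random.gauss(0, 1) for _ in range(n)])
               for _ in range(trials)) / trials

# Same noise plus a steady 0.02-per-year drift.
with_trend = sum(count_records([0.02 * t + random.gauss(0, 1) for t in range(n)])
                 for _ in range(trials)) / trials

harmonic = sum(1 / k for k in range(1, n + 1))  # theoretical trendless mean
print(round(harmonic, 2), round(no_trend, 2), round(with_trend, 2))
```

The simulated trendless mean lands near the harmonic number, while the drifting series produces noticeably more records: some records are expected even without any trend, and a trend makes them more frequent.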

      • BCBill
        “warmest years ever are pretty commonplace.”
        They are a lot more commonplace recently. Here is a plot of cumulative records of GISS since 1900. Every time a record is set, the plot changes color and rises to the new level. The current situation is not commonplace.

      • That’s a nice graph, Nick, and it answers the question I have been asking, though I think the late 19th/early 20th century and 1945 to 1976 were periods of slower warming or cooling. I understand they are somewhat atypical in terms of the climb out of the LIA. It would be nice to see a little further back. As many have pointed out, the thirties were a period of warming similar to the present, though perhaps not as prolonged. Were there other periods similar to the present?

    • “I guess I’ll be told 1934”

      That’s what Hansen said. He said the 1930’s was hotter than 1998, and his chart (see Eric’s chart above) shows the 1930’s as 0.5C hotter than 1998, which means the 1930’s was hotter than 2016, too. Which means 1934 was the “Hottest Year Evah!”.

      And yes, the U.S. temperature chart represented by Eric’s chart is a good proxy for global temperatures (as good as we will ever have), imo, which is further strengthened by the Climategate dishonesty where the conspirators were concerned about the “GLOBAL” “40’s heat blip”, which they subsequently removed from the temperature records, to make it look like things are much hotter now than then. The Big Lie.

      The principal actors in the CAGW false narrative said it was hotter in the 1930’s “GLOBALLY” than it is now. They then went about changing the temperature record to make it conform to the CAGW theory. There is no changing this fact. Alarmists can’t dismiss the 1940’s heat blip as being restricted to the USA.

      • “That’s what Hansen said.”
        Yup. I predicted it.
        “which they subsequently removed from the temperature records, to make it look like things are much hotter now than then. The Big Lie.
        The principal actors in the CAGW false narrative said it was hotter in the 1930’s “GLOBALLY” than it is now. “

        Absolute nonsense. Below is a GISS plot (history page) showing versions since 1987. 1987 was met stations only, the rest land/ocean. You can see that globally, the 1930’s have never been rated close to present values; in fact, the highest value 1944 is rated higher now than in any previous version. There is no sign that temps were bumped down by 0.15°C.

  22. A couple of articles ago the graphs were from the Met Office, using the baseline 1961-1990. The BEST graphs use a baseline of 1951-1980. The NOAA graphs don’t show the baseline period at all. Why can’t y’all just use the same freakin’ baselines!

  23. From 600 Million Year Geologic Record I note 2 things
    1. Even when CO2 levels reached 7000 ppm, this planet did not experience runaway global warming. In fact, at no time did such an event ever happen. Even with CO2 levels at 2 to nearly 10 times the current level, ice-age glaciation was not prevented, showing how minor CO2 is in climate terms.

    2. Historically, this planet is running very low on atmospheric CO2.
    If it drops just a little, to 200 ppm, plants will struggle to survive, endangering all animals. Worse, if it falls below 180 ppm, plant life and all animal life, including humans, ends.

    So it may or may not be the ‘hottest year ever’ recently but historically it is nonsense.
    If only the 1930s ‘Grapes of Wrath’ years were not excised from the record, the climate choir would have to sing a different song.
    It may or may not be the ‘hottest year ever’ but CO2 levels are obviously not the climate driver.
    It may or may not be the ‘hottest year ever’ but that relies on your personal beliefs in the probity or otherwise of those making the claim.

    • “If only the 1930s ‘Grapes of Wrath’ years were not excised from the record, the climate choir would have to sing a different song.”

      That’s why it was excised. They didn’t want to sing that different song.

  24. Interpolation across a pole… has to be on a Top Ten List of “Ways To Flunk a Numerical Methods Senior Project”. Sigh.

  25. Both GHCN version 4 and Berkeley Earth have pretty decent Arctic coverage, especially in recent years. That said, there are still no stations directly in the Arctic Ocean, so some interpolation is needed. But we know from remote sensing products (AVHRR and MSU) that the Arctic has been freakishly warm during the last three months, so it’s a pretty safe bet that interpolated products are more accurate than leaving that area out (which implicitly assigns it the global mean temperature in the resulting global temperature estimate).

    Via Nick Stokes GHCNv4 station location plotter: https://s28.postimg.org/s8x7ppoel/Screen_Shot_2017_01_19_at_2_17_32_PM.png

    Via reanalysis (using satellite data): https://pbs.twimg.com/media/C2YmxhOUoAEMfwH.jpg
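The parenthetical claim above (that leaving empty cells out of a global average is mathematically the same as filling them with the covered-cell mean) is easy to verify; the cell values below are made up for illustration:

```python
# Three hypothetical grid-cell anomalies with data; two cells have none.
covered = [0.25, 0.5, 0.75]
missing_n = 2

mean_covered = sum(covered) / len(covered)

# Fill the empty cells with the covered-cell mean, then re-average everything:
filled = covered + [mean_covered] * missing_n
mean_filled = sum(filled) / len(filled)

# The two averages are identical, so omitting cells implicitly assigns
# them the mean of the cells that do have data.
print(mean_covered, mean_filled)  # 0.5 0.5
```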

    • Zeke,
      Can you please quantify “freakishly warm” for us? And then please explain to us why three months of this “warmth” means anything in terms of climate?

    • It’s one thing to interpolate between two (or more) surrounding areas that are hotter and colder to estimate a local temperature. It’s another to extrapolate data and guess at the temperatures in the warmest areas.

    • The two most northerly stations are Eureka Canada (84N) and Svalbard (78N).

      They both had very warm years, about 6C above normal, in 2016 (yes, I checked). Probably a fluke more than anything else, but they also have very variable year-by-year records, just like every station. +/- 6.0C is not that unusual for these two stations.

      BUT, this does not mean the entire Arctic Ocean was 6C above normal in 2016. If that was the case, ALL of the sea ice would have melted out this summer. At best, the Arctic Ocean was 1.0C above normal, probably just 0.5C.

      This extrapolation technique across the polar oceans is completely BS. That means GISS and Cowtan and Way and Zeke as well.

      There are physical signs that have to be evident to show any ocean area being so far above normal.

      THEREFORE, because what I just wrote is actually factually and physically true, we should throw out ALL of these extrapolations across the Arctic Ocean and force people like Zeke above to be honest.

  26. This article reminds me of a Monty Python scene: the crowd yells out “we are all individuals”, and one person yells “I’m not”. Why do we have an article that makes a point of saying the USA temperatures are different from world temperatures? That could be done in any country; it does not mean anything when talking about average world temperatures.

    • Nick Stokes January 19, 2017 at 10:44 pm
      Thank you. There are too many responses I could make, so I’ll try just one. If you include ‘extrapolation’ to mean projecting and comparing sea temperature and air temperature, then the sea profile is central to the argument. Ask yourself, ‘What is the proper part of the sea profile to sample for T to compare with the air?’. The surface microlayer is in contact. Should it be chosen? The top 500m can mix and contact, can it be the one? Can we simply use whatever slice the Argo float happened to be at? Not on your Nellie, because the within-sample profile variation can be large compared to the effect being sought. Papers that choose among marine data sets to adjust for T bias and be pausebusters are clearly wrong because of this lack of being able to define and measure which part of the natural sea T profile is to be used to compare with air T.
      And both sea and air are in dynamic T states at any point on various time scales from minutes to days or more. My reading is incomplete, but my gut feel was that it is breaking new ground to try to use geostatistics or generally interpolation/extrapolation like this on dynamic sample data. It might be possible if we have detailed knowledge of the time dependency of the dynamics, but here at sea we clearly do not.
      Geoff.

  27. “What a difference that interpolation makes.
    So you can see that much of the claims of “global record heat” hinge on interpolating the Arctic temperature data where there is none.”

    Data is always being interpolated where there is none. It goes with any kind of continuum science. You can’t measure everywhere; you can only sample. Most people don’t have an AWS (automatic weather station) on the premises. But they still find weather reports useful. They interpolate from the Met network.

    So what always counts is how far you can interpolate reliably. That is a quantitative matter, and scientists study it. Hansen many years ago established that 1200 km was reasonable. It’s no use just saying, look, there are grey spots on the map. If you don’t think interpolation is reasonable, you need to deal with his argument.
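For readers wondering what a “1200 km” interpolation radius means mechanically, here is a minimal sketch of one simple scheme, inverse-distance weighting with a distance cutoff. The station positions and anomaly values are invented for illustration, and the real products (GISS, BEST) use more sophisticated weighting than this:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def cell_anomaly(cell, stations, radius_km=1200.0):
    """Inverse-distance-weighted anomaly for a grid cell, using only stations
    within radius_km. Returns None (a 'grey' cell) if no station is in range."""
    num = den = 0.0
    for lat, lon, anom in stations:
        d = haversine_km(cell[0], cell[1], lat, lon)
        if d <= radius_km:
            w = 1.0 / max(d, 1.0)  # guard against a station exactly at the cell
            num += w * anom
            den += w
    return num / den if den else None

# Invented stations: (lat, lon, anomaly in deg C), loosely placed near
# Svalbard and Ellesmere Island.
stations = [(78.2, 15.6, 1.5), (82.5, -62.3, 2.0)]

print(cell_anomaly((85.0, 0.0), stations))   # both stations in range: a weighted blend
print(cell_anomaly((90.0, 0.0), stations))   # only the second station reaches the pole
print(cell_anomaly((70.0, -150.0), stations, radius_km=250.0))  # None: a grey cell
```

A cell with no station inside the radius stays grey; shrinking the radius drops more cells out, which is the difference between the 1200 km and 250 km maps in the post.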

    And there are checks. The Arctic has a network of drifting buoys, so it isn’t so unknown. Here is a map from a couple of years ago:

      • If I recall correctly, Cowtan and Way used the drifting buoy data as an out-of-sample evaluation of their interpolation, and found that it matched up pretty well.

        And how do they turn completely different types of data, one with only a general, vague location, into data you can compare to a fraction of a degree?
        I’m not sure what bothers me more: warmists thinking I should believe this, or me wondering if they really believe it.

      • “And how do they turn completely different types of data, one with only a general vague location”
        Why do you say that? As the map indicates, they know where the buoys are, I would expect to the nearest few meters at any time. And they will be taking air temperature, probably 1.5 m above surface.

      • Why do you say that?

        I was thinking they blended surface data with satellite data, and that they infilled the Arctic with satellite data, but as I started to type I realized that wasn’t correct; it is likely the other way around.
        Then the only other concern is how the in-band data was processed into the averaged mean field. But simply taking the average of the mean buoy air temperature is likely not comparing like to like, with all the homogenizing and infilling.

      • Nick,
        Please do not use bad science to impugn geostatistics.
        Yes, extrapolation from one point to another and interpolation between points are common methods in geostatistics and elsewhere.
        However, those sample points have conditions precedent before they can be used properly.
        In work familiar to me, and now talking only geostatistics, one does not interpolate between different media, as from sea to air. Or ice to adjacent water. Boundaries matter.
        Further, there has to be some knowledge of the properties assumed for or known about the points in a given medium. In rock work it is common to process different major rock types separately because they can have different fabrics with different alignments, leading to different ‘solids of search’ for later weighting and other complications.
        Now taking a vertical sea profile containing a buoy, do we have the equivalent of different rock types through the profile? Yes we do, especially in fine detail. The very surface of still water has sub-mm layers impacted by long-wave IR, different to lower down, and being evaporated, with special effects on T. The top 500 mm or so of sunlit water is often at higher T, by a degree C or more, than lower down. Proceeding down, you can meet thermoclines and haloclines before 100 m down, the depth used by some to express overall surface sea temperatures. Therefore, such SSTs are by definition an average of some sort of T whose variation in that profile is large compared with the effect often sought, namely the T difference between one profile site and another. Even day/night sea cases are different.
        It is mathematically wrong to use geostatistics when the within-sample static variation is much larger than the between-sample variation, let alone including dynamics on the time scales of making a measurement. Yet that is being done. Mixing by Nature can make results seem better, but they are not actually better unless the pre-mixing T distribution is known in detail, so that the appropriate sub-sample can be compared site to site, apples to apples, later in the process. What part of a variable sea T profile should be compared to the air T above? How do you know if you have captured it? Given the size of T variation down a profile, this is a fundamental impediment. Sure, you can grope around and get some general figures, but these will not usually be good enough even for government work. It is stuff to kid yourself with.
        A further problem happens when air T is compared with sea T. Their thermal inertias differ. Some heating or cooling effects work to different time patterns. You cannot interpolate between air and sea because of this, except with huge assumption errors.
        For reasons like these, the Karl pause-buster paper is invalid. The Cowtan & Way fiddles in the Arctic are wrong at kindergarten level and should be retracted before doing more harm. Rohde from BEST might like to address some of these points to justify his recent revisionist work about the hottest evah. I had hoped he would have done better. Others, like the satellite T people, should refrain from overextension of ideas linking air T to sea T. Again, it is invalid unless given a huge and correct error bar from non-physical assumptions.
        Geoff

      • Geoff,
        “Now taking a vertical sea profile containing a buoy, do we have the equivalent of different rock types through the profile?”
        I’m not impugning geostats; it’s a fine subject. My late colleague Geoff Laslett also spent time at Fontainebleau and Grenoble. We worked together at Geomechanics. But your argument here is way off beam. Interpolation is not used here to look at microlayers in the sea. It is used in surface averaging. And there is not a lot of inhomogeneity across the surface. There is the land/sea interface, for which a land mask is usually used. And ice is a nuisance.

        You have more complications in rock (not all of which you know in advance). But in the end it’s the same deal. You infer the properties of a continuum from samples, using geometry.

        Karl’s paper has nothing to do with details of interpolation; it is just calibrating the instruments. Cowtan and Way is fine; it basically shows that rational interpolation is far better than just “leaving empty cells out”, which assigns the hemisphere average to them.

    • Hansen is a criminal and a fraud and the fact you even reference him as scientific proves the kind of degenerate you are.

      • “Hansen is a criminal and a fraud and the fact you even reference him as scientific proves the kind of degenerate you are.”

        That’s a bit strong, isn’t it D. Turner? Criminal, degenerate, fraud?

        Geez, get a hold on yourself.

    • It doesn’t look like interpolation on the color charts. If it is, it certainly isn’t linear. It certainly needs to be justified.

      There’s nothing wrong with making an estimate where there are no measurements. But a record should be made of measurements and not estimates.

      • Estimates, interpolations, and imputations should not be included as “data” in a data set at all. If someone wants to use their own “estimates” to prepare a study, then they should include the methods used to obtain them, and the study should state directly that it uses the author’s estimates, not measurements.

        If someone wants to use a “government” data set that has adjusted data, they should have to include a disclaimer that some of the data is estimated and how.

        Every time I see this I think of the old adage, “I’m here from the government and I’m here to help!”
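The disclaimer idea above can be made concrete at the data-structure level: carry a per-record source flag instead of silently mixing measurements and estimates. A toy sketch with invented records:

```python
# Invented station records; each carries a flag saying where the value came from.
readings = [
    {"station": "A", "anom": 0.4, "source": "measured"},
    {"station": "B", "anom": 0.7, "source": "interpolated"},
    {"station": "C", "anom": 0.5, "source": "measured"},
]

# Downstream users can then choose: average only the measurements...
measured = [r["anom"] for r in readings if r["source"] == "measured"]
avg_measured = sum(measured) / len(measured)

# ...or everything, with the estimated share reported alongside the number.
avg_all = sum(r["anom"] for r in readings) / len(readings)
share_estimated = sum(r["source"] != "measured" for r in readings) / len(readings)

print(avg_measured, avg_all, round(share_estimated, 2))
```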

    • “There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.”

      ― Mark Twain, Life on the Mississippi

    • Nick said, “Hansen many years ago established that 1200 km was reasonable.”

      That seems like an awfully large distance. I know anecdotes are not scientific evidence, but I live in what’s commonly referred to as the “Inland Valley” in southern California. My nearest beach (Newport/Costa Mesa) is about 60 km away. There are times when it is 90 F here and 75 F there (wait, I’m not done), and there are times when it is 90 F here and 88 F there; rarely is it true that Costa Mesa would be warmer than Pomona in the summer (though the reverse is mostly true in the winter, because Costa Mesa’s proximity to the ocean moderates the temperature there), but it seems highly unlikely to me that anomalies in Pomona would be representative of anomalies in Costa Mesa. Costa Mesa’s temperatures see a much lower variance than Pomona’s, and the direction of change may be correlated or anti-correlated depending on, e.g., wind patterns or cloud cover.

      I guess, now that I’m thinking about it, the 1200 km margin for interpolation is highly problematic if one is trying to interpolate inland from a coast (because of the moderating effect of large bodies of water on temperature changes, which ultimately also impacts the magnitude of the variance in temperature day-to-night, day-to-day, and even year-to-year), and that would be worse yet if one is interpolating inland from two coasts (or a surrounding coastal region into the interior of a large island, or polar region). I guess what I’m thinking is, looking at the coverage in Greenland (for example), it is not at all unreasonable that the coastal regions might slightly warm year-to-year for many years while the interior was doing something entirely different (in any given year) because of the much higher variability and higher magnitude of response to changes in other relevant variables.

      So, all that said, I don’t see how periphery anomalies can be used to interpolate inward (and, into higher latitudes) to the North Pole in a reliable manner.

      • Barbara,
        “I guess, now that I’m thinking about it, the 1200 km margin for interpolation is highly problematic”
        A lot of people think that. But you really need to quantify it. Hansen did that in his early days, and it has survived pretty well. You need to quantify just how much spatial correlation there is, and then the cost of whatever shortfall in terms of the uncertainty of what you are calculating (eg global average).

        I have a page here where you can see visualised anomalies for each month of land with GHCN V3 and ocean with ERSST4. This is the basic combination that GISS uses. The style of plot is that the color is exact for each measuring station, and linearly shaded within triangles connecting them. You can click to show the stations and mesh. I’ll show below a snapshot of Eurasia in Dec 2016. You can get the color scale from the page – it doesn’t matter here. left and right are basically the same scene, with right showing the mesh. The important thing is that the anomalies on the left are fairly smooth over long distances. Not perfectly, and the errors will contribute noise. But if you think of taking any one node value out and replacing it with an average of neighbors, the result wouldn’t be bad. Much better than replacing with global average.

        You asked about the Arctic. I wouldn’t recommend my gadget there; its treatment of the sea/ice boundary is primitive. But interpolation is in principle no different, and they do have buoys to check against.

  28. It’s rather interesting that after the “hottest year ever” the current snow cover extent in the NH is running above average.

  29. I just lurve the way they use orange – red – dried-blood red in the ‘temperature’ colour scheme when they are talking about an ‘anomaly’ or a trend/decade value. This is Marketing 101 psychodramatisation of information. It’s the visualisation of the word ‘DEADLY’, intended to invoke fear and loathing. It’s how shamans have worked over the human mind for millennia.

    This corrupt, melodramatic propaganda schlock belongs down the toilet of marketing history with cigarette advertising and the like.

    The Bureau of Meteorology and the TV networks use the same bullshit device on weather maps in Oz. 30˚C is a pretty warm summer’s day down under, but 40˚C is genuinely hot. Red and dried-blood red kick in from 25 or 30˚C, as if thousands will collapse and die doing their Saturday shopping or while sitting in a cafe.

    • Since they are so fond of averaging all of Earth’s temperatures into a single number to scare us, I think they should average all of those colors across the globe into a single “average” color and paint the whole map with it and then tell us why that’s so bad.

  30. Nick, I have no problem with Arctic warming. It has probably done that at every interstadial peak, and CO2 has peaked along with it. The current pattern has been repeated several times over the past 800,000 years.

    • PG, essay Northwest Passage argues it does that with a sine wave of 65 years or so. Qualitative, but backed up by Akasofu and extensive Russian records, some now translated into English.

  31. Mosh was arguing not too long ago that increasing CO2 would continue to cool Antarctica for decades…but BEST says it’s been flat or warming since 1970. Funny.

    • BEST Antarctica merely illustrates how messed up their methodology potentially is. Their regional-expectations QC model excluded 26? months of record cold at Amundsen-Scott, the South Pole, and arguably the best-tended station on Earth. Certainly the most expensive. They did that based on their own constructed regional expectations. The nearest comparable continuous station is McMurdo, several thousand meters lower and about 1200 km away on the coast. See fn 26 to essay When Data Isn’t for details.

      • BEST is another group grope by the usual suspects: self-appointed climate ‘experts’ who haven’t ever worked with gases and vapors, much less actual atmospheres, in their lives.

        Every one of these so-called ‘climate’ fakes is as transparent as asking them to name the law of thermodynamics that governs the atmosphere.

  32. It isn’t necessary to deny and ridicule everything in order to be a skeptic. Until someone can, in a professional and scientific way, falsify all of the data which shows warming, we’ve got some warming. I don’t believe the models, and I think there is likely some confirmation bias in data collection and analysis, but where is the data to the contrary, except that posted by cranks? I’m not sure why Nick and Zeke give you all as much time as they do.

    • Thomas Graney, show me where the crankiness is in this paper (1 MB pdf), or this one.

      Or, for that matter, this one.

      They all show that neglected systematic measurement error makes the surface air temperature record unreliable. And the systematic error analysis is based on published sensor calibration studies, such as:

      K.G. Hubbard and X. Lin, (2002) Realtime data filtering models for air temperature measurements Geophys. Res. Lett. 29(10), 1425; and,

      X. Lin, K.G. Hubbard and C.B. Baker (2005) Surface Air Temperature Records Biased by Snow-Covered Surface Int. J. Climatol. 25 1223-1236.

      Papers like these, involving thousands of temperature calibration measurements, are the direct foundations for the estimates of air temperature error that bring forth the dismissive sneers from Steven Mosher, Nick Stokes and, apparently, you.

      • What this means is that if you use a broken ruler to measure the growth of a tree,

        Suppose, though, that you can measure the day-to-day change to the resolution of the smallest gradation on that broken piece, and you are really only interested in how much it changes. Does the accurate height really matter a lot then?

      • That would be solved by a 30-year running average of the annual average of day-to-day change, since over a full year the temperature change should average to 0.0 if there were no annual change. And while a single event doesn’t affect a 30-year average, a repeating pattern will, if it changes.
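        The running-average idea above can be sketched directly; here is a minimal example (the series length and the single anomalous value are made up for illustration):

```python
def running_mean(values, window=30):
    """Trailing running mean over `window` entries (e.g. 30 annual averages);
    returns one value per position once the window is full."""
    out = []
    for i in range(window - 1, len(values)):
        out.append(sum(values[i - window + 1:i + 1]) / window)
    return out

# A flat series with one unusual year: each 30-entry window that
# includes it moves by only 3.0 / 30 = 0.1, illustrating how little
# a single event shifts a 30-year average.
series = [0.0] * 40
series[35] = 3.0
smoothed = running_mean(series, window=30)
```

A repeating pattern, by contrast, would sit inside every window and shift the whole smoothed curve, which is the commenter’s point.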

      • The uncertainty in an anomaly is larger than that of an individual measurement.

        This is because the uncertainty in the difference between a measurement and a mean is u_a = sqrt[(e)^2 + (u_m)^2], where u_a is the uncertainty in the anomaly, “e” is the systematic error in the given measurement and “u_m” is the uncertainty in the mean.

        The uncertainty in the mean, u_m, is sqrt{[sum over N of (systematic error)^2]/(N-1)}, where “N” is the number of values entering the mean.

        u_a is always larger than e.
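        The two formulas above can be written out as a short sketch; the systematic-error values here are hypothetical, chosen only to show that u_a always exceeds e:

```python
import math

def mean_uncertainty(errors):
    """Uncertainty in the mean: u_m = sqrt(sum(e_i^2) / (N - 1)),
    from the N systematic errors entering the mean, as stated above."""
    n = len(errors)
    return math.sqrt(sum(e * e for e in errors) / (n - 1))

def anomaly_uncertainty(e, u_m):
    """Uncertainty in an anomaly (measurement minus mean):
    u_a = sqrt(e^2 + u_m^2), which is always larger than |e|."""
    return math.sqrt(e * e + u_m * u_m)

# Hypothetical systematic errors, in degrees C
errors = [0.3, 0.25, 0.4, 0.35, 0.3]
u_m = mean_uncertainty(errors)
u_a = anomaly_uncertainty(0.3, u_m)
assert u_a > 0.3  # the anomaly is more uncertain than the raw reading
```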

      • This is because the uncertainty in the difference between a measurement and a mean

        While that is what some do, I compared two measurements for the anomaly, and since they are correlated I can divide the one error term in half.
        I’m interested in how each station’s temperature evolves over time; I have no interest at this point in comparing to some made-up global average with a large uncertainty range.
        And I found an equation for uncertainty that looks a lot like this one (I have to check), but if it is, it’s already being calculated, and the values are all 10^-5 to 10^-6, a couple of orders of magnitude smaller than my calculations. Very uneventful. I have been looking for someone who can make sure I’m doing it right, so I’ve been saving your posts.

      • The lumpy bits that get thrown away

        Each slope is the average of a large number of stations (I think this is either US-only or global), and there is a nice slope between peaks that can be used; it comes from a known amount of solar input that varies right along with temperature.

      • micro6500, get yourself a copy of Bevington and Robinson “Data Reduction and Error Analysis for the Physical Sciences.” If you google the title you may find a free download site.

        That book will tell you what you need to know. Unfortunately, it doesn’t say much about systematic error. Few error discussions do, and most of those treat it as a constant offset with a normal distribution.

        When systematic error is due to uncontrolled environmental variables, it’s not constant and cannot be treated as normally distributed. The only way to detect it is to do calibration experiments under the same measurement conditions. Data contaminated with systematic error can look and behave just like good data.

        The only way to deal with it, if it cannot be eliminated from the system, is to report the data with an uncertainty qualifier. All the land surface temperature measurements, except for those measured using the aspirated sensors in the Climate Reference Network, are surely contaminated with considerable systematic error; all of which is ignored by the workers in the field.

      • The only way to detect it is to do calibration experiments under the same measurement conditions.

        Assuming that’s true, we don’t have it. There are logs about station moves and upkeep at some stations, according to Steve, but that isn’t calibrating stations.
        What I tried to do is exploit the data I had, and not just repeat the same process the others have used; I think we’ve seen that if you do the same basic things, you’ll get the same basic results.
        My philosophy is that what I do removes some of the possible types of error, and I believe it gives me better uncertainty numbers, while failing on the same errors that no one fixes.
        I’ll look for that book.

      • Rob Bradley, systematic error in the air temperature measurements is not my assumption at all. It has been demonstrated in published calibration experiments.

        For example: K.G. Hubbard and X. Lin (2002) Realtime data filtering models for air temperature measurements, Geophys. Res. Lett. 29(10), 1425; and X. Lin, K.G. Hubbard and C.B. Baker (2005) Surface Air Temperature Records Biased by Snow-Covered Surface, Int. J. Climatol. 25, 1223-1236.

        Those do not exhaust the published surface station sensor calibrations. They all show non-normal systematic temperature measurement error.

        SST calibrations are more sparse, but those that exist also show systematic errors. For example, J.F.T. Saur (1963) A Study of the Quality of Sea Water Temperatures Reported in Logs of Ships’ Weather Observations J. Appl. Meteorol. 2(3), 417-425.

        The errors are present, they are large, they do not average away, and they make the historical surface air temperature record useless to establish the trend or rate of temperature increase since 1900.

      • The errors are present, they are large, they do not average away, and they make the historical surface air temperature record useless to establish the trend or rate of temperature increase since 1900.

        I don’t agree. They might not be suitable as they are used, but there is useful information to be gleaned from the records.
        The problem is that the only thing you’ve gotten is a sketchy anomaly based on a lot of stations that don’t exist. If you’re at all interested, follow my name, and on the oldest page there, at the top, is a link to sourceforge.net; all of the area reports and code are there. The charts are just a fraction of what’s available. I can build far more reports that need examining than I can examine.

      • micro6500, “I don’t agree.”

        I cited some published calibration experiments in the reply to Rob Bradley. You can ignore them. You can pass them off. They won’t disappear.

        Neglect error, play a pretence. That’s the law in science.

        They might not be suitable as they are used. But there is useful information to be gleaned from the records.

        Only if you’re interested in temperature changes greater than ±1 degree C. And that’s being generous.

      • Richard Baguley, that 1963 study you disdain was the most extensive investigation, ever, of the accuracy of SST measurements from engine intakes. Does data become invalid because it was measured years ago? Is that how your science works? Do you disdain all air temperatures measured before 1963, too?

        Sauer’s study reveals the error in temperatures obtained from ship engine intake thermometers, that make up the bulk of SST measurements between about 1930 and 1980. The error seriously impacts the reliability of the surface temperature record since 1900, which is what interests us here.

        As to Argo errors, see, for example, R. E. Hadfield, et al., (2007) On the accuracy of North Atlantic temperature and heat storage fields from Argo JGR 112, C01009. They deployed a CTD to provide the temperature reference standard.

        From the abstract, “A hydrographic section across 36 degrees N is used to assess uncertainty in Argo-based estimates of the temperature field. The root-mean-square (RMS) difference in the Argo-based temperature field relative to the section measurements is about ±0.6 C. The RMS difference is smaller, less than ±0.4 C, in the eastern basin and larger, up to ±2.0 C, toward the western boundary.”

    • Thomas,

      Even accepting there is as much warming as claimed by those intolerant of any skepticism, I do not find any evidence that it is human-caused. CO2 is rising and global average temperature appears to be rising, but correlation is not evidence of causation.

      The proper scientific course is for those that hypothesize we are experiencing runaway or dangerous warming (due to increased concentrations of CO2 that result in an amplification of warming by inadequately known and potentially completely unknown feedback mechanisms) to show that the warming to date is not consistent with natural causes. I.e., those proposing the hypothesis of runaway warming are actually the ones that have an obligation to demonstrate that any observed warming is not consistent with natural causes. I have seen no evidence they have ever made an attempt to do that.

      • “if you use a broken ruler to measure the growth of a tree, your measurement of the height of that tree might be wrong, but clearly you’ll know that the tree is growing.”

        What if you measure less than half of the tree with your broken ruler, and then guess (excuse me – extrapolate) the rest? Do you clearly know that the tree is growing then?

      • FWIW, I used extrapolate instead of interpolate because I was thinking of the Arctic (since that is where the bulk of the warming shows up). I think interpolate is the correct word for what happens in the Antarctic because there’s a station at the South Pole, so they are actually infilling between two knowns; but in the Arctic they are guessing what lies beyond the northernmost stations, which would be extrapolating. Probably doesn’t mean much, but to me interpolate sounds more accurate because your error is somewhat bounded by the knowns on each side, while the errors are almost unlimited when extrapolating.

        Purely hypothetical example of what can happen when extrapolating: suppose someone were to take the temperatures during a twenty-year recovery from an extreme cooling period (imagine fears of a looming ice age), and then extrapolate that recovery-period trend indefinitely into the future (I know no one would actually do that – I said it was hypothetical). Why, the projections would be ridiculous, and would serve as a warning to would-be extrapolators for decades.

  33. The point of the graphs is that an El Niño episode can definitely increase temperatures around the globe, i.e. the oceans are ejecting heat into the atmosphere.

  34. A bit of an exaggeration using this projection, which badly distorts distances at the poles. Better to use an equal-area projection:

  35. UAH is the data that contradicts the surface temperature data sets that Nick and Zeke favor. Two thirds of the surface of the Earth is ocean. At best, 25% of the globe has surface temperature data, and only for a short period of time. The rest is manufactured by people like Zeke and Nick. Imagine if a pharmaceutical company invented 75% of its clinical test results. How would you feel about that?

    The Climategate emails clearly illustrated the corruption of this field of science. It was all there, in their own words. If you choose to ignore it, you are either naive or similarly politically corrupt.

    No one disputes that the Earth has warmed since the LIA, but show me one model that accurately predicted (unadjusted) global temperature over the past thirty years — you can’t, I’ve tried.

    Nick and Zeke post here because this is, by a large margin, the most widely viewed climate science website. And unlike Gavin’s site and the SKS site, which heavily censor comments and opposing viewpoints, Anthony encourages an open and honest debate–that’s a quick litmus test to determine who seeks the truth and who fears it.

    • “And unlike Gavin’s site and the SKS site, which heavily censor comments and opposing viewpoints, Anthony encourages an open and honest debate–that’s a quick litmus test to determine who seeks the truth and who fears it.”

      That’s right.

  36. It seems that determining temperatures at the Arctic from measurements at lower latitudes is “extrapolation”, not “interpolation”. Extrapolating outside of your measurement range is much less reliable than interpolating to a point between measurements.

    • 1200 km is the distance over which air temperature is correlated to R>0.50. Jim Hansen published on that 30 years ago, J. Hansen and S. Lebedeff (1987) Global Trends of Measured Surface Air Temperature JGR 92(D11), 13,345-13,372.

      The scatter width at correlation R= 0.5 was pretty large, though, putting quite an uncertainty into any extrapolation that far out.

      That uncertainty is not propagated into the interpolated temperatures. Yet one more analytical failure of the air temperature group.

  37. According to my calcs the Arctic Ocean covers 2.8% of the total global surface area. The area north of the Arctic Circle covers 4%. What area are they talking about in regard to it having escalated global temperature? Either way it is going to have to be a very large positive anomaly to influence the global average by much. I will leave the calcs to others better endowed.

    • Michael Carter January 19, 2017 at 8:47 pm
      According to my calcs the Arctic Ocean covers 2.8% of the total global surface area. The area north of the Arctic Circle covers 4%. What area are they talking about in regard to it having escalated global temperature?

      And the US is 1.9%, so the Arctic is more significant than the US, which was also featured in the head post.
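      Both fractions are easy to sanity-check with the spherical-cap formula; a quick sketch (the US land-area figure is my own assumed value, ~9.83 million km²):

```python
import math

def sphere_fraction_north_of(lat_deg):
    """Fraction of a sphere's surface north of a given latitude:
    cap area 2*pi*R^2*(1 - sin(lat)) divided by total area 4*pi*R^2."""
    return (1.0 - math.sin(math.radians(lat_deg))) / 2.0

arctic = sphere_fraction_north_of(66.56)   # Arctic Circle
print(round(100 * arctic, 1))              # ~4.1% of the globe

# Assumed areas in km^2: US ~9.83e6, Earth's surface ~510.1e6
us_fraction = 9.83e6 / 510.1e6             # ~1.9%
```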

  38. J Mac,
    Long ago I proposed that this correlation graph deserved advanced inspection for effects bearing on the correlation values that might not be climate related.
    https://webcloud37.au.syrahost.com:2083/cpsess3526699251/frontend/Dreamscape/filemanager/showfile.html?file=BEST_correlation.jpg&fileop=&dir=%2Fhome5%2Fgeoffstu%2Fpublic_html&dirop=&charset=&file_charset=&baseurl=&basedir=
    Rule of thumb only, in harder earth science work, correlation coefficients above 0.75 or so are preferred. This would reduce the range to 600 km, the area to a quarter, but still rings suspicious for the use to which it is put.
    Anyone know of a critical examination of the graph apart from the reasonable words from its BEST authors?

  39. 1. You make the mistake of focusing on CONUS. If you include the entire US, you have a record

    For reference here is every country

    2. You make the mistake of using GHCN version 2!!!! That data is deprecated. There are plenty of stations
    near the Arctic IF you:
    A) Use the latest NOAA data
    B) Use ALL the stations
    C) Use NON-NOAA stations. Early on we discovered that there are many stations that NOAA doesn’t
    have in its GHCN daily datasets. In fact we’ve done NOAA versus NON-NOAA studies
    ( Guess what)

    3. The Arctic was particularly warm, and that shows the flaws in interpolation methods that are
    area based. CRU, for example, interpolates only within cell boundaries. A simple example
    will illustrate the problem.

    Suppose you have a station at 84N, 152.5W. In the CRU approach that will be interpolated
    1 degree to the north (about 110 km) and 4 degrees to the south, and it will be interpolated
    2.5 degrees to the east and west. But at that latitude 2.5 degrees east and west is a very short
    distance; at the equator it would be interpolated ~260 km east and west. So the amount
    of interpolation they do is a function of latitude and not, strictly speaking, of distance. Another way
    to look at it is this: if they centered their grid differently, that point at 84N would be interpolated
    2.5 degrees north and 2.5 degrees south. In other words, the weight of their stations changes with
    latitude and the gridding method. In a spatial-statistics approach you don’t have this problem, as
    the temperature is a function of latitude and you grid afterwards.

    Normally the CRU approach is OK; the only issue happens when the areas they don’t cover
    are warming faster than the rest of the planet.
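    The latitude dependence described in point 3 is easy to quantify: 2.5 degrees of longitude spans a very different ground distance at 84N than at the equator. A minimal sketch:

```python
import math

EARTH_RADIUS_KM = 6371.0

def eastwest_span_km(dlon_deg, lat_deg):
    """Ground distance covered by dlon_deg degrees of longitude at a
    given latitude: it shrinks in proportion to cos(latitude)."""
    deg_km = 2 * math.pi * EARTH_RADIUS_KM / 360.0  # ~111.2 km per degree
    return dlon_deg * deg_km * math.cos(math.radians(lat_deg))

print(round(eastwest_span_km(2.5, 0.0)))   # ~278 km at the equator
print(round(eastwest_span_km(2.5, 84.0)))  # ~29 km at 84N
```

So a fixed 2.5-degree grid cell interpolates over roughly a tenth as much east-west distance near the pole as at the equator, which is the weighting problem being described.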

    To get a sense of how warm the Arctic is, we can use a favorite chart here at WUWT

    Note this is based on the approach that Judith Curry prefers for the Arctic: physics-based
    estimation

    So yeah, it’s warmer in the Arctic. CRU misses that because in their approach they assume that
    the Arctic warms as fast as the rest of the planet.

  40. ” AndyG55 adds this analysis and graph in comments”
    And Donald Klipstein pointed out that the USCRN trend, as marked on the plot, is 4.68°C/century, a very high warming rate indeed. And about three times the satellite rates, which were also far from low. So I don’t see how that helps the argument.

    • Eh, go pull the data instead of eyeballing it. The sum of the twelve monthly 2005 anomalies is +9.34F. The average of those monthly 2005 anomalies is +0.778F.

      Look at the full dataset and the sum of the 11 years is +100F (roughly the same as 11 repeats of 2005), and the overall average anomaly is +0.701F, below the same value for 2005 alone.

      When you realize that they started the USCRN record at an anomaly of +1.75 (+0.8C), having the one year and 11 year average anomalies come in at +0.778 and +0.701 doesn’t indicate rapid warming as you suggest.

      • I didn’t eyeball the data. DK did, and figured 0.46 C/dec. I just read the annotation toward top right: 0.0468 C/yr, or 0.468 C/dec. I’m impressed by DK’s eyeball. And that is fast warming. There has been a lot of “if you take this out, then…” lately. But that is what it was. And it had a spike, not a dip, at the start.

        It didn’t support any of DK’s claims. It doesn’t match satellite at all. And it is very far from zero (satellites aren’t close either).

  41. Please note that in the USCRN data, the “true zero point” is really +0.8, since that is where they start the data set.

    So when December dips below “0 anomaly”, it is actually closer to -1.0 true anomaly from the actual starting point of the data series.

    Yet another example of the Warmists polluting the most pristine data with garbage data, by using less pristine stations to set the starting point.

  42. I still think all this BS was already in the works to give HRC a kick-start toward guilting the nation into submission to UN domination. Now it has become their last-ditch effort to retain whatever portion of the populace still are entranced believers in the fantasy of human climate culpability.

  43. Science is about observation, experimentation and replication.

    The easiest way to settle matters would be to select in each country say a dozen or 20 best sited stations (rural/free from encroachment of UHI) with no station moves and with the best record keeping standards, and then to retrofit these stations with the same LIG thermometers as were used in the late 1930s/early 1940s.

    One would then replicate by observing, for the next few years, using the same TOB as used in the 1930s/1940s and taking readings in Fahrenheit using the same LIG thermometers.

    In this manner there would be no need for any adjustments. Just use the raw data from the late 1930s/early 1940s, and compare that to the data collected between say 2017 to 2022.

    One would not make a global temperature set. Just a different set for each location, and see what has happened at each of those locations.

    Within 5 years we would have a very good insight into how much warmer temperatures have become since the highs of the late 1930s/early 1940s.

    • TOB only applies in the US and a couple other countries.

      Next, even though satellites have gone through more changes than surface observing systems, it’s funny that you don’t suggest launching new satellites with old instruments.

      We already have 10+ years of pristine stations (CRN) to compare with “bad” data.

      Guess what.. The “bad” stations are good

      [so you say – mod]

      • Wow that’s lame Steven!

        We do all realise that putting old style instrumentation in parallel with new style instrumentation as proposed would be just a tad cheaper than launching a new satellite, don’t we?

        In hindsight there really should have been periods of running the old and the new instrumentation in parallel, and when sites were moved. But who would ever have guessed that people would come along pretending to calculate the temperature of the earth to two decimal places.

  44. Reblogged this on WeatherAction News and commented:
    As Bill Illis writes in the comments:

    The two most northerly stations are Eureka Canada (84N) and Svalbard (78N).

    They both had very warm years about 6C above normal in 2016 (yes I checked). Probably a fluke more than anything else but they also have very variable year-by-year records, just like every station. +/- 6.0C is not that unusual for these two stations.

    BUT, this does not mean the entire Arctic Ocean was 6C above normal in 2016. If that was the case, ALL of the sea ice would have melted out this summer. At best, the Arctic Ocean was 1.0C above normal, probably just 0.5C.

    This extrapolation technique across the polar oceans is complete BS.

    There are physical signs that have to be evident to show any ocean area being so far above normal.

    Meanwhile I’m left wondering how much of this heat (record or not) was vented into space?

  45. Taking data from Wunderground – a selection of some of the longest running stations I can find, I get:

    2016 was…
    Brampton village: 9th out of the last 17 years
    Manchester city: 12th out of the last 16 years
    Bedford rural: 10th out of the last 12 years
    Derbyshire rural: 10th out of the last 14 years
    Suffolk rural: 6th out of the last 12 years
    Taunton town: 5th out of the last 15 years
    Lancashire coastal: 8th out of the last 15 years

    No record heat in England – apart from when a 747 warmed up the thermometer at Heathrow during the summer – and all trending downwards.
    The warmest year for most of them was 2006.

    • Now THAT’s an analysis of data. Bravo. It really shouldn’t be so difficult for an actual scientist to emerge who tests the “warmest evah” single manufactured temperature conjecture using a different methodology.

      How about it, Nick? Mosher? Zeke? You are scientists, are you not?

    • According to the widely quoted Central England Temperature (CET), 2014 was slightly warmer than 2006, with 10.93 degrees, against 10.82.

      2016 came in at 10.31, the 9th warmest out of the last 16, and was unremarkable throughout. Three months had temperatures lower than the 1981-2010 mean (March, April and November).

      The most recent 10-year mean is 10.08 degrees. The 10-year mean has been above 10 degrees since 1988-97, and never before that since the series began in 1659, and it peaked at 10.46 in 1997-2006.

      The 1930s had an average temperature of 9.62 degrees C

  46. This morning the BBC reported the temperature in Farnborough (near London, S of England) as -6 degC, while in Edinburgh in Scotland it was +6 degC, a difference of 12 degC.

    The distance between those two places is 550 km. Now explain to me the justification of interpolating data out to 1200 km.
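    For what it’s worth, the 550 km figure is about right; a quick great-circle check using approximate station coordinates (my own assumed values, not from the comment):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Assumed coordinates: Edinburgh ~55.95N, 3.19W; Farnborough ~51.29N, 0.75W
d = haversine_km(55.95, -3.19, 51.29, -0.75)
print(round(d))  # ~540 km, well inside a 1200 km interpolation radius
```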

    • Every winter where I live I see a 12 degree Fahrenheit difference (7 degrees Celsius) after driving a measly mile from the center of a tiny little 6,000-person town to just outside of town. UHI is massive in the winter in tiny little rural towns in the USA. But these all-knowing scientists just smear for 1200 km, and tell us that rural places do not receive much UHI at all.

    • Look at the difference between the UK and Spain, say Madrid, which is no more than about 1200 km away. They have very different temperatures.

      Of course, the warmists argue that the anomalies are similar over these distances; however, whilst the anomalies may well be more similar, geography and topography no doubt play an important role.

      If a station is influenced by its position relative to the oceans, or by winds/weather fronts coming mainly from one direction, it is very unlikely to have anomalies similar to stations which are not impacted by their position relative to oceans, or where winds/weather fronts usually come from a different direction.

      Infilling definitely widens the potential error margins.

      • 90+ % of the variance in monthly temperature is explained by LATITUDE and Elevation of the station.

        Check the latitude of Spain and the UK.

        Duh.

      • Steven, there are multi-degree differences over even short distances. That is why station data is fiddled with when a station moves a few hundred metres. Rutherglen for example.

        Duh yourself. Think harder!

      • Forrest Gardener on January 20, 2017 at 12:43 pm

        Could you PLEASE stop writing about your stupid Rutherglen?
        It started in 1903 and ended… 1921.
        That’s nearly a century ago.

      • richard verney on January 20, 2017 at 2:21 pm

        What are the latitudes of the stations that infill the Arctic?

        Why do you expect other people to do your job? Look at
        ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/

        The three northernmost GHCN stations are:
        22220046000 80.6200 58.0500 20.0 GMO IM.E.T.
        40371082000 82.5000 -62.3300 66.0 ALERT,N.W.T.
        43104312000 81.6000 -16.6700 34.0 NORD ADS

        Averaged trend: 0.7 °C / decade

      • Regarding Steve Mosher response:

        “90+ % of the variance in monthly temperature is explained by LATITUDE and Elevation of the station.”

        Err… in the example I gave, the two locations are at pretty much the same elevation. As regards latitude, UK geography may not be a strong point for US-based posters, but I can assure you that Scotland is further north than southern England.

        Edinburgh +6 degC Latitude 56 degN
        Farnborough -6 degC Latitude 51 degN

  47. Hotttttttessssssssst innnnnnnnnnnnnnnn onnnnnnnnne hunnnnnnnnnnnnndred twennnnnnnnty thouuuuuuuuusand yearrrrrrrrrrrrrrrs!

  48. Where could I find a table for the last 15 years showing world temperatures – also hottest and minimums please? Satellite and land-based.

  49. The latest UAH satellite anomalies for the Arctic region have extremely high temperatures.

    For the UAH satellite, 2016 was definitely the warmest annual average, and it also had the 2 warmest months in the record, January and October. October was the warmest, and they are the only two months in that data series where the anomaly exceeded 2 degrees C above average (UAH baseline).

    Satellite data have the Arctic region warming the strongest – more than twice as much as global – so it appears that the surface records are not inventing warm blobs through interpolation, but rather are corroborated by the satellite record.

Comments are closed.