@NOAA data demonstrates that 2016 was not the 'hottest year ever' in the USA

Today, there’s all sorts of caterwauling over the NYT headline by Justin Gillis that made it above the fold in all caps, no less:  FOR THIRD YEAR, THE EARTH IN 2016 HIT RECORD HEAT.

[Image: New York Times front page with the all-caps "record heat" headline, 2016]

I’m truly surprised they didn’t add an exclamation point too. (h/t to Ken Caldeira for the photo)

Much of that “record heat” is based on interpolation of data in the Arctic, such as BEST has done.

But in reality, there’s just not much data at the poles. There are no permanent thermometers at the North Pole, since the sea ice drifts, is unstable, and melts in the summer as it has for millennia; weather stations can’t be permanent in the Arctic Ocean. So the data are often interpolated from the nearest land-based thermometers.

To show this, look at how NASA GISS presents the data with and without interpolation to the North Pole:

WITH 1200 kilometer interpolation:

[Image: GISS 2016 global anomaly map, 1200 km interpolation]

WITHOUT 1200 kilometer interpolation:

[Image: GISS 2016 global anomaly map, 250 km interpolation]

Here is the polar view:

WITH 1200 kilometer interpolation:

[Image: GISS 2016 polar-view anomaly map, 1200 km interpolation]

WITHOUT 1200 kilometer interpolation:

[Image: GISS 2016 polar-view anomaly map, 250 km interpolation]

Source: https://data.giss.nasa.gov/gistemp/maps/

Grey areas in the maps indicate missing data.

What a difference that interpolation makes.
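
To make the mechanics concrete, here is a minimal sketch (in Python) of how a radius-limited interpolation of station anomalies works, in the spirit of the GISS 1200 km / 250 km options. This is illustrative only, not GISS’s actual code; the grid, the station list, and the linear distance weighting are all assumptions for the example. Grid cells with no station inside the radius stay empty, which is what the grey areas represent.

```python
# Minimal sketch (not GISS's actual code) of radius-limited interpolation:
# each grid point takes a distance-weighted mean of the station anomalies
# lying within `radius_km`; grid points with no station in range are left
# as NaN, which corresponds to the grey areas on the maps.
import numpy as np

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlon = np.radians(lon2 - lon1)
    a = (np.sin((p2 - p1) / 2) ** 2
         + np.cos(p1) * np.cos(p2) * np.sin(dlon / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def gridded_anomaly(stations, grid_lats, grid_lons, radius_km=1200.0):
    """stations: iterable of (lat, lon, anomaly) tuples.
    Returns a 2-D array of interpolated anomalies, NaN where no station
    lies within radius_km of the grid point."""
    out = np.full((len(grid_lats), len(grid_lons)), np.nan)
    for i, glat in enumerate(grid_lats):
        for j, glon in enumerate(grid_lons):
            weights, values = [], []
            for slat, slon, anom in stations:
                d = great_circle_km(glat, glon, slat, slon)
                if d < radius_km:
                    weights.append(1.0 - d / radius_km)  # weight tapers with distance
                    values.append(anom)
            if weights:
                out[i, j] = np.average(values, weights=weights)
    return out

# With radius_km=250 most Arctic Ocean cells stay NaN (grey); with 1200 they
# get filled in from the nearest coastal land stations.
```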

So you can see that many of the claims of “global record heat” hinge on interpolating Arctic temperature data where there is none. For example, look at this map of Global Historical Climatology Network (GHCN) coverage:

[Image: GHCN station coverage map, showing the paucity of stations at the poles]

As for the Continental USA, which has fantastically dense thermometer coverage as seen above, we were not even close to a record year according to NOAA’s own data. Annotations mine on the NOAA generated image:

 

[Image: NOAA CONUS average temperature, 2016 vs. 2012, annotated]

Source: https://www.ncdc.noaa.gov/cag/time-series/us/110/0/tavg/ytd/12/1996-2016?base_prd=true&firstbaseyear=1901&lastbaseyear=2000

  • NOAA National Centers for Environmental Information, Climate at a Glance: U.S. Time Series, Average Temperature, published January 2017, retrieved on January 19, 2017 from http://www.ncdc.noaa.gov/cag/

That plot was done using NOAA’s own plotter, which you can replicate using the link above. Note that 2012 was warmer than 2016, even with the big 2015–16 El Niño. That’s using all of the thermometers in the USA that NOAA manages and utilizes, both good and bad.
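
If you download the CSV behind the Climate at a Glance link above, ranking the years takes only a few lines. A rough sketch, assuming a simple year/value row layout (the exact header lines of the real export may differ, and the file name here is made up):

```python
# Hypothetical sketch: rank CONUS annual values from a CSV exported from
# NOAA's Climate at a Glance plotter. The file name and the assumption of a
# "YYYYMM,value" row layout are illustrative; the real export may differ.
import csv

def warmest_years(path, top_n=5):
    """Return the top_n (year, value) pairs, warmest first."""
    rows = []
    with open(path, newline="") as f:
        for rec in csv.reader(f):
            # keep only rows that start with a year, skipping header/notes
            if len(rec) >= 2 and rec[0][:4].isdigit():
                try:
                    rows.append((int(rec[0][:4]), float(rec[1])))
                except ValueError:
                    continue
    return sorted(rows, key=lambda r: r[1], reverse=True)[:top_n]

print(warmest_years("conus_avg_temp_1996-2016.csv"))
# On the series plotted above, 2012 should come out ahead of 2016.
```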

What happens if we select the state-of-the-art pristine U.S. Climate Reference Network data?

Same answer – 2016 was not a record warm year in the USA, 2012 was:

[Image: USCRN annual average temperature anomaly for the contiguous USA, 2004-2016]

Source: https://www.ncdc.noaa.gov/temp-and-precip/national-temperature-index/time-series?datasets%5B%5D=uscrn&parameter=anom-tavg&time_scale=p12&begyear=2004&endyear=2016&month=12

Interestingly enough, if we plot the monthly USCRN data, we see a sharp cooling in the last data point, which dips below the zero anomaly line:

[Image: USCRN monthly average temperature anomaly for the contiguous USA, 2004-2016]

Source: https://www.ncdc.noaa.gov/temp-and-precip/national-temperature-index/time-series?datasets%5B%5D=uscrn&parameter=anom-tavg&time_scale=ann&begyear=2004&endyear=2016&month=12

Cool times ahead!

Added: In the USCRN annual (January to December) graph above, note that the last three years in the USA were not record high temperature years either.

Added2: AndyG55 adds this analysis and graph in the comments

When we graph USCRN with RSS and UAH over the US, we see that USCRN responds slightly more to warming surges.

As it is, the trends for all are basically identical and basically ZERO. (The USCRN trend was exactly parallel with RSS and UAH (all zero trend) before the slight El Niño surge starting mid-2015.)

[Image: USCRN, RSS, and UAH temperature series over the USA, with linear trends]
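
For readers who want to reproduce the trend comparison, here is a minimal sketch of the least-squares trend calculation in °C per decade. The synthetic series below is only a placeholder; the actual USCRN, RSS, and UAH monthly files would need to be loaded in its place.

```python
# A minimal sketch of the trend comparison in °C per decade. The synthetic
# series below is only a placeholder; the real USCRN/RSS/UAH monthly series
# would have to be loaded instead.
import numpy as np

def trend_per_decade(monthly_anomalies_c):
    """Ordinary least-squares slope of a monthly series, in °C/decade."""
    y = np.asarray(monthly_anomalies_c, dtype=float)
    t_years = np.arange(y.size) / 12.0      # time axis in years
    slope_per_year = np.polyfit(t_years, y, 1)[0]
    return 10.0 * slope_per_year

# Example: a flat series plus noise gives a near-zero trend.
rng = np.random.default_rng(0)
flat = rng.normal(0.0, 0.5, size=12 * 12)    # 12 years of monthly noise
print(round(trend_per_decade(flat), 3))
```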

 

 

282 Comments
January 19, 2017 11:30 am

One thing that I am confused on — I thought that NOAA added approximations of the Arctic temperatures based on land station data at the Arctic Circle. My assumption comes from the Thomas Karl report which changed the data adjustments. Most people focused on the difference between ocean intakes and bucket samples, but I thought that report also added the Arctic Ocean temperature assessments based on the land station data.
Is this correct?

talltone
Reply to  lorcanbonda
January 19, 2017 2:30 pm

Could someone please explain the Eemian interglacial period, where CO2 was half today’s level and temperature was 8°C warmer than today? This lasted for 30,000 years. Makes this rubbish insignificant.

Dave
Reply to  talltone
January 20, 2017 5:08 am

8°C is an exaggeration. Parts of central and northern Europe were probably 2°C warmer, but modelling suggests it was actually cooler than present at lower latitudes (Kaspar et al. 2005, doi:10.1029/2005GL022456). And the cause? Not CO2 – the model used in that study included CO2 but assumed pre-industrial levels. But greenhouse gas levels are only one of the influences on climate: over time scales of 1000s to 10000s of years, changes in earth’s orbital parameters – the Milankovitch cycles – induce oscillations in climate on at least the same scale as the projected warming from GHGs. Do climate scientists even suggest otherwise? During the Eemian, the orbit showed greater obliquity and eccentricity, hence the difference in influence on high versus low latitudes.
But as you noted, these cycles are rather slow – that period of relatively warm climate in northern Europe probably lasted 10,000-15,000 years. The reason why the current ‘rubbish’ is far from insignificant is that we are talking about current global warming trends over land (particularly at high northern latitudes) greater than 0.2 degrees per decade. These seem like such small numbers on a time scale of a few decades, but even over a century they add up to the same sort of temperatures as seen at the peak of the Eemian. Sea level during the Eemian was 6-9 metres higher than present. Such a large change is something we might gradually adapt to if it takes a few thousand years to creep up on us, but over a few centuries? We had better *hope* the climate scientists have it wrong!

Johann Wundersamer
Reply to  talltone
January 21, 2017 6:56 pm

Dave,
Over time scales of 1000s to 10000s of years, changes in earth’s orbital parameters – the Milankovitch cycles – induce oscillations in climate on at least the same scale as the projected warming from GHGs. Do climate scientists even suggest otherwise? During the Eemian, the orbit showed greater obliquity and eccentricity, hence the difference in influence on high versus low latitudes.
But as you noted, these cycles are rather slow – that period of relatively warm climate in northern Europe probably lasted 10-15000 years.
______________________________________
At these timescales you and your descendants will be dead and gone.
______________________________________

NW sage
Reply to  lorcanbonda
January 19, 2017 5:37 pm

I’m not sure just where those arctic circle stations would be positioned. I spent one summer in the late 1950s at a small Indian village [Beaver, AK] right on the Arctic Circle (well three mi S). The village was also on the banks of the Yukon River right in the middle of Alaska. In the middle of June there was 24 hr daylight and most days were very warm – mid 70s or above. It was also very dry and dusty for lack of rain.
My point is that any temperature data from there would reflect the local topography (flat!) and weather patterns and would likely be very different from those conditions on the Arctic Ocean or on the arctic ice itself.

Reply to  NW sage
January 20, 2017 4:37 am

Here’s one of the articles in the study:
http://www.nature.com/news/climate-change-hiatus-disappears-with-new-data-1.17700

The resulting integration improves spatial coverage over many areas, including the Arctic, where temperatures have increased rapidly in recent decades (1). We applied the same methods used in our old analysis for quality control, time-dependent bias corrections, and other data processing steps (21) to the ISTI databank in order to address artificial shifts in the data caused by changes in, for example, station location, temperature instrumentation, observing practice, urbanization, and siting conditions.

The point being that I was under the impression that the Arctic was now part of the temperature data set.

January 19, 2017 11:35 am

I use FollowThatPage to track changes to http://data.giss.nasa.gov/gistemp/graphs_v3/Fig.D.txt (GISS’s U.S. surface temperature record). Just this morning I got an automatic email, alerting me to the fact that they’ve just changed it again.
http://sealevel.info/2017-01-19_GISS_Fig.D.txt_change_alert.html
From just looking at 1880-1882 vs. 2013-2015, it appears that GISS has cooled the past and warmed the present again, in the U.S. 48-State temperature record.
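
For anyone who wants to do the same kind of tracking without a third-party service, a rough sketch along these lines would work (the GISS URL is the one cited above; the saved-copy file name is just an example):

```python
# Hypothetical sketch of the change-tracking described above: fetch GISS's
# Fig.D.txt and diff it against a previously saved copy. The URL is the one
# cited in the comment; the local file name is illustrative.
import difflib
import pathlib
import urllib.request

URL = "http://data.giss.nasa.gov/gistemp/graphs_v3/Fig.D.txt"
SAVED = pathlib.Path("Fig.D.previous.txt")

new_text = urllib.request.urlopen(URL).read().decode("utf-8", "replace")
old_text = SAVED.read_text() if SAVED.exists() else ""

for line in difflib.unified_diff(old_text.splitlines(), new_text.splitlines(),
                                 "previous", "current", lineterm=""):
    print(line)

SAVED.write_text(new_text)  # keep the new copy for next time
```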

bw
Reply to  daveburton
January 19, 2017 12:36 pm

Other people are watching changes in GISS data. Every month, approximately 10 percent of the entire GISS historical station record is changed. This has been going on since at least 2008. That is, out of a random sample of 40 stations, about 3 or 4 stations will show a complete revision of historical data, with various amounts over various years. Not all data is changed, and not by large amounts. Some missing data is backfilled, and some old data are deleted. The next month will have a different 3 or 4 stations with data that are altered from the previous month. Around December of 2012 there was a much larger revision, with many stations showing many more than the usual adjustments, with some recent-year data being lost entirely. About 3 months later, some of the lost data was restored, and some other changes were reversed.
Generally, the older data is cooled by some random amount, between 0.1 and 1 degree. The remaining stations show no changes in any old data, just the added monthly update.
The GISS station monthly data were taken from the GISS page, saved in the original space separated text format.
List of stations in the sample.
Akureyri, Amundsen-Scott, Bartow, Beaver City, Byrd, Concordia, Crete, Davis, Ellsworth, Franklin, Geneva, Gothenburg, Halley, Hanford, Hilo, Honolulu, Jan Mayen, Kodiak, Kwajalein, La Serena, Lakin, Loup City, Lebanon MO, Marysville, Mina, Minden, Mukteshwar, Nantes, Nome, Norfolk Island, Nuuk, Red Cloud, St. Helena, St. Cloud, Steffenville, Talkeetna, Thiruvanantha, Truk, Vostok, Wakeeny, Yakutat, Yamba. Most station data were saved in June 2012. Only the 4 Antarctic stations are saved from 2008.

Reply to  bw
January 19, 2017 1:01 pm

Every month, approximately 10 percent of the entire GISS historical station record are changed.

I think they have a batch process that makes the adjustments but updates based on other stations; every time they add new stations, they’d need to rerun it, and it would produce slightly different calculations. I think Mosh mentioned BEST does something like this.

angech
Reply to  bw
January 19, 2017 3:42 pm

Zeke has said that all stations are constantly updated and historically readjusted for the data sets he works with, possibly on a daily basis?
Not sure how this fits in with only 10% of GISS on a monthly basis. Will get back when I have time.

Jared
Reply to  bw
January 20, 2017 6:04 am

And not a single one of them will explain the algorithm they use to adjust for UHI in tiny little towns of 6,000 people. These guys pretend a town of 6,000 is rural and doesn’t receive any noticeable UHI. But I live in a 6,000-person town surrounded by farmland, and in the middle of winter we see from 3 to 12 degrees Fahrenheit of UHI in the middle of town compared to just 1 mile away outside of town. So the temp can be 0 degrees in the middle of town and it’s -12 just outside of town. Then just three days later the UHI might only be 3 degrees different. How in the world do Mosher and the boys adjust for this correctly? If they say they adjust by nearby measurements, well aren’t those also affected by UHI? My town of 6,000 is considered rural, yet UHI is very noticeable on cold mornings.

Reply to  bw
January 23, 2017 7:02 am

BW said: “Generally, the older data is cooled by some random amount, between 0.1 to 1 degree. The remaining stations show no changes in any old data, just the added monthly update.”
So then, every month the warming trend is adjusted to show a higher trend? It’s always up? Do you have a revision history to show this… since 2008? That would be around 120 consecutive adjustments always in the same (warming) direction.

DD More
Reply to  daveburton
January 19, 2017 1:22 pm

Dave, you and AW – “we were not even close to a record year according to NOAA’s own data.” Changed and misreported. Not even close using NOAA’s own reports. Yearly update time.
(1) The Climate of 1997 – Annual Global Temperature Index: “The global average temperature of 62.45 degrees Fahrenheit for 1997” = 16.92°C.
http://www.ncdc.noaa.gov/sotc/global/1997/13
(2) http://www.ncdc.noaa.gov/sotc/global/199813
Global Analysis – Annual 1998 – does not give an “Annual Temperature”, but the 2015 report does state: the annual temperature anomalies for 1997 and 1998 were 0.51°C (0.92°F) and 0.63°C (1.13°F), respectively, above the 20th century average. So 1998 was 0.63°C – 0.51°C = 0.12°C warmer than 1997, i.e. 16.92°C + 0.12°C = 17.04°C for 1998.
(3) For 2010, the combined global land and ocean surface temperature tied with 2005 as the warmest such period on record, at 0.62°C (1.12°F) above the 20th century average of 13.9°C (57.0°F). 0.62°C + 13.9°C = 14.52°C
http://www.ncdc.noaa.gov/sotc/global/201013
(4) 2013 ties with 2003 as the fourth warmest year globally since records began in 1880. The annual global combined land and ocean surface temperature was 0.62°C (1.12°F) above the 20th century average of 13.9°C (57.0°F). Only one year during the 20th century—1998—was warmer than 2013.
0.62°C + 13.9°C = 14.52°C
http://www.ncdc.noaa.gov/sotc/global/201313
(5) 2014 annual global land and ocean surfaces temperature “The annually-averaged temperature was 0.69°C (1.24°F) above the 20th century average of 13.9°C (57.0°F)= 0.69°C above 13.9°C => 0.69°C + 13.9°C = 14.59°C
http://www.ncdc.noaa.gov/sotc/global/2014/13
(6) 2015 – the average global temperature across land and ocean surface areas for 2015 was 0.90°C (1.62°F) above the 20th century average of 13.9°C (57.0°F)
=> 0.90°C + 13.9°C = 14.80°C
http://www.ncdc.noaa.gov/sotc/global/201513
Now for 2016, they report the average temperature across the world’s land and ocean surfaces was 58.69°F (14.83°C).
https://www.washingtonpost.com/news/energy-environment/wp/2017/01/18/u-s-scientists-officially-declare-2016-the-hottest-year-on-record-that-makes-three-in-a-row/?utm_term=.31f17d68fcf5#comments
So the results are 16.92 or 17.04 >> 14.52 or 14.52 or 14.59 or 14.80 or 14.83, using data written at the time.
Thanks to Nick at WUWT for the original find.
Which number do you think NCDC/NOAA thinks is the record high? Failure at 3rd-grade math, or failure to scrub all of the past? (See the ‘Ministry of Truth’, 1984.)
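
DD More’s conversions are easy to check; the arithmetic is just Fahrenheit-to-Celsius plus anomaly-plus-baseline. A quick sketch using the figures quoted above:

```python
# Checking the conversions quoted above: Fahrenheit to Celsius for the
# absolute values, and anomaly + 13.9 °C baseline for the later reports.
def f_to_c(temp_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (temp_f - 32.0) * 5.0 / 9.0

print(round(f_to_c(62.45), 2))   # 1997 report: 16.92 °C
print(round(f_to_c(58.69), 2))   # 2016 report: 14.83 °C

# Later reports quote anomalies against a 13.9 °C 20th-century base:
for year, anom in [(2010, 0.62), (2013, 0.62), (2014, 0.69), (2015, 0.90)]:
    print(year, round(13.9 + anom, 2))
```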

R. Shearer
Reply to  DD More
January 19, 2017 6:46 pm

Please don’t confuse the issue with actual numbers/data.

Reply to  DD More
January 19, 2017 8:41 pm

Regarding the numbers posted by DD More on January 19, 2017 at 1:22 pm:
1997 temperature 16.92°C, anomaly .51 C means the 20th century average as of then was 16.41 C. The 20th century average would also have to be 16.41 C in order for the 1998 anomaly to be .63 C and the temperature to be 17.04 C.
But for 2010, 2013 and 2014 the 20th century average was stated as 13.9 C.
So, I did a bit of factchecking. I looked at the first link mentioned by DD More above, http://www.ncdc.noaa.gov/sotc/global/1997/13
One thing I saw there was: “Please note: the estimate for the baseline global temperature used in this study differed, and was warmer than, the baseline estimate (Jones et al., 1999) used currently. This report has been superseded by subsequent analyses. However, as with all climate monitoring reports, it is left online as it was written at the time.”
Anomalies are easier to determine than absolute global temperature, as mentioned in https://wattsupwiththat.com/2014/01/26/why-arent-global-surface-temperature-data-produced-in-absolute-form/

bw
January 19, 2017 11:47 am

Using USCRN, 2012 was an anomaly, not part of any multiyear trend.
The USCRN data are reported in degrees Fahrenheit.
Now, plot the entire monthly USCRN temperature record in kelvins, with statistical confidence intervals (error bars) of +/- 0.1 on a Y-scale with a range of 20 kelvins. Certainly no climate change in the entire USCRN record. Current temps are no different from the zero centered mean. The slope of the entire least squares trend line is not significantly different from a line of zero slope. Not that those facts are significant. USCRN stations came online over the period of 2002 to 2006, so only 10 years of full data.
Also, Alaska and Hawaii should be excluded, leaving the lower 48, since they belong to different climate regimes. Why fold polar and tropical climates into a larger continental temperate zone? It’s deceptive.
When the USCRN reaches 40 full years of data, it will become a reasonable proxy for Northern Hemisphere temperate zone surface temperature trends.
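
The plot bw describes is straightforward to make. A rough sketch, assuming the monthly USCRN anomalies (in °F) have already been loaded into arrays; the variable names and the loading step are illustrative only:

```python
# A sketch of the plot described above, assuming monthly USCRN anomalies in
# degrees F are already loaded into `months` (decimal years) and `anom_f`.
import numpy as np
import matplotlib.pyplot as plt

def plot_uscrn_kelvin(months, anom_f):
    anom_k = np.asarray(anom_f) * 5.0 / 9.0      # °F anomaly -> kelvin anomaly
    plt.errorbar(months, anom_k, yerr=0.1, fmt=".", markersize=3)
    plt.ylim(-10, 10)                             # a fixed 20-kelvin span
    plt.ylabel("Anomaly (K)")
    plt.title("USCRN monthly anomaly, ±0.1 K error bars")
    plt.show()
```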

AndyG55
Reply to  bw
January 19, 2017 1:29 pm

When we graph USCRN with RSS and UAH over the US, we see that USCRN responds slightly more to warming surges.
As it is, the trends for all are basically identical and basically ZERO. (The USCRN trend was exactly parallel with RSS and UAH (all zero trend) before the slight El Niño surge starting mid-2015.) [linked graph]

AndyG55
Reply to  AndyG55
January 19, 2017 1:30 pm

and yes… all data has been converted to Celsius.

Latitude
Reply to  AndyG55
January 19, 2017 4:49 pm

Thanks Andy!

Reply to  AndyG55
January 19, 2017 9:42 pm

Degree C per decade trends if I am eyeballing the linear trends in AndyG55’s graph correctly, and if these linear trends are correct:
USCRN: .46 degree C / decade
UAH USA48: .18 degree C / decade
UAH USA49: .3 degree C / decade
RSSUSA: .12 degree C / decade
This sounds like it supports a contention that the satellite-measured lower-troposphere temperature trend underreports surface warming.
Something else notable: USCRN spikes more than the satellite-measured lower troposphere over the US does, and the biggest two spikes in USCRN are not from El Niños. And in early 2010, US temperature was running below or close to the trend lines, while global temperature was at its second highest during the period covered due to an El Niño. Smoothing this graph by 5 months will improve correlation with global temperature and ENSO activity somewhat, but it will still show the USA as not correlating very well with the world over a 12-year period. Also notable is that the US covers 1.93% of the world’s surface, which can explain why US temperatures do not correlate well with global temperatures over a period of only 12 years.

Nick Stokes
Reply to  AndyG55
January 19, 2017 9:59 pm

“Degree C per decade trends if I am eyeballing the linear trends”
Yes, the trend for USCRN of 0.0468 °C/year = 4.68 °C/Cen is marked on the graph, and is very fast warming indeed. And a lot more than the satellites.

AndyG55
Reply to  AndyG55
January 20, 2017 2:57 am

Only a TOTAL MATHEMATICAL ILLITERATE extrapolates a short term small trend out to a century trend.
But it is Nick, so that is to be expected.

Nick Stokes
Reply to  AndyG55
January 20, 2017 9:13 am

“Only a TOTAL MATHEMATICAL ILLITERATE extrapolates a short term small trend out to a century trend.”
Only a mathematical illiterate would fail to understand that 0.0468 °C/year and 4.68 °C/Cen are simply two exactly equivalent ways of expressing one number, and it is the number written on your graph. The latter merely uses more familiar units.
If you are sprinting at 20 miles per hour, it doesn’t mean you are going to run 20 miles in the next hour.

tony mcleod
Reply to  bw
January 21, 2017 3:42 am

That’s a bit funny. Shouting it just made it funnier.

Pamela Gray
January 19, 2017 11:47 am

ONLY in climate science do we have made-up data. If this were a new drug the researchers would have their license to practice permanently revoked. I’ve done research and published. Making up data, even for my small study, would have been grounds for dismissal and revocation of three licenses to practice, plus being barred from public employment for life.

Bruce Cobb
Reply to  Pamela Gray
January 19, 2017 12:43 pm

Hide the decline!

Simon
Reply to  Pamela Gray
January 19, 2017 6:06 pm

So Pamela, if you think the data is “made up”…. you will easily be able to point me to a study that agrees with you. Till then you are just blowing hot air and we have enough of that.

Pamela Gray
Reply to  Simon
January 19, 2017 7:42 pm

Smudging and smearing a measurement taken in one area and assigning it to an unmeasured area is making data up. I took direct measurements along the 7th cranial nerve. So let’s say that because I only used 6 subjects, I decided to make it look like I had more. Since in general the brain stem portion of the 7th cranial nerve is similar from one human to another, I decided to assign an actual measurement to a “similar” human subject but didn’t actually measure that similar human. That would have been all I needed to do to end my career over and out. So why do climate scientists get a pass with the Arctic?

Nick Stokes
Reply to  Simon
January 19, 2017 9:46 pm

” So let’s say that because I only used 6 subjects”
Already you are sampling, and that is what is happening here. You actually want to know what effect your drug will have on the entire target population. CS want to know what the temperature is of the whole earth. You can’t test everyone, so you choose a sample. You propose that will tell you about the population, with known uncertainty. CS look at a sample of Earth temperatures, and propose that will tell them about the places unsampled, again with uncertainty. The processes are exactly analogous. You have sampling so built in to your science that you don’t even think about it. But you think it is improper in CS. Why?
The scientists aren’t inflating their sample. They are doing what you do routinely – working out a population average from a sample average.

Reply to  Nick Stokes
January 20, 2017 5:54 am

But you think it is improper in CS. Why?

Because the sampled temperatures are not a random sample.

RW
Reply to  Simon
January 19, 2017 11:22 pm

Pamela is correct. They are imputing data. Contrary to what Stokes claims, it is not sampling at all. Imputing data is serious business, with textbooks written on the subject. Spatial smoothing as a means to filter away high-frequency noise (and signal) is not necessarily wrong, but in this case smoothing for the purpose of imputing data is laughably effed up. Sorry, but given the kind of complexity evident in the Earth’s temperature in the samples and the gigantic swathes of missing data at the poles, data is not something you can just in-fill with some half-baked frankensteinian spatial smoothing kernel to give you some uniformly roasting anomalies. Total junk product.

Jared
Reply to  Simon
January 20, 2017 6:43 am

Nick Stokes, I live in a 6,000 person town surrounded by farmland, and in the middle of winter we see from 3 to 12 degrees Fahrenheit UHI in the middle of town compared to just 1 mile away outside of town. So the temp can be 0 degrees in the middle of town and it’s -12 just outside of town. Then just three days later the UHI might only be 3 degrees different. How in the world do you get an accurate measurement, AND NOT MAKE UP DATA and get the correct temperature, (0 or -12) and 3 days later was it (6 or 9)? If you pick it by nearby measurements then you are making up data. Pamela is 100% right. And you might want to know UHI is pretty big in tiny rural towns even though you and your buddies want to pretend rural towns really don’t get UHI.

Nick Stokes
Reply to  Simon
January 20, 2017 9:25 am

“How in the world do you get an accurate measurement”
Who is claiming to create an accurate measure in your town? What they do want is an estimate of the average anomaly in your region, which will then go into the global average. Not the temperature, the anomaly. If your town is having warm weather, the countryside probably is too. UHI doesn’t change that.

Jared
Reply to  Simon
January 20, 2017 10:58 am

100% wrong Nick Stokes. How are you supposed to know the anomaly when you don’t even know if it was -12 degrees or 0 degrees. That is a huge difference. My town went from 1000 people in 1900 to 6000 today. Most likely the anomaly you think you are measuring is actually UHI. Maybe you should try doing some field work and taking actual measurements and figuring out what UHI is doing to the actual record. So what is your algorithm to correct for UHI in towns of only 6,000 people? I hope you are correcting for that 12 degrees of UHI you are measuring.

Nick Stokes
Reply to  Simon
January 20, 2017 2:42 pm

“How are you supposed to know the anomaly when you don’t even know if it was -12 degrees or 0 degrees.”
The anomaly is the difference between what you measured that month, and some average of the history of monthly readings at that site. Both of which you know. The temperature may be affected by UHI, altitude, hillside placement, whatever. The anomaly calculation is the same.
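
For clarity, here is a minimal sketch of the per-station anomaly calculation Nick Stokes is describing: each month’s reading minus that station’s own long-term mean for the same calendar month. The 30-year baseline chosen here is arbitrary and only for illustration.

```python
# Minimal sketch of the per-station anomaly calculation described above:
# anomaly = this month's reading minus the station's own long-term average
# for the same calendar month. The 30-year baseline here is illustrative.
import numpy as np

def monthly_anomalies(temps, base_len=30 * 12):
    """temps: 1-D array of monthly means for one station, oldest first."""
    temps = np.asarray(temps, dtype=float)
    anoms = np.empty_like(temps)
    for m in range(12):                      # one baseline per calendar month
        base = temps[:base_len][m::12]
        anoms[m::12] = temps[m::12] - np.nanmean(base)
    return anoms
```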

Jared
Reply to  Simon
January 21, 2017 5:00 am

Yes, and in 1900 my town had 1,000 people and today it has 6,000 people, So you are comparing 2017 with an extra 12 degrees due to UHI with 1900 that may have had an extra 3 degrees due to UHI and saying the anomaly says it’s warmer by 9 degrees even though the temp was actually the same and UHI caused all the difference. How in the world are you possibly correctly adjusting for that?

Tom in Indy
Reply to  Pamela Gray
January 20, 2017 9:10 am

Nick Stokes January 19, 2017 at 9:46 pm
You can’t test everyone, so you choose a sample. You propose that will tell you about the population, with known uncertainty. CS look at a sample of Earth temperatures, and propose that will tell them about the places unsampled, again with uncertainty. The processes are exactly analogous.
No, you could tell us the average temperature of the earth based only on sampled sites. This is much different than extending the measurements from sampled sites to unsampled sites and then claiming that average temperature is a weighted average of sampled sites + unsampled sites. Worse yet, the result is widely reported as if it is the actual average from sampling.
The unsampled result is an “inferred” average temperature. You are using inference and assumptions to change the average of your sample into something other than the average of the sample.

Nick Stokes
Reply to  Tom in Indy
January 20, 2017 9:19 am

“No, you could tell us the average temperature of the earth based only on sampled sites.”
That is exactly what they do. They don’t actually create “unsampled sites”, though it wouldn’t matter if they did. GISS has a rather complicated way of averaging over regions (boxes) and then combining those averages. But the end effect, however you do it, is an area-weighted average of the original data.
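
A minimal sketch of what such an area-weighted average amounts to, with cos(latitude) weights standing in for cell area; this is illustrative only, not the GISS or HadCRUT code. How empty cells are handled is exactly the point argued over further down the thread; here they are simply skipped.

```python
# Minimal sketch (not the GISS code) of an area-weighted global mean from a
# gridded anomaly field: each cell is weighted by cos(latitude), roughly
# proportional to its area, and cells with no data (NaN) are skipped.
import numpy as np

def area_weighted_mean(grid, lats):
    """grid: 2-D array (n_lat, n_lon) of anomalies, NaN where missing.
    lats: 1-D array of cell-centre latitudes in degrees."""
    weights = np.cos(np.radians(lats))[:, None] * np.ones(grid.shape)
    covered = ~np.isnan(grid)
    return np.sum(grid[covered] * weights[covered]) / np.sum(weights[covered])
```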

January 19, 2017 11:52 am

“A gang is often where a coward goes to hide.”
Anonymous

RERT
January 19, 2017 11:55 am

The map of stations makes me wonder: is the distribution close enough to that of people for a simple arithmetic average of station temps to approximate a population density weighted average? Even if it doesn’t, the latter is surely an interesting statistic: the temperature people are actually feeling.

tony mcleod
January 19, 2017 11:56 am

Try this interpolation.
https://youtu.be/c6jX9URzZWg

MarkW
Reply to  tony mcleod
January 19, 2017 12:06 pm

Which is precisely what happens every time the PDO goes into a warm phase. That goes double for the year or so following a major El Nino.
As always, the trolls try to take a perfectly normal event and try to use it to prove their pet religion.

tony mcleod
Reply to  MarkW
January 19, 2017 12:49 pm

Wrong on three counts. That’s a hat-trick where I come from.

AndyG55
Reply to  MarkW
January 19, 2017 1:38 pm

LOL, and there is McClod batting zero from 100.

MarkW
Reply to  MarkW
January 19, 2017 2:11 pm

tony, why do you persist in demonstrating how you have no knowledge of how even the most basic of climate processes work?

jimmy_jimmy
Reply to  tony mcleod
January 19, 2017 12:11 pm

so….10y of arctic ice data on planet 3 billion yrs old

MarkW
Reply to  jimmy_jimmy
January 19, 2017 2:12 pm

A bit over 4.5 Billion.

SMC
Reply to  jimmy_jimmy
January 19, 2017 6:11 pm

A billion here, a billion there…Who’s counting. 🙂

John F. Hultquist
Reply to  jimmy_jimmy
January 19, 2017 10:11 pm

The Central American Seaway closed several million years ago, giving us the Isthmus of Panama (say about 3 M to 2.7 M years). Prior to that closure, with our 2 great oceans connected, might we expect the Arctic Ocean to have been under some different influences?
The full age of Earth is thought to be 4.54 ± 0.05 billion years.
That ± 0.05 billion is a wide range.
None of these numbers is of much interest when writing about the ice floating on the Arctic Ocean.

Chimp
Reply to  tony mcleod
January 19, 2017 12:12 pm

What wildlife and people have suffered any harm there from “climate change” since 1990?

Harry Passfield
Reply to  Chimp
January 19, 2017 12:49 pm

You mean, not including those who were harmed by the ‘mitigation’ for AGW by those who think they know better?

Chimp
Reply to  Chimp
January 19, 2017 2:14 pm

Yes. Those poor people and other living things suffering the tyranny of the anti-life Green Meanies would surely vote for more fossil fuel pipelines and drilling if allowed, to include caribou and bears.

Darrell Demick
Reply to  Chimp
January 19, 2017 2:15 pm

Great question, Chimp. Suffering can take on many forms ……
My perusal of the internet identifies that between 140,000 and 300,000 birds are killed annually by wind farm turbine blade strikes in the United States. Based on the fines imposed on the Syncrude Oil Sands project in 2010, C$3 million (US$2.25 million – current exchange rate) for the very unfortunate death of 1,606 ducks and geese that landed in a tailing pond when the deterrent system was inoperative, by extrapolation wind farms should be paying between US$195 million to US$420 million in fines.
ANNUALLY!!! I am sure that the majority of these birds are sparrows and swallows, however birds of prey who are on the “watch” or “endangered” list are killed trying to capture and eat the smaller birds.
This is the cost of PERCEIVED climate change, killing wildlife with no consequence to the power generation companies (merely swept under the carpet), aside from forcing unreliable, overly-expensive renewable energy onto the consumer.

Reply to  tony mcleod
January 19, 2017 12:30 pm

Everybody implies that the Arctic is melting. When you look at this animation, it becomes very clear that the older ice is not ‘melting’; rather it is flowing out of the Arctic between Greenland and Spitsbergen. This is due to Arctic Ocean currents, not some massive climate change.
I don’t know what else you can “prove” with this animation, but it seems patently obvious that it is not supporting the claims.

AndyG55
Reply to  lorcanbonda
January 19, 2017 1:38 pm

Basically, it is only the Kara Sea that is struggling to build ice this year.
There is a perfectly ordinary jet stream wobble with a high pressure system over the Kara sea causing a slightly “less cold” region.
But that means there is a “more cold” anomaly drifting about elsewhere, as many in the USA and Europe have discovered.
ie ITS WEATHER !!!!!

AndyG55
Reply to  lorcanbonda
January 19, 2017 2:06 pm

Seems to be a mix of sea temp and air temp.. at least in this case.
Wind has some effects too , like compacting or spreading
http://cci-reanalyzer.org/wx/DailySummary/#T2
Look at the link. see what you think.

MarkW
Reply to  lorcanbonda
January 19, 2017 2:13 pm

All the troll cares about is that there is less ice now, why that may be isn’t relevant.
In its mind, less ice is proof that CO2 is going to kill us. Any other explanations just show that you reject science.

AndyG55
Reply to  tony mcleod
January 19, 2017 1:33 pm

McClod.. Did you know that during the first 3/4 of the Holocene, Arctic sea ice was often zero in summer?
McClod… Did you know that 1979 was up there with the EXTREMES of the LIA?
McClod… do you have any historic perspective whatsoever?

Zachary H
Reply to  tony mcleod
January 19, 2017 3:14 pm

Age of ice is utterly irrelevant to polar flow conditions because it’s a flow. THIS IS POLAR UNDERSTANDING 101: POLAR ICE CYCLES OUT and NEW ICE CYCLES IN to the point that the NORTH POLE is NEARLY ICE FREE at TIMES.
You’re f****g INSANE if you thought that was important. Again: THIS is POLAR UNDERSTANDING 101: SPIN of the EARTH and total conditions often suck warm air up and over the ENTIRE NORTH POLE AREA and there has never been a time known when men were there, that this hasn’t been the case.
Your own graphic PROVES the POINT: it’s a GIANT SPINNING BOWL with an OUTLET: and from year to year it’s simply a matter of W.I.N.D.S. whether ANY ice survives. Entire regions of the polar north are often seen either very lightly iced or not iced at all, to then fill in and never be seen that low for a decade or more.
The notable thing about almost all climate freakers is the stunning lack of grasp on even fundamentals of scientific reality.

Pamela Gray
Reply to  tony mcleod
January 19, 2017 3:29 pm

Must I continue to point out the obvious? At the peak of our current interstadial period I have no choice but to accept the null hypothesis: current warming patterns are to be expected under natural conditions. The low ice conditions are naturally occurring phenomena in an interstadial peak period. Unfortunately the lack of Arctic ice, even ice-free conditions, indicates an imminent (likely within the next few thousand years) stadial slide. It is up to you to tell us why, based on previous paleodata, something has changed. And don’t go on about CO2. It is always high at the peak and always low at the trough. One more thing. Don’t bring up minute changes in CO2 parts per million unless you can show that there is enough energy in the anthropogenic portion to stem the enormous cyclical periods of the past 800,000 years.

tony mcleod
Reply to  Pamela Gray
January 21, 2017 3:57 am

What has changed is the rapidity of the change. The probability that the warming of the past few decades is just natural variation is not zero and it’s not 100%, but it is lessening with each warmer-than-average year.
I don’t think anyone would be too concerned if the Arctic took a few thousand years to thaw. It’s happening a bit quicker than that.

Bill Illis
Reply to  tony mcleod
January 19, 2017 4:28 pm

The Arctic Ocean was NOT 6C above normal in 2016.
Do you know what would have happened had the Arctic Ocean been 6.0C above normal. The sea ice would have completely melted out in early June and it would not have frozen back at all until November.
These numbers are completely physically impossible and somebody needs to pay a price for such BS. I propose a 3 month salary package. Somebody should explain this to Trump.

Bill Illis
Reply to  tony mcleod
January 19, 2017 4:43 pm

This episode reminds me of the big Crack in the Larsen C ice-shelf which has been in the news recently.
Do you know that this crack has been there for probably 100 years. The ice-shelf expands out over a mountain chain that goes below sea level half way out on the shelf. As the shelf moves out, big pressure ridge cracks form up every 5 kms or so along the mountain chain.
As the ice-shelf moves out to sea, the cracks go along with it. There are at least 6 very similar cracks coming in right behind the big crack that is the subject of the current news.
This big crack has moved 20 kms out to sea since 1984 and going by the way it moves out to sea, it probably formed at least 100 years ago.
The shelf has actually grown by about 20 kms in the last 32 years. That is the opposite of “melting”. It is growing. Yeah, it is going to produce an ice-berg soon since it has expanded so far out to sea. WHY did no story produced by the “climate scientists” talk about this.
Before your own eyes, the area in question. The far right crack is the one they are talking about but there are at least 6 more coming behind it and going by where they form for the first time, it has taken at least 100 years for the crack in question to get to where it is.
https://earthengine.google.com/timelapse/#v=-68.2413,-62.12917,6.717,latLng&t=0.14

January 19, 2017 11:59 am

As for the Continental USA, which has fantastically dense thermometer coverage as seen above

Until you zoom in, and then you see many are 100 – 200 miles apart.

MarkW
Reply to  micro6500
January 19, 2017 12:06 pm

Especially in the southwest.

Kurt in Switzerland
January 19, 2017 12:00 pm

What a tangled web we weave…

Brett Keane
Reply to  Kurt in Switzerland
January 19, 2017 1:59 pm

@Kurt: the Gordian Knot solution is about to be applied…..

Reply to  Brett Keane
January 19, 2017 4:15 pm

In the US. Possibly the UK. Still leaves a lot of knot.

Hivemind
Reply to  Brett Keane
January 20, 2017 4:08 am

Can I borrow the sword? The Australian BOM needs to be cut.

Eric Simpson
January 19, 2017 12:04 pm

At first look, the Arctic interpolations look like pure bs. As do ALL their other data manipulations, which in almost every case conveniently cool the past and warm the present. Worst for them is that in a private email the Chicken Littles were exposed as willfully desiring to manipulate the temperature record to further their leftist cause:

ClimateGate email: Warmist Tom Wigley proposes fudging temperature data:
2009: “If you look at the attached plot you will see that the land also shows the 1940s blip.. So, if we could reduce the ocean blip by, say, 0.15 degC, then this would be significant for the global mean … [Tom Wigley, to Phil Jones and Ben Santer] http://tomnelson.blogspot.com/2011/12/climategate-email-warmist-tom-wigley.html

Lo and behold, after that email … THE “BLIP” WAS REMOVED!
These ideologically driven doomsayers are guilty as sin. It was warmer in the 1930s. There’s no global warming.
Their temperature record is pure garbage. So is their endless fear-mongering. Over and over again the top alarmists have been caught telling their acolytes that they all should offer pure lies in shameless fear-mongering:

“We have to offer up scary scenarios… each of us has to decide the right balance between being effective and being honest.” -Stephen Schneider, lead IPCC author, 1989

Their data is lies. Their scare-mongering about the future is lies.

J. Szenasi
Reply to  Eric Simpson
January 19, 2017 6:11 pm

Tom Wigley the Englishman is another fraud who very rapidly went to ground and hid out awaiting the day when his grants scams and infrastructure documents falsification were uncovered. He and Jones and Hansen and Mann were some of the inner 14 to 18 government employees who were enriching themselves using the scam when Al Gore’s movie came out – their bluff was called – and the world began to examine the FRAUD THEY HAD BEEN PERPETRATING for over TWO DECADES related to COMPUTER FRAUD
getting GRANTS
to use GOVERNMENT SUPERCOMPUTERS under their OWN OVERSIGHT during BUSINESS HOURS
paying the RENT (electricity to run the computers) with the GRANTS
POCKETING the REST.

January 19, 2017 12:07 pm

What the temperature data managers would like to tell you:

Pay no attention to that data manager behind the curtain !

https://www.youtube.com/watch?v=YWyCCJ6B2WE

Dave
January 19, 2017 12:07 pm

So what about HadCRUT? They don’t do any infilling of areas where there are no stations; they just take an average based on places where there are measurements. What does that say?

Stephen Richards
Reply to  Dave
January 19, 2017 12:34 pm

But I think they use NOAA NASA data for everywhere except the UK

Nick Stokes
Reply to  Dave
January 19, 2017 2:57 pm

HADCRUT uses CRUTEM data on land. But arithmetically, it is always interpolated. You have to calculate the spatial integral of a function defined by a finite set of values (which is all you’ll ever have). You can interpolate and integrate that. Or something else. It makes no difference. It always ends up as an area-weighted average.

tetris
Reply to  Nick Stokes
January 19, 2017 5:19 pm

Nick
Ever considered the elementary and scientifically sound option of acknowledging that we have no reliable data for the polar regions and working only with what verifiably trustworthy data we actually have?

Nick Stokes
Reply to  Nick Stokes
January 19, 2017 6:47 pm

“Ever considered the elementary and scientifically sound option of acknowledging that we have no reliable data for the polar regions”
You never have perfect data. And in practical continuum science, you have only a finite number, often small, of measurement points, and everywhere else, everywhere, has to be estimated. Usually interpolated. So the elementary requirements are:
1. Use the best estimate you can, using nearby data where possible.
2. Figure out how much uncertainty arises in the integral (average) from that estimation.
And that is what they do. When GISS etc quote a 0.1C error, or whatever, that is almost entirely “coverage”. Which is the uncertainty of interpolation.
The elementary and scientifically sound option is to read what they actually do, and why.

Udar
Reply to  Nick Stokes
January 19, 2017 9:04 pm

1. Use the best estimate you can, using nearby data where possible.
2. Figure out how much uncertainty arises in the integral (average) from that estimation.

But we don’t have any nearby data at the poles, and as a result we cannot even estimate the uncertainties that arise! Neither 1 nor 2 applies for Arctic/Antarctic areas, and those are where most of the warming is found.
Why not exclude them completely? Or at least provide 2 products, one of them with pole areas excluded?

Nick Stokes
Reply to  Nick Stokes
January 19, 2017 9:36 pm

Udar,
“Why not exclude them completely?”
You can’t exclude them and claim a global average. If you just leave numbers (eg grid cells) out of an averaging process, that is equivalent to assigning to them the value of the average of the others. You can see that that would give the same answer, and it becomes your estimate. And then you can see that it isn’t the best estimate, by far. I go into this in some detail in this post and its predecessor. It’s better to infill with the average of the hemisphere, or somewhat better, the latitude band. But best of all is an estimate from the nearest points, even if not so near. Then you have to calculate the uncertainty so that you, and your readers, can decide if it is good enough. But it’s your best.
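
Nick Stokes’ claim that leaving cells out of the average is arithmetically the same as infilling them with the average of the covered cells can be checked in two lines:

```python
# The equivalence stated above, checked numerically: dropping empty cells
# from the average is the same as first infilling them with the average of
# the covered cells.
import numpy as np

covered = np.array([0.2, 0.4, 0.9])            # cells with data
infilled = np.append(covered, covered.mean())  # empty cell set to the mean
print(covered.mean(), infilled.mean())         # both 0.5
```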

RW
Reply to  Nick Stokes
January 19, 2017 11:37 pm

Udar pointed out that the in-filling in the Arctic was made up entirely of large positive anomalies. That just doesn’t pass the eye test when you look at the high spatial frequencies (rapid changes in temperature across short distances) evident in the ‘raw’ data. Too few real data points to make infilling on that scale reliable.

John Endicott
Reply to  Nick Stokes
January 20, 2017 5:26 am

And when the majority of your “trend” lies (in all meanings of the word) in that made up data, to claim it’s the “best estimate” doesn’t even pass the laugh test.

Dave
Reply to  Nick Stokes
January 20, 2017 5:30 am

“You can’t exclude them and claim a global average.”
Nick, I take your point that HADCRUT still always uses an interpolation function within the grids that they actually include data for. But they treat grids for which there is no data differently to BEST and GISS etc. in that they leave these out altogether. Many of the missing grids are in land areas with low population – particularly at high latitudes. They don’t claim that their average is truly global. The satellite data confirm that it is these very areas that show the most warming, and that bias probably explains both why CRUTEM shows less warming than the land surface data sets (and also why their trend creeps upwards as new stations do get added from those sparse regions). For me, the telling thing is that even with the missing grids, their data still shows 2016 to be another record.

Udar
Reply to  Nick Stokes
January 20, 2017 6:41 am

You can’t exclude them and claim a global average.
Well, I don’t really see how you can claim global coverage by including made-up data.
I think that the answer to your dilemma of “global coverage” is very simple and pretty obvious.
You just can’t claim global coverage, period. Anything else is being dishonest.
I understand your argument about why infilling is good – but you are missing a very big and very important point in comparing infilling of one point among many known vs. infilling many unknown points from very few known.
The problem is essentially undersampling – you are not getting enough data near the poles to sample above the Nyquist rate.
In this case your results could be completely wrong – and you have no way of knowing if they are. I am not sure how you can claim that making up aliased data is better than not – as an EE with a lot of DSP experience, this goes against everything I’ve learned and done in my 30 years of work.
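
Udar’s aliasing point is easy to illustrate with a synthetic one-dimensional example: sample a spatial wave below its Nyquist rate and the samples are indistinguishable from a much smoother pattern, with nothing in the sampled data itself to flag the error. (Whether real Arctic anomaly fields contain such unresolved structure is the actual disagreement; the numbers below are purely illustrative.)

```python
# Synthetic illustration of the aliasing point: a spatial wave with 9 cycles
# across the domain, sampled at only 10 points, is indistinguishable from a
# 1-cycle wave. Nothing in the 10 samples reveals the error.
import numpy as np

x = np.linspace(0, 1, 10, endpoint=False)      # 10 sample points (< 2*9 needed)
samples = np.sin(2 * np.pi * 9 * x)            # the true, rapidly varying field
alias = np.sin(2 * np.pi * -1 * x)             # a slow wave (9 aliases to -1)
print(np.allclose(samples, alias))             # True: the two are identical
```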

Jim Gorman
Reply to  Nick Stokes
January 20, 2017 8:44 am

A global average is a made up figure anyway. So if you leave off the poles, does that mean it is a less or more made up average?
Here is my take: 1) leave the poles off, 2) if the remainder of the world is warming, I’ll bet the poles are too, 3) if the rest of the world is cooling, then I’ll bet the poles are too.

tetris
Reply to  Nick Stokes
January 21, 2017 5:52 pm

Nick,
You still don’t get it. If there is no data don’t extrapolate/interpolate or otherwise fabricate it – simply admit there is no reliable data. The problem with that approach is of course that one loses the ability to claim exceptionally high rates of warming.
Even with the sophisticated plug-ins available, interpolating data from a complex picture file in Photoshop will give you a shitty result. And if the scientists in bioengineering I’m associated with were to extrapolate “results” from incomplete data sets and tell me they’re “hard data” they would be fired.
Mind you, that’s the real world with peoples’ lives at stake, not climate “science”…

Bindidon
Reply to  Nick Stokes
January 22, 2017 10:12 am

To all those people having a strangely skeptical attitude against infilling of data by interpolation (using e.g. kriging), I would like to say you are all far from reality.
Thousands of engineers working in several different disciplines (e.g. mining, road construction, or graphics support using Bézier splines) are confronted with the problem of having far less data than needed. And for all these engineers (me included), lacking the ability to interpolate would (have) mean(t) the inability to solve their daily problems.
One of many hints to the stuff:
http://library.wolfram.com/infocenter/Courseware/4907/
*
Moreover, many of you seem to think that we have no valuable temperature data in the polar regions to use as infilling sources, just because there would be so few of them.
This opinion is imho valid solely for the Antarctic. For the Arctic I do not accept it, because we have on land about 250 GHCN stations within 60N-70N, about 50 within 70N-80N, and – yes – only 3 above 80N.
Before you start laughing loud at me and my opinion you might consider stupid, you should first have a tight look at UAH6.0’s 2.5° tropospheric grid data in these Grand North latitudes.
You would discover that the average trend for the satellite era (1979-2016) in the latitude stripe 80N-82.5N is 0.42 °C / decade, and that its ratio to the average trend for the three 80N-82.5N GHCN V3 stations (0.70 °C / decade) is about the same as at many places on Earth.
*
Last but not least, let me show you a graph depicting – again in the UAH context – how little data one sometimes needs to get an amazing fit to what was constructed out of much more (I was inspired by a post written by commenter “OR” alongside another guest post here at WUWT).
UAH’s grid data consists of monthly arrays of 144 x 66 grid cells (there is no data for the latitudes 82.5S-90S and 82.5N-90N). OR had the idea of comparing the trend computed for n evenly distributed, equidistant grid cells with the trend computed for all cells.
Here you see plots for two selections of 16 resp. 512 cells compared with that for all 9,504 cells:
http://fs5.directupload.net/images/170122/fiarnz6h.jpg
It is simply incredible:
– with no more than a laughable 16 cells you reproduce many ENSO peaks and drops almost perfectly;
– with 512 cells (about 5% of the set) you get a linear trend differing from the original by 0.04 °C per decade, and the 5-year running means overlap amazingly well.
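
Bindidon’s subsampling experiment can be sketched with synthetic data standing in for the UAH 144 x 66 grids. The trend and noise levels below are invented; with real data, how well the subsample agrees depends on how spatially correlated the field is, which is exactly the point at issue.

```python
# Synthetic stand-in for the subsampling experiment: a UAH-like 144 x 66
# monthly grid with an imposed trend plus spatially uncorrelated noise.
# Invented numbers; real-data agreement depends on spatial correlation.
import numpy as np

rng = np.random.default_rng(1)
n_months, n_lat, n_lon = 456, 66, 144                  # 1979-2016, monthly
grids = (0.0125 * np.arange(n_months)[:, None, None] / 12.0   # ~0.125 °C/decade
         + rng.normal(0, 0.3, (n_months, n_lat, n_lon)))

def trend_c_per_decade(series):
    t = np.arange(series.size) / 12.0
    return 10.0 * np.polyfit(t, series, 1)[0]

cells = grids.reshape(n_months, -1)
full = trend_c_per_decade(cells.mean(axis=1))            # all 9,504 cells
step = cells.shape[1] // 512
sub = trend_c_per_decade(cells[:, ::step].mean(axis=1))  # ~512 evenly spaced cells
print(round(full, 3), round(sub, 3))                     # typically agree closely
```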

M Courtney
January 19, 2017 12:09 pm

Not sure it’s going to cool.
But it does look more and more as though the current warming is a regional effect around the North Atlantic and the adjacent landmasses.
Global warming it is not.

François
January 19, 2017 12:10 pm

Well, it is cold indeed in Paris these days (-3/4° at night), note that it is not really abnormal. A few years ago, it tended to be a lot worse during the winter. Actually, olive trees bloom in Paris nowadays, they would never have survived fifty years ago. Somebody must have changed the thermometers.

rocketscientist
Reply to  François
January 19, 2017 1:45 pm

Or perhaps the urban landscape of Paris has “changed” to include more release of waste heat from buildings and doorways and Metro vents….
It couldn’t possibly be the most simple and obvious cause….it requires somebody to sneak around the city at night surreptitiously replacing all the thermometers.
Perhaps all the miscreants did was secretly push the glass bulb up a bit higher on scale.

Gloateus Maximus
Reply to  François
January 19, 2017 1:51 pm

Paris was indeed colder 50 years ago, despite over 20 years of increasing CO2. The air was dirtier, too, further adding to the natural cooling cycle.
Warmer temperatures today have little if anything to do with even more plant food in the air. Olives too benefit from more CO2.

MarkW
Reply to  François
January 19, 2017 2:16 pm

50 years ago was near the end of the last cold phase of the PDO/AMO. So it’s hardly surprising that it was colder then.
How much has the population of Paris increased over the last 50 years?
A couple million new people is worth a degree or two all by itself. Not to mention all the heat being consumed by all of the contraptions that didn’t exist 50 years ago.

Brett Keane
Reply to  François
January 19, 2017 2:22 pm

@ François
January 19, 2017 at 12:10 pm: Mature trees are tougher than seedlings

Aphan
Reply to  François
January 19, 2017 3:51 pm

Francois
“A few years ago, it tended to be a lot worse during the winter. Actually, olive trees bloom in Paris nowadays, they would never have survived fifty years ago. Somebody must have changed the thermometers.”
Evidence please. Olive trees can survive up to 15 F (-9.4C) for a limited amount of time. The great frost of 1956 killed off a lot of the olive trees in France (-14C) but that was one year. The average low temperatures in Paris for the past 70 years (that I checked) according to the Paris/Orly Airport records were perfectly fine for olive trees….with the exception that your summers could be warmer and longer for them to really thrive.

Reply to  François
January 19, 2017 4:44 pm

Clearly you think the earth was formed the day you were born.

January 19, 2017 12:10 pm

This is a link to a google Maps kml file for stations north of 66 Lat
https://micro6500blog.files.wordpress.com/2017/01/y_36np_st-kml.doc
The correct file name for Google Maps is y_36np_st.kml; it looks cool. These are stations from 1950 on: as long as a station had one year with more than 360 days of data, it is on the map.

Reply to  micro6500
January 19, 2017 12:13 pm

Probably need to download the file and import it into google maps……

Reply to  micro6500
January 19, 2017 12:16 pm

You can see that most of the far-north stations are near the coast, meaning the air temps carry a lot of ocean influence, especially if the ice has melted.

François
Reply to  micro6500
January 19, 2017 12:48 pm

They always were there, weren’t they?

Reply to  François
January 19, 2017 12:58 pm

I believe so, that’s likely where the settlements would be.

Pat Frank
January 19, 2017 12:16 pm

Why is it so hard to figure out that, during recovery from a cold period (the LIA), there will be a continuing series of years with “record heat”? This is even more obviously true when the measurement record begins right at the end of the cold period.
Dick Lindzen pointed that out years ago, and it’s been studiously ignored by the Justin Gillises of the world ever since. Ignorance of something that obvious can only be willful.
These people are beyond stupid, and well into reckless.

François
Reply to  Pat Frank
January 19, 2017 12:56 pm

The year is 2017; the “little ice age” ended circa 1860, roughly a century and a half ago. Do read the papers, stay informed…

Editor
Reply to  François
January 19, 2017 1:15 pm

It took hundreds of years for the warmth of the MWP to decline to the depths of the LIA in the 19th century, known as the coldest period since the ice age.
How long do you think it will take to warm back up again?

Gloateus Maximus
Reply to  François
January 19, 2017 1:20 pm

Paul,
Your point is valid, but the depths of the LIA were in the 17th and early 18th centuries, during the Maunder Minimum.
The Modern Warm Period is still cooler so far than the Medieval WP. We are well within normal limits.

MarkW
Reply to  François
January 19, 2017 2:17 pm

And it’s been warming continuously since then.

NorwegianSceptic
Reply to  François
January 20, 2017 4:28 am

Do you seriously believe that the LIA ended in ONE year?

BCBill
Reply to  Pat Frank
January 19, 2017 1:02 pm

I have wondered if somebody has done an analysis of the rate of record years. With a steadily climbing temperature from the LIA one would expect that record high years would occur reasonably frequently. So has the frequency of record high years gone up???

Reply to  BCBill
January 19, 2017 1:49 pm

BCBill asked:

I have wondered if somebody has done an analysis of the rate of record years. With a steadily climbing temperature from the LIA one would expect that record high years would occur reasonably frequently. So has the frequency of record high years gone up???
Well, over at … http://www.climatecentral.org/gallery/graphics/record-highs-vs-record-lows … , we find this: [linked graph]
… which seems to indicate that the ratio of record highs to record lows has, in fact, gone up. I think this RATIO might be what you are asking about.
Even so, alarmists attributing humans as the cause are perverted wishful thinkers. Humans are merely seeing a shift during their meager life-spans, of an upward trend that started long before human record-keeping of such things began.
Sorry, alarmists, I know you want it to be OUR fault, so that we can have some hope of FIXING it and CONTROLLING it, but we cannot take credit for the cause, cannot take responsibility for the fix, … must face the reality that nature is controlling US, and we better adapt, live life as intelligently as we can, do our good deeds for the RIGHT reasons, and stop all this hysteria over what has been going on long before our little brains (with big egos) came into the picture.

Reply to  BCBill
January 19, 2017 1:52 pm

Crap, crap, crap !! … I messed up my closing tag !! Everything was NOT supposed to be in that block quote, … just BCBill’s question.

Reply to  BCBill
January 19, 2017 3:46 pm

@ R Kernodle…What my eyes see when looking at that graph is the clear sign of warming as the dominant trend up to the mid 1940s. Then a cooling trend sets in and lasts to around 1976/77. Lastly, the next warm trend takes off at the end of the 1970s with the warm temp ratio dominating up until close to the end. That looks cyclical. Note that 2008/09 dropped close to a cool temp ratio. I would expect the next several decades to favor the cool temp ratio, if the pattern is cyclical.

BCBill
Reply to  BCBill
January 19, 2017 4:43 pm

Robert, thanks for getting back to me. That isn’t quite what I was looking for. If you imagine a straight line of temperature rising, then each year is a record. Since yearly temperature bounces around, some years are records and some are not. There is much ado about 2015 being a record, and 2016 being a record, and every year being a record. But my question is more along the lines of: are we having more record-high years now than we had in the past? Given that the temperature has been rising pretty steadily since the LIA, I imagine there are quite a number of record years out there. Has the frequency of record years increased? I would like to know.

BCBill
Reply to  BCBill
January 19, 2017 5:01 pm

I left this note for Anthony but re-posted it here in case anybody can shed any light. This is just more clarification of what I mean about the frequency of “hottest years ever”. With all the hoopla about the hottest year ever and the three hottest years in a row, or whatever, I have been wondering how often we normally get the hottest year evah! For example, 1854 might have been the hottest year evah, and 1855 might have been the hottest evah also, and then 1856 might not have been. Since the temperature of the earth has been going up more or less constantly since the LIA, there must have been quite a few hottest years evahs. Is there some way to determine this? It doesn’t seem like it has been done, as searching for it is turning up nothing. This could be a very nice bit of observational data to release about now. People like to say things like the 10 hottest years recorded happened in the last 15 years (or whatever the actual numbers are), but what if we could say the same thing of the 1930s? Did the 10 hottest years ever up to 1940 happen in the 1930s? If the earth’s temperature has been constantly rising for a while, it would be fairly normal to have a series of hottest years ever in close proximity.
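One way to get a feel for BCBill’s question is a small simulation: with a steady rise plus year-to-year noise, how often does a new “hottest year ever” show up, and do the records cluster toward the end? A minimal sketch, assuming an invented 0.007 C/yr trend and 0.1 C annual noise (nothing here is measured data):

import numpy as np

# Simulated annual temperatures: a steady rise out of the LIA plus year-to-year noise.
rng = np.random.default_rng(0)
years = np.arange(1850, 2017)
temps = 0.007 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

# A year is a "record" when it equals the running maximum up to and including itself.
is_record = temps == np.maximum.accumulate(temps)

print("record years:", list(years[is_record]))
print("records per 50-year block:",
      [int(is_record[(years >= y) & (years < y + 50)].sum()) for y in (1850, 1900, 1950, 2000)])

Under a steady trend records never stop arriving, but they do thin out; whether the observed frequency has actually risen is exactly the empirical question being asked here.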

TA
Reply to  BCBill
January 19, 2017 6:43 pm

BCBill January 19, 2017 at 4:43 pm wrote: “Robert, thanks for getting back to me. That isn’t quite what I was looking for. If you imagine a straight line of temperature rising then each year is a record. Since yearly temperature bounces around, some years are records and some are not. There is much ado about 2015 being a record and 2016 being a record and every year being a record. But my question is more along the lines of: are we having more record high years now than we had in the past?”
No, we are not having more record highs now than in the past. Here’s a chart for you (thanks to Eric Simpson): [chart]

Reply to  BCBill
January 20, 2017 6:58 am

Forgive my density, BCBill, I’m still not quite clear on what specifically you might be asking for, but just to attempt to shed more light, I was looking at the following graph: [graph]
… and what I see is a roughly cyclical progression, from warm to cooler, to warm, where each warm stretch seems to be longer and hotter, … which seems to imply that there IS an overall heat build up that is happening faster.

Reply to  BCBill
January 20, 2017 7:06 am

… and if this IS a cycle, then it looks like we might be teetering on the edge of another drop, and this particular drop could possibly be one of the biggest drops we have seen.
If that big drop DID become apparent over the next few years, then I think that this would drive the final nail into the coffin of the warming claim pushed over the past few decades.
I think that this would remove human activity, at its present or foreseeable level, from the equation. I also think that this would vindicate carbon dioxide, not only as a forcing agent, but as any significant agent at all.

RW
Reply to  Pat Frank
January 19, 2017 11:42 pm

Not to mention a basic fact from sampling statistics which basically promises you ‘record’ measurements the more measurements you make: the probability of measuring extreme values is so low that you need many many measurements to hit that extreme value.
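For what it’s worth, RW’s point can be made precise: for independent measurements with no trend at all, the expected number of records after n observations is 1 + 1/2 + … + 1/n, so records keep arriving (slowly) just because you keep measuring. A minimal sketch checking that by simulation (all values invented):

import numpy as np

rng = np.random.default_rng(1)
n, trials = 150, 2000                            # e.g. ~150 years of observations

expected = np.sum(1.0 / np.arange(1, n + 1))     # harmonic number: expected record count with no trend
counts = []
for _ in range(trials):
    x = rng.normal(size=n)                       # pure noise, no warming trend at all
    counts.append(np.sum(x == np.maximum.accumulate(x)))

print(f"expected ~{expected:.1f} records, simulated mean {np.mean(counts):.1f}")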

January 19, 2017 12:16 pm

Neither satellite record supports NYT. And the surface GAST anomaly isn’t fit for purpose no matter who produced it: on land, UHI, microsite issues revealed by the surface stations project, and global lack of coverage as discussed in this post. Dodgy SST pre ARGO, hence Karlization. Past cooled and present warmed repeatedly since 2000, provable by simple comparisons over time. And the changes are greater than the previous or present supposed error bars. NYT propaganda.

Nick Stokes
Reply to  ristvan
January 19, 2017 2:50 pm

ristvan
“Neither satellite record supports NYT”
Plenty of satellite records do. UAH V5.6 says hottest by 0.17°C. RSS 4 TLT version has it hottest by 0.17°C. Their news release starts out:

Analysis of mid to upper tropospheric temperature by Remote Sensing Systems shows record global warmth in 2016 by a large margin

Reply to  Nick Stokes
January 19, 2017 4:19 pm

Soo, refer to some obsolete stuff. Then emphasize RSS whose CTO Mears disavowed it before he changed it. As predicted by Roy Spencer. Not good form, Nick. And you know it.
What about the stunningly sharp now 10 month temp decline since the El Nino 2015-16 peak? Got a reply for that natural variation?

Nick Stokes
Reply to  Nick Stokes
January 19, 2017 7:08 pm

Rud,
A bit of cherry picking there. You like UAH6 because it’s shiny new. And you like RSS V3.3 because, well, because. Even though it has carried a use with care label for much of the year, and their latest report reiterates why RSS think it is wrong.
The fact is that the strength of satellite data is thin. It rests now on one index which is in more or less complete disagreement with the still produced prior version.

Nick Stokes
Reply to  Nick Stokes
January 19, 2017 7:14 pm

As for the “stunning drop” of UAH6, it simply echoes the stunning rise. Here’s a plot: [plot]

Bartemis
Reply to  Nick Stokes
January 19, 2017 8:26 pm

Well, yeah, Nick. It’s an El Nino. It goes up, and then it comes down. Why are you investing anything in it for the long term?

Nick Stokes
Reply to  Nick Stokes
January 19, 2017 8:29 pm

Bartemis,
” It goes up, and then it comes down. Why are you investing…”
Yes, that’s my point. I’m not investing, just telling it as it is.

Bartemis
Reply to  Nick Stokes
January 19, 2017 8:34 pm

But, when it comes down, the pause becomes apparent again.

RW
Reply to  Nick Stokes
January 19, 2017 11:52 pm

Nick, are you suggesting the satellite data are not valid? If they are unreliable then they are not valid. So is that your point? We should assign a low credibility weight to the satellite data? If so, can you elaborate?

Nick Stokes
Reply to  Nick Stokes
January 20, 2017 12:35 am

Bart,
“But, when it comes down, the pause becomes apparent again”
The dive in Dec brings it to the pause mean. To bring the trend down it would have to spend time below that equivalent to the amount it has spent above in the last year. About a year around zero should do it. But historically, that isn’t seen. I think that dip won’t last.
RW,
“We should assign a low credibility weight to the satellite data?”
If you really need to know the temperature in the troposphere, they are probably the best we have. But beyond that, yes. You need only look at UAH’s release post to see what goes into the sausage. Or listen to Carl Mears, the man behind RSS:
“A similar, but stronger case can be made using surface temperature datasets, which I consider to be more reliable than satellite datasets (they certainly agree with each other better than the various satellite datasets do!).”
But you can quantify this just by looking at the changes to GISS vs UAH, which I do here. You have to down-weight a measure if it says very different things at different times.

Bartemis
Reply to  Nick Stokes
January 20, 2017 5:58 am

“To bring the trend down it would have to spend time below that equivalent to the amount it has spent above in the last year.”
That is why trend lines are not the be-all-end-all of analysis tools. In actual fact, if the temperature goes back to what it was before the El Nino and stays there, then the El Nino blip should not figure in your conclusions.
Trend lines are primarily a method of data compression, not a method of divining truth – it is more compact to provide an offset and slope than it is to present a chart. Your brain can determine patterns that trend analysis does not convey. Don’t let the machinery do your thinking for you.
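The “offset and slope” Bartemis mentions is just an ordinary least-squares fit, and it is easy to see numerically how a single spike at the end of an otherwise flat series drags that fitted slope up, which is his complaint about reading too much into trend lines. A minimal sketch with invented numbers (the 0.1 C noise and 1 C end-spike are assumptions for illustration only):

import numpy as np

rng = np.random.default_rng(2)
months = np.arange(240)                      # 20 years of monthly anomalies
flat = rng.normal(0.0, 0.1, months.size)     # flat series, noise only
spiked = flat.copy()
spiked[-18:] += 1.0                          # an El Nino-like 1 C spike in the final 18 months

for name, series in [("flat", flat), ("flat + end spike", spiked)]:
    slope_per_month, offset = np.polyfit(months, series, 1)
    print(f"{name}: trend = {slope_per_month * 120:+.3f} C/decade, offset = {offset:+.3f} C")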

Reply to  Nick Stokes
January 20, 2017 6:52 am

Bartemis January 20, 2017 at 5:58 am
“To bring the trend down it would have to spend time below that equivalent to the amount it has spent above in the last year.”
That is why trend lines are not the be-all-end-all of analysis tools. In actual fact, if the temperature goes back to what it was before the El Nino and stays there, then the El Nino blip should not figure in your conclusions.

Using that logic you should also eliminate the 1998 El Niño which would make the claim of a ‘pause’ difficult to sustain.

MarkW
Reply to  Nick Stokes
January 20, 2017 7:11 am

To be consistent, you would have to eliminate both El Ninos and La Ninas.
This has been pointed out to you before.

RW
Reply to  Nick Stokes
January 20, 2017 8:47 pm

Nick, thanks for the link. So UAH makes adjustments which result in slower (recent) depicted warming.
Considering, though, the apparently high sensitivity (spikes) for the past two super El Niños, 97/98 and 15/16, I doubt that the satellite measurements are not valid. Compare percent increase from truncated baselines for those two spikes to percent increases in other measures sensitive to the El Niños. That would help test validity: are UAH and RSS appropriately sensitive to big El Niños? If yes, then A+; if no, well, then perhaps they are picking up something in addition to temp that is highly correlated with big El Niños.

Steve Case
January 19, 2017 12:18 pm

Bullshit = Fake News = Bullshit

astonerii
January 19, 2017 12:22 pm

You would not believe how hot it was where there are no thermometers around the world however!

BCBill
Reply to  astonerii
January 19, 2017 1:04 pm

In Canada we once had a politician who asserted that unreported crime was skyrocketing. He had a little trouble coming up with the data to support his claim. Surely the unreported temperatures are skyrocketing?

Dave
January 19, 2017 12:24 pm

“2016 was not a record warm year in the USA, 2012 was”
You do realise that the USA is only 1.9% of the world’s surface area, don’t you? I fully agree that there are imperfections in the global temperature data sets. The surface sets are inhomogeneous and in some places poorly sampled, either in space or over time, and thus require various assumptions – either through extrapolation (e.g. BEST, NOAA, GISS) or simply omitting areas that aren’t covered from present averages (HadCRUT). The satellite sets require large adjustments, cross-calibration and careful modeling to account for drift, orbital decay and the limited life span of individual instruments. But there is still a reason why each of these datasets at least attempts (unlike the above statement) to take a global view of climate.
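For reference, the “global view” Dave describes comes from area-weighting gridded anomalies, usually by the cosine of latitude, which is why the contiguous USA only contributes its roughly 2% share of the global figure. A minimal sketch of that weighting on a hypothetical 10-degree grid (the anomaly values are invented):

import numpy as np

# Hypothetical 10-degree grid of annual anomalies; the values are made up for illustration.
lats = np.arange(-85.0, 90.0, 10.0)                     # cell-centre latitudes
lons = np.arange(-175.0, 180.0, 10.0)
anom = np.random.default_rng(3).normal(0.5, 0.3, (lats.size, lons.size))

# Each cell is weighted by cos(latitude), since grid cells shrink toward the poles.
weights = np.repeat(np.cos(np.radians(lats))[:, None], lons.size, axis=1)
print(f"area-weighted global mean anomaly: {np.average(anom, weights=weights):+.2f} C")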

Sheri
Reply to  Dave
January 19, 2017 12:46 pm

“assumptions”—meaning no data available, so some is created to fill in. I.e., not really data.
The statement clearly refers only to the USA—what part of “IN THE USA” do you not understand? Global was not part of the statement. Contrary to popular belief, everything does not always have to be “global”. Except to a few true believers, it seems.

Barbara
Reply to  Sheri
January 19, 2017 2:17 pm

Reminds me of one of my favorite Dilbert strips. Dilbert explains that one of the QA tests is flawed, and asks if he can fake the data; to which the pointy-haired boss replies, “I didn’t even know data could be real.”

prjindigo
January 19, 2017 12:34 pm

The north pole doesn’t melt…

BCBill
Reply to  prjindigo
January 19, 2017 1:06 pm

The temperatures in the arctic are not yet warm enough to melt candy cane. I believe the north pole is still made of candy cane?

MarkW
Reply to  prjindigo
January 19, 2017 2:20 pm

Of course not, it’s stainless steel.

BCBill
Reply to  MarkW
January 19, 2017 5:04 pm

Ahhh, I guess for all intents and purposes then, the north pole does not melt. Nor can it be licked away.

MarkW
Reply to  MarkW
January 20, 2017 7:11 am

I would not recommend licking. Especially during the winter.

Aphan
January 19, 2017 12:43 pm

Major headache, (minor head injury…slipped on some damn global warming that accumulated on the street) so could someone explain/clarify/validate:
The L-OTI Anomaly with interpolation is 0.82 and the L-OTI Anomaly without is 0.73. That’s a difference of only 0.09.
Can they actually measure to that accuracy?
What is the error margin on these “estimates”?
And if those two numbers (0.73 and 0.82) are the 2016 “average anomaly” both with, and without, interpolation, in relation to the average from 1950-1981, then the average change works out to roughly 0.020-0.022 per year, or 0.20-0.22 per decade…so where the crap does Robert Rohde get a 1.5C “trend”??? Do these people know the difference between temperature trends and temperature anomalies and trends in anomalies vs trends in temperatures?

Mike the Morlock
Reply to  Aphan
January 19, 2017 1:35 pm

Aphan January 19, 2017 at 12:43 pm
My sympathies, mine was the week before Christmas. Sunday morning, no snow but a very cold wind storm (n.w. AZ). Did not go to emergency at first; later, when I did, they took a bunch of gravel out of my forehead. Hurt for a few weeks. Nice scar in the making.
michael

Aphan
Reply to  Mike the Morlock
January 19, 2017 3:52 pm

Thank you Mike. No stitches, no gravel, just a lot of blood. And cursing. 🙂

Neillusion
January 19, 2017 12:46 pm

This was also in the article…
… Scientists have calculated that the heat accumulating throughout the Earth because of human emissions is roughly equal to the energy that would be released by 400,000 Hiroshima atomic bombs exploding across the planet every day…
How on earth can these people distort the facts, spin & mislead the reader so blatantly?

Aphan
Reply to  Neillusion
January 19, 2017 1:30 pm

hwat
January 19, 2017 12:47 pm

Really appreciate the work done, but you only show data from December. What about every other month of the year, or is December the outlier with the “highest overestimation” of the anomaly?
A dataset for every month would better prove your point.
Another question would be, what would the global mean show without interpolation at all?
[try reading better – mod]

tadchem
January 19, 2017 1:00 pm

Once again the Arctic Ocean shows a hot spot over the Gakkel Ridge, south of the Nansen Basin, where in recent history there have been submarine lava flows creating a vast new province of a volcanic trap similar to the Deccan Traps and the Siberian Traps. Once again submarine vulcanism is totally ignored (disregarded?) as a significant source of heat input to the bottom of the Arctic Ocean.

January 19, 2017 1:11 pm

I keep hearing on the news that the last 3 years have each set a new record for “hottest year evah” but that just doesn’t make sense to me. We know that 2016 was warmer due to the El Niño but it was only barely hotter than 1998 according to every article I have read so far. Looking at all the charts, 2015 and 2014 don’t even seem to come close to setting records above 1998, even with all of the manipulations to cool the past. Am I missing something here?

MarkW
Reply to  jgriggs3
January 19, 2017 2:22 pm

Just like Mann made the medieval warm period disappear, now they have made the 1998 El Nino disappear.

TA
Reply to  jgriggs3
January 19, 2017 6:58 pm

“Am I missing something here?”
You won’t miss anything if you look at a proper chart like the UAH satellite chart which shows 2016 as barely hotter than 1998.
The surface temperature charts have been manipulated to remove 1998 as the hottest year, and to make it appear that it is getting hotter and hotter every year so NOAA/NASA can claim it is the “hottest year ever” each year, like they are again doing this year.
They are actually correct with regard to 2016, but incorrect with regard to any other year between 1998 and 2016. 2016 was one-tenth of a degree hotter than the hottest point in 1998, but 1998 still holds second place (actually a tie with 2016, if you want to get technical).
Anyway, your instincts are correct, 1998 is hotter than every subsequent year but 2016.
http://www.drroyspencer.com/wp-content/uploads/UAH_LT_1979_thru_December_2016_v6.jpg

Eric Simpson
January 19, 2017 1:22 pm

As far as US temperatures, we got a double top from 2006 / 2012, then a declining trend until now. As mentioned the US station coverage is way better than most of the rest of the world, so probably the decline in US temperatures is also a decline that’s happening worldwide, notwithstanding the pure garbage on the Arctic.
Further, the idea the US data should take preeminence is even more true when you look at the 1930s when the rest of the world had *very* sparse station coverage.
In 1999, before radical data manipulations, NASA data made it clear that it was hotter in the USA in the 1930s: [chart]

Nick Stokes
January 19, 2017 1:29 pm

So we proved that 2014 wasn’t the hottest year. And then, no, it wasn’t 2015, and now, no it wasn’t 2016. So what was the hottest year?
I guess I’ll be told 1934.

Nick Stokes
Reply to  Nick Stokes
January 19, 2017 2:00 pm

Forrest,
“What makes you think that a single manufactured figure can reasonably be representative of the entire earth’s surface?”
This is the third year that a record is announced, and suddenly all sorts of reasons why this year’s number can’t be believed. The reasons (like station gaps here) would apply any year, but suddenly become pressing when the temperature is up.
But yours is the most comprehensive – we shouldn’t talk about global temperature at all! Yes, that has been popular. There is no way of knowing whether the Earth is warming, so it isn’t a problem. But then, WUWT has for ten years been talking about global temperatures. Grumbling about untidy stations, showing solar effects, predicting cooling. What would WUWT have to talk about if there was no global temperature?

Aphan
Reply to  Nick Stokes
January 19, 2017 4:37 pm

Nick-
“This is the third year that a record is announced, and suddenly all sorts of reasons why this year’s number can’t be believed. The reasons (like station gaps here) would apply any year, but suddenly become pressing when the temperature is up.”
We’ve pretty much talked about the “records” and all the reasons why the numbers can’t be believed for a very long time. (pppppsssttt…you know Anthony has spent many years studying the station gaps and siting issues) But then YOU contradict yourself beautifully in the next statement: “But then, WUWT has for ten years been talking about global temperatures. Grumbling about untidy stations, showing solar effects, predicting cooling.” (so see…..all sorts of things have been pressing here at WUWT)
“What would WUWT have to talk about if there was no global temperature?”
Maybe recipes, or politics, or what Nick Stokes is doing for a job these days, since there’s no global temperature and climate science is an exacting, professional field devoid of conflict and filled with logic and reason?

Nick Stokes
Reply to  Anthony Watts
January 19, 2017 3:26 pm

But we’re told (irrelevantly)
“2016 was not a record warm year in the USA, 2012 was:”
Why can the USA have a record year, but not the globe? Exceptionalism?

John@EF
Reply to  Anthony Watts
January 19, 2017 6:18 pm

Oh yeah, there’s laughin’ goin’ on … more than you think.

Gloateus Maximus
Reply to  Nick Stokes
January 19, 2017 3:11 pm

Nick,
Hardly all of a sudden. Skeptics have said that the antiscience, antihuman works of fantasy by NOAA, GISS, HADCRU and BEST were packs of lies for decades.

Paul Penrose
Reply to  Nick Stokes
January 19, 2017 3:17 pm

Nick,
For the sake of argument, let’s assume that the global temperature anomaly as currently calculated is a valid, useful metric. Why should the average person really care what the hottest year was? What does that, by itself, tell us, other than satisfy idle curiosity? I’m serious. What is the justification about all the hand-wringing over record this and record that when talking about the natural world?

Nick Stokes
Reply to  Paul Penrose
January 19, 2017 3:30 pm

Paul,
Individual years don’t mean much. But our future is made up of years. It’s a reminder of where we are going. 0.1C here, 0.1C there and soon enough you’re talking about real warmth.

Reply to  Nick Stokes
January 19, 2017 3:56 pm

Individual years don’t mean much. But our future is made up of years. It’s a reminder of where we are going. 0.1C here, 0.1C there and soon enough you’re talking about real warmth.

Except there’s evidence showing CO2 has little effect on minimum temp, so we’re experiencing almost all natural climate.

Pat Frank
Reply to  Paul Penrose
January 19, 2017 4:08 pm

So, Nick, you discount measurement error and credit climate models that have no predictive value. Fake science, brought to you by the folks at climate alarm central.

Aphan
Reply to  Paul Penrose
January 19, 2017 4:48 pm

Nick said:
“Paul,
Individual years don’t mean much. But our future is made up of years. It’s a reminder of where we are going. 0.1C here, 0.1C there and soon enough you’re talking about real warmth.”
Our past is made up of years too. 4.5 billion years (according to scientists). And all of the empirical evidence screams that Earth’s repeated patterns of behavior have been…0.1C here, 0.1C there, and soon enough, an ICE AGE ends (like the one we are currently still living in) and there is a nice, warm, thriving planet for everyone (well…except nowadays just for people who aren’t stupid enough to build on coastlines that have repeatedly been submerged by this planet). And then the cooling returns. Surely you aren’t an empirical evidence science denier…..???
But lets suppose that humanity CAN generate some “real warmth” with emissions from fossil fuels. That might come in handy if Earth decides it’s time to glaciate again, ya think?

Paul Penrose
Reply to  Paul Penrose
January 19, 2017 4:48 pm

“Individual years don’t mean much.”
On that much we can agree. I would go further, though, and say they don’t really mean anything. Which is why all the focus and headlines about “record” heat are just pure PR nonsense.
“It’s a reminder of where we are going.”
No, individual years tell us nothing about where we are going, record or not. It is entirely possible to have a record cold year even during a warming trend, and vice versa.
“[+]0.1C here, [+]0.1C there and soon enough you’re talking about real warmth”
Only assuming that all the other non-record years are neutral or positive. But they aren’t. So even if three or four of the last 10 years were record highs, it still does not mean anything. The others could all offset them for a trend of zero.
So all this “n of m years had record high temperatures” is just sensationalist nonsense. And every scientist should denounce it as such. I can excuse the media, but not the climate science community that promotes this crap.

Ray in SC
Reply to  Paul Penrose
January 19, 2017 5:09 pm

Nick,
Yes, 0.1C per decade here, 0.1C per decade there. If we follow the current trend we will experience some warmth, maybe 1-1.5C, in a hundred years. Oh my!
Of course, when the 95% confidence level is +/- 5C then it is just as likely we will experience cooling. My bet is the null hypothesis, more of the same cyclical natural variation.

Nick Stokes
Reply to  Nick Stokes
January 19, 2017 3:45 pm

Gloateus,
“were packs of lies for decades”
Actually, they haven’t. It’s become a lot more shrill recently. But what we never see is any attempt by sceptics to calculate an average themselves. Unadjusted data is readily available. It isn’t hard.
Actually, I shouldn’t say never. Back in 2010, Jeff Id and Romanm made a valiant effort. I’ve continued using some of their methods. And then, they just ended up getting the same results as everyone else.
And of course the recent sceptic scientific audit of the indices just disintegrated. Nothing to report.

Richard K.
Reply to  Nick Stokes
January 19, 2017 4:34 pm

Why don’t you satisfy me you’re capable of even solving for the temperature of air by telling me the name of the law of thermodynamics written for solving temperature of atmospheric air.
Face it: your gurus got caught processing fraud. Mann/Jones/Hansen with their “it’s a whole new form of math” that turns out over and over to be utterly worthless spam designed to steal grant money.
Have you ever worked in gas chemistry in any way, at any time? I think everybody on this board knows the answer to that. You don’t have any school in atmospheric chemistry. You don’t have any school or work in atmospheric radiation or for that matter, radiation physics of any kind.
How do I know that? You continue to claim you think the basic science of AGW is real science.
If you think it’s real, then show us all here, – atmospheric science professionals and amateurs alike – that you’re atmospheric chemistry and radiation physics competent.
Tell us the name of the law of thermodynamics to solve the temperature of a volume of atmospheric air, gas, vapor, etc.
Tell us the equation and tell us what each factor means.
Till you can competently do that you’re another fake on the internet claiming to understand something there’s no way you can,
when you can’t even name the law of thermodynamics governing the field.
And oh yes govern it, it does. It’s what gives the world enforceable legal and scientific certification standards that make the entire modern internal combustion, refrigeration/furnace and many other fields even possible.
I’ll wait, you think up some lie.

Nick Stokes
Reply to  Richard K.
January 19, 2017 11:33 pm

“Why don’t you satisfy me you’re capable of even solving for the temperature of air by telling me the name of the law of thermodynamics written for solving temperature of atmospheric air. “
Because this marks you as a crank, and I’m not interested.

BCBill
Reply to  Nick Stokes
January 19, 2017 5:11 pm

As per my post above, when the earth’s temperature has been rising more or less steadily for more than a hundred years, warmest years ever are pretty commonplace. Even multiple warmest years ever in a row are probably commonplace. I have taken to measuring the snow depth in my yard as the snow is falling. I get a new record every few seconds. Warmistas are always trying to re-frame the debate from whether or not there is a large human influence in the current warming to “there is warming so it must be caused by humans”. How many warmest years in a row happened in the 1930’s? Anybody know??

Nick Stokes
Reply to  BCBill
January 19, 2017 6:56 pm

BCBill
“warmest years ever are pretty commonplace.”
They are a lot more commonplace recently. Here is a plot of cumulative records of GISS since 1900. Every time a record is set, the plot changes color and rises to the new level. The current situation is not commonplace. [graph]

BCBill
Reply to  BCBill
January 20, 2017 9:27 am

That’s a nice graph, Nick, and it answers the question I have been asking, though I think that the late 19th/early 20th century and 1945 to 1976 were periods of slower warming/cooling. I understand they are somewhat atypical in terms of the climb out of the LIA. It would be nice to see a little further back. As many have pointed out, the thirties were a period of warming similar to the present, though perhaps not as prolonged. Were there other periods similar to the present?

TA
Reply to  Nick Stokes
January 19, 2017 7:12 pm

“I guess I’ll be told 1934”
That’s what Hansen said. He said the 1930’s was hotter than 1998, and his chart (see Eric’s chart above) shows the 1930’s as 0.5C hotter than 1998, which means the 1930’s was hotter than 2016, too. Which means 1934 was the “Hottest Year Evah!”.
And yes, the U.S. temperature chart represented by Eric’s chart is a good proxy for global temperatures (as good as we will ever have), imo, which is further strengthened by the Climategate dishonesty where the conspirators were concerned about the “GLOBAL” “40’s heat blip”, which they subsequently removed from the temperature records, to make it look like things are much hotter now than then. The Big Lie.
The principal actors in the CAGW false narrative said it was hotter in the 1930’s “GLOBALLY” than it is now. They then went about changing the temperature record to make it conform to the CAGW theory. There is no changing this fact. Alarmists can’t dismiss the 1940’s heat blip as being restricted to the USA.

Nick Stokes
Reply to  TA
January 19, 2017 11:46 pm

“That’s what Hansen said.”
Yup. I predicted it.
“which they subsequently removed from the temperature records, to make it look like things are much hotter now than then. The Big Lie.
The principal actors in the CAGW false narrative said it was hotter in the 1930’s “GLOBALLY” than it is now. “

Absolute nonsense. Below is a GISS plot (history page) showing versions since 1987. 1987 was met stations only, the rest land/ocean. You can see that globally, the 1930’s have never been rated close to present values; in fact, the highest value, 1944, is rated higher now than in any previous version. There is no sign that temps were bumped down by 0.15°C. [graph]

Tom in Florida
January 19, 2017 1:31 pm

A couple of articles ago the graphs were from MET using the base line 1961-1990. The BEST graphs are using a base line of 1951-1980. The NOAA graphs don’t show the base line period. Why can’t you’all just use the same freakin’ base lines!

January 19, 2017 1:32 pm

‘Hiroshima atomic bombs’
Fear-mongering 101.

tom0mason
January 19, 2017 1:54 pm

From the 600 million year geologic record I note 2 things:
1. Even when CO2 levels reached 7000ppm this planet did not experience run-away global warming. In fact at no time did such an event ever happen. Even with CO2 levels at 2 times to nearly 10 times the current levels, ice-age glaciation was not prevented. Showing how minor CO2 is in climate terms.
2. Historically this planet is running very low on atmospheric CO2.
If it drops just a little to 200ppm plants will struggle to survive, endangering all animals. Worse if it falls below 180ppm, as then plant life and all animal life, including humans, stops.
So it may or may not be the ‘hottest year ever’ recently but historically it is nonsense.
If only the 1930s ‘Grapes of Wrath’ years were not excised from the record, the climate choir would have to sing a different song.
It may or may not be the ‘hottest year ever’ but CO2 levels are obviously not the climate driver.
It may or may not be the ‘hottest year ever’ but that relies on your personal beliefs in the probity or otherwise of those making the claim.

TA
Reply to  tom0mason
January 19, 2017 7:22 pm

“If only the 1930s ‘Grapes of Wrath’ years were not excised from the record, the climate choir would have to sing a different song.”
That’s why it was excised. They didn’t want to sing that different song.

EthicallyCivil
January 19, 2017 2:03 pm

Interpolation across a pole… has to be on a Top Ten List for “Ways To Flunk a Numerical Methods” Sr. project. Sigh.

Nick Stokes
Reply to  EthicallyCivil
January 19, 2017 3:51 pm

Just a simple pole? Easy.

January 19, 2017 2:23 pm

Both GHCN version 4 and Berkeley Earth have pretty decent Arctic coverage, especially in recent years. That said, there are still no stations directly in the Arctic Ocean, so some interpolation is needed. But we know from remote sensing products (AVHRR and MSU) that the Arctic has been freakishly warm during the last three months, so it’s a pretty safe bet that interpolated products are more accurate than leaving that area out (which implicitly assigns it the global mean temperature in the resulting global temperature estimate).
Via Nick Stokes’ GHCNv4 station location plotter: [map]
Via reanalysis (using satellite data): [map]

MarkW
Reply to  Zeke Hausfather
January 19, 2017 3:02 pm

Made up data is better than no data?
Is that really the story you want to go with?

Paul Penrose
Reply to  Zeke Hausfather
January 19, 2017 3:21 pm

Zeke,
Can you please quantify “freakishly warm” for us? And then please explain to us why three months of this “warmth” means anything in terms of climate?

Michael Jankowski
Reply to  Zeke Hausfather
January 19, 2017 4:18 pm

It’s one thing to interpolate between two (or more) surrounding areas that are hotter and colder to estimate a local temperature. It’s another to extrapolate data and guess at the temperatures in the warmest areas.

Bill Illis
Reply to  Zeke Hausfather
January 19, 2017 4:20 pm

The two most northerly stations are Eureka Canada (84N) and Svalbard (78N).
They both had very warm years about 6C above normal in 2016 (yes I checked). Probably a fluke more than anything else but they also have very variable year-by-year records, just like every station. +/- 6.0C is not that unusual for these two stations.
BUT, this does not mean the entire Arctic Ocean was 6C above normal in 2016. If that was the case, ALL of the sea ice would have melted out this summer. At best, the Arctic Ocean was 1.0C above normal, probably just 0.5C.
This extrapolation technique across the polar oceans is complete BS. That means GISS and Cowtan and Way and Zeke as well.
There are physical signs that have to be evident to show any ocean area being so far above normal.
THEREFORE, because what I just wrote is actually factually and physically true, we should throw out ALL of these extrapolations across the Arctic Ocean and force people like Zeke above to be honest.

Geoffrey Preece
January 19, 2017 2:38 pm

This article reminds me of a Monty Python scene: the crowd yells out “we are all individuals”, and one person yells “I’m not”. Why do we have an article that makes a point of saying the USA temperatures are different to world temperatures? That could be done in any country; it does not mean anything when talking about average world temperatures.

Reply to  Geoffrey Preece
January 20, 2017 2:13 am

Nick Stokes January 19, 2017 at 10:44 pm
Thank you. There are too many responses I could make, so I’ll try just one. If you include ‘extrapolation’ to mean projecting and comparing sea temperature and air temperature, then the sea profile is central to the argument. Ask yourself, ‘What is the proper part of the sea profile to sample for T to compare with the air?’. The surface microlayer is in contact. Should it be chosen? The top 500m can mix and contact, can it be the one? Can we simply use whatever slice the Argo float happened to be at? Not on your Nellie, because the within-sample profile variation can be large compared to the effect being sought. Papers that choose among marine data sets to adjust for T bias and be pausebusters are clearly wrong because of this lack of being able to define and measure which part of the natural sea T profile is to be used to compare with air T.
And both sea and air are in dynamic T states at any point on various time scales from minutes to days or more. My reading is incomplete, but my gut feel was that it is breaking new ground to try to use geostatistics or generally interpolation/extrapolation like this on dynamic sample data. It might be possible if we have detailed knowledge of the time dependency of the dynamics, but here at sea we clearly do not.
Geoff.

Nick Stokes
January 19, 2017 2:44 pm

“What a difference that interpolation makes.
So you can see that much of the claims of “global record heat” hinge on interpolating the Arctic temperature data where there is none.”

Data is always being interpolated where there is none. It goes with any kind of continuum science. You can’t measure everywhere, you can only sample. Most people don’t have an AWS on the premises. But they still find weather reports useful. They interpolate from the Met network.
So what always counts is how far you can interpolate reliably. That is a quantitative matter, and scientists study it. Hansen many years ago established that 1200 km was reasonable. It’s no use just saying, look, there are grey spots on the map. If you don’t think interpolation is reasonable, you need to deal with his argument.
And there are checks. The Arctic has a network of drifting buoys, so it isn’t so unknown. Here is a map from a couple of years ago: [map]
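In the Hansen scheme Nick refers to, a grid cell with no thermometer gets a distance-weighted average of stations within 1200 km, with the weight falling to zero at that radius. A minimal sketch of the idea, not GISS’s actual code, using made-up Arctic-rim stations and anomalies:

import numpy as np

def infill_anomaly(cell, stations, anoms, radius_km=1200.0):
    """Distance-weighted mean of station anomalies within radius_km of the cell (Hansen-style in spirit)."""
    lat1, lon1 = np.radians(cell)
    lat2, lon2 = np.radians(stations).T
    cos_d = (np.sin(lat1) * np.sin(lat2) +
             np.cos(lat1) * np.cos(lat2) * np.cos(lon1 - lon2))
    dist = 6371.0 * np.arccos(np.clip(cos_d, -1.0, 1.0))    # great-circle distance, km
    w = np.clip(1.0 - dist / radius_km, 0.0, None)          # weight tapers to zero at the radius
    return np.nan if w.sum() == 0 else float(np.sum(w * anoms) / w.sum())

# Made-up stations (lat, lon) and December anomalies, purely for illustration.
stations = np.array([[80.0, 15.0], [78.2, 15.6], [82.5, -62.3], [71.3, -156.8]])
anoms = np.array([2.1, 1.8, 1.2, 0.9])
print(infill_anomaly((87.0, 0.0), stations, anoms))          # a grid cell near the pole

Whether 1200 km is a defensible radius is the quantitative question Nick says has to be argued; the mechanics above are the easy part.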

Reply to  Nick Stokes
January 19, 2017 2:59 pm

If I recall correctly, Cowtan and Way used the drifting buoy data as an out-of-sample evaluation of their interpolation, and found that it matched up pretty well.

Reply to  Zeke Hausfather
January 19, 2017 3:37 pm

If I recall correctly, Cowtan and Way used the drifting buoy data as an out-of-sample evaluation of their interpolation, and found that it matched up pretty well.

And how do they turn completely different types of data, one with only a general vague location into data you can compare to a fraction of a degree?
I’m not sure what bothers me more, warmists thinking I should believe this, or me wondering if they really believe it.

Nick Stokes
Reply to  Zeke Hausfather
January 19, 2017 3:50 pm

“And how do they turn completely different types of data, one with only a general vague location”
Why do you say that? As the map indicates, they know where the buoys are, I would expect to the nearest few meters at any time. And they will be taking air temperature, probably 1.5 m above surface.

Reply to  Nick Stokes
January 19, 2017 6:26 pm

Why do you say that?

I was thinking they blended surface data with satellite data, that they infilled the Arctic with satellite, but as I was starting to type I realized that wasn’t correct; it is likely the other way around.
Then the only other concern is how the in-band data was processed onto the average mean field. But likely just taking the average of the mean buoy air temps is not comparing like to like, with all the homogenizing and infilling and all.

Reply to  Zeke Hausfather
January 19, 2017 5:50 pm

Nick,
Please do not use bad science to impugn geostatistics.
Yes, extrapolation from one point to another and interpolation between points are common methods in geostatistics and other methods.
However, those sample points have conditions precedent before they can be used properly.
In work familiar to me, and now talking only geostatistics, one does not interpolate between different media, as from sea to air. Or ice to adjacent water. Boundaries matter.
Further, there has to be some knowledge of the properties assumed for or known about the points in a given medium. In rock work it is common to process different major rock types separately because they can have different fabrics with different alignments, leading to different ‘solids of search’ for later weighting and other complications.
Now taking a vertical sea profile containing a buoy, do we have the equivalent of different rock types through the profile? Yes we do, especially in fine detail. The very surface of still water has sub mm layers impacted by long wave IR, different to lower down, being evaporated, special effects on T. The top 500 mm or so of sunlit water is often at higher T, by a deg C or more, than lower down. Proceeding down, you can meet thermoclines and ipsoclines before 100m down, the depth used by some to express overall surface sea temperatures. Therefore, such SST are by definition an average for some sort of T whose variation in that profile is large compared with the effect often sought, namely the T difference between one profile site and another. Even day/ night sea cases are different.
It is mathematically wrong to use geostatistics when the within-sample static variation is much larger than between sample, let alone including dynamics on time scales of making a measurement. Yet, that is being done. Mixing by Nature can make results seem better, but they are not actually better unless the pre-mixing T distribution is known in detail so that the appropriate sub- sample can be compared site to site, apples to apples, later in the process. What part of a variable sea T profile should be compared to air T above? How do you know if you have captured it? Given the size of T variation down a profile, this is a fundamental impediment. Sure, you can grope around and get some general figures but these will not usually be good enough even for government work. It is stuff to kid yourself with.
A further problem happens when air T is compared with sea T. Their thermal inertias differ. Some heating or cooling effects work to different time patterns. You cannot interpolate between air and sea because of this, except with huge assumption errors.
For reasons like these, the Karl pause buster paper is invalid. The Cowtan & Way fiddles in the Arctic are wrong at Kindergarten level and should be retracted before doing more harm. Rohde from BEST might like to address some of these points to justify his recent revisionist work about the hottest evah. I had hoped he would have done better. Others like satellite T people should refrain from overextension of ideas linking air T to sea T. Again, it is invalid unless given a huge and correct error from non- physical assumptions.
Geoff

Reply to  Zeke Hausfather
January 19, 2017 10:44 pm

Geoff,
“Now taking a vertical sea profile containing a buoy, do we have the equivalent of different rock types through the profile?”
I’m not impugning geostats; it’s a fine subject. My late colleague Geoff Laslett also spent time at Fontainebleau and Grenoble. We worked together at Geomechanics. But your argument here is way off beam. Interpolation is not used here to look at microlayers in the sea. It is used in surface averaging. And there is not a lot of inhomogeneity across the surface. There is the land/sea interface, for which a land mask is usually used. And ice is a nuisance.
You have more complications in rock (not all of which you know in advance). But in the end it’s the same deal. You infer the properties of a continuum from samples, using geometry.
Karl’s paper has nothing to do with details of interpolation; it is just calibrating the instruments. Cowtan and Way is fine; it basically shows that rational interpolation is far better than just “leaving empty cells out”, which assigns the hemisphere average to them.
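Nick’s claim that “leaving empty cells out” implicitly assigns them the average of the covered cells is easy to check numerically. A minimal sketch with invented cell values, equal cell weights assumed for simplicity:

import numpy as np

rng = np.random.default_rng(4)
covered = rng.normal(0.4, 0.3, 90)          # cells with data (invented anomalies)
n_missing = 10                              # cells with no data, e.g. the central Arctic

mean_covered = covered.mean()
filled_with_mean = np.concatenate([covered, np.full(n_missing, mean_covered)])
filled_warmer = np.concatenate([covered, np.full(n_missing, mean_covered + 2.0)])

# Omitting the empty cells gives the same answer as filling them with the covered-cell mean;
# it differs only if the missing region really is warmer (or cooler) than that mean.
print(mean_covered, filled_with_mean.mean(), filled_warmer.mean())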

D. Turner
Reply to  Nick Stokes
January 19, 2017 4:16 pm

Hansen is a criminal and a fraud and the fact you even reference him as scientific proves the kind of degenerate you are.

scraft1
Reply to  D. Turner
January 20, 2017 6:41 am

“Hansen is a criminal and a fraud and the fact you even reference him as scientific proves the kind of degenerate you are.”
That’s a bit strong, isn’t it D. Turner? Criminal, degenerate, fraud?
Geez, get a hold on yourself.

Michael Jankowski
Reply to  Nick Stokes
January 19, 2017 4:23 pm

It doesn’t look like interpolation on the color charts. If it is, it certainly isn’t linear. It certainly needs to be justified.
There’s nothing wrong with making an estimate where there are no measurements. But a record should be made of measurements and not estimates.

Jim Gorman
Reply to  Michael Jankowski
January 20, 2017 9:44 am

Estimates and interpolations and imputations should not be included as “data” in a data set at all. If someone wants to use their own “estimates” to prepare a study, then they should include the methods they used to obtain the estimates, and the study should state directly that it uses the author’s estimates, not measurements.
If someone wants to use a “government” data set that has adjusted data, they should have to include a disclaimer that some of the data is estimated and how.
Every time I see this I think of the old adage, “I’m here from the government and I’m here to help!”

u.k.(us)
Reply to  Nick Stokes
January 19, 2017 5:09 pm

“There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.”
― Mark Twain, Life on the Mississippi

Barbara Hamrick
Reply to  Nick Stokes
January 19, 2017 6:03 pm

Nick said, “Hansen many years ago established that 1200 km was reasonable.”
That seems like an awfully large distance. I know anecdotes are not scientific evidence, but I live in what’s commonly referred to as the “Inland Valley” in southern California. My nearest beach (Newport/Costa Mesa) is about 60 km away. There are times when it is 90 F here and 75 F there (wait, I’m not done), and there are times when it is 90 F here and 88 F there; rarely is it true that Costa Mesa would be warmer than Pomona in the summer (though the reverse is mostly true in the winter, because Costa Mesa’s proximity to the ocean moderates the temperature there), but it seems highly unlikely to me that anomalies in Pomona would be representative of anomalies in Costa Mesa. Costa Mesa’s temperatures see a much lower variance than Pomona’s, and the direction of change may be correlated or anti-correlated depending on (e.g.) wind patterns or cloud cover.
I guess, now that I’m thinking about it, the 1200 km margin for interpolation is highly problematic if one is trying to interpolate inland from a coast (because of the moderating effect of large bodies of water on temperature changes, which ultimately also impacts the magnitude of the variance in temperature day-to-night, day-to-day, and even year-to-year), and that would be worse yet if one is interpolating inland from two coasts (or a surrounding coastal region into the interior of a large island, or polar region). I guess what I’m thinking is, looking at the coverage in Greenland (for example), it is not at all unreasonable that the coastal regions might slightly warm year-to-year for many years while the interior was doing something entirely different (in any given year) because of the much higher variability and higher magnitude of response to changes in other relevant variables.
So, all that said, I don’t see how periphery anomalies can be used to interpolate inward (and, into higher latitudes) to the North Pole in a reliable manner.

Nick Stokes
Reply to  Barbara Hamrick
January 20, 2017 12:19 am

Barbara,
“I guess, now that I’m thinking about it, the 1200 km margin for interpolation is highly problematic”
A lot of people think that. But you really need to quantify it. Hansen did that in his early days, and it has survived pretty well. You need to quantify just how much spatial correlation there is, and then the cost of whatever shortfall in terms of the uncertainty of what you are calculating (eg global average).
I have a page here where you can see visualised anomalies for each month of land with GHCN V3 and ocean with ERSST4. This is the basic combination that GISS uses. The style of plot is that the color is exact for each measuring station, and linearly shaded within triangles connecting them. You can click to show the stations and mesh. I’ll show below a snapshot of Eurasia in Dec 2016. You can get the color scale from the page – it doesn’t matter here. Left and right are basically the same scene, with right showing the mesh. The important thing is that the anomalies on the left are fairly smooth over long distances. Not perfectly, and the errors will contribute noise. But if you think of taking any one node value out and replacing it with an average of neighbors, the result wouldn’t be bad. Much better than replacing with global average. [image]
You asked about the Arctic. I wouldn’t recommend my gadget there; its treatment of the sea/ice boundary is primitive. But interpolation is in principle no different, and they do have buoys to check with.
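The “exact at each station, linearly shaded within triangles” scheme Nick describes is what a standard triangulation-based linear interpolator does. A minimal sketch of the same idea using scipy (the station positions and anomalies are made up; this stands in for, rather than reproduces, Nick’s own gadget):

import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Made-up station locations (lon, lat) and December anomalies, purely for illustration.
pts = np.array([[30.0, 55.0], [60.0, 50.0], [45.0, 70.0], [80.0, 65.0]])
vals = np.array([1.2, 0.4, 2.0, 1.5])

interp = LinearNDInterpolator(pts, vals)   # Delaunay triangulation, linear within each triangle
print(interp(30.0, 55.0))                  # exact at a station: returns 1.2
print(interp(45.0, 58.0))                  # linearly shaded between stations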

Taxed to Death
January 19, 2017 2:47 pm

“Data is like a whore, it will do anything you want for money.” – anonymous

MarkW
Reply to  Taxed to Death
January 19, 2017 3:04 pm

If you torture the data long enough, it will tell you anything you want to know – anonymous

taxed
January 19, 2017 3:08 pm

It’s rather interesting that after the “hottest year ever” the current snow cover extent in the NH is running above average.

M Seward
January 19, 2017 3:16 pm

I just lurve the way they use orange – red – dry blood red in the ‘temperature’ colour scheme when they are talking about an ‘anomaly’ or a trend/decade value. This is marketing 101 psychodramatisation of information. It’s the visualisation of the word ‘DEADLY’ in its intent, to invoke fear and loathing. It’s how the shamans have worked over the human mind for millennia.
This corrupt, melodramatic, propaganda schlock belongs down the toilet of marketing history with cigarette advertising and the like.
The Bureau of Meteorology and the TV networks use the same bullshit device on weather maps in Oz. 30˚C is a pretty warm summer’s day down under but 40˚C is genuinely hot. Red and dry blood red kick in from 25 or 30˚C as if thousands will collapse and die doing their Saturday shopping or while sitting in a cafe.

Hoyt Clagwell
Reply to  M Seward
January 19, 2017 3:31 pm

Since they are so fond of averaging all of Earth’s temperatures into a single number to scare us, I think they should average all of those colors across the globe into a single “average” color and paint the whole map with it and then tell us why that’s so bad.

Pamela Gray
January 19, 2017 4:00 pm

Nick, I have no problems with Arctic warming. It has probably done that at every interstadial peak. And CO2 has peaked along with it. The current pattern has been repeated several times over the past 800,000 years.

Reply to  Pamela Gray
January 19, 2017 4:22 pm

PG, essay Northwest Passage argues it does that with a sine wave of 65 years or so. Qualitative, but backed up by Akasofu and extensive Russian records, some now translated into English.

Toneb
Reply to  Pamela Gray
January 20, 2017 5:08 am

Not at 400ppm it hasn’t.
And if it were not for that, we would likely still be cooling from the HCO. [graph]

gnomish
January 19, 2017 4:04 pm

Forrest Gardener nailed it.
global average temperature is about as useful as global average telephone number.
separate the locations – and further, separate the TOB.
chart the like with the like unless you want to make mud.

Michael Jankowski
January 19, 2017 4:26 pm

Mosh was arguing not too long ago that increasing CO2 would continue to cool Antarctica for decades…but BEST says it’s been flat or warming since 1970. Funny.

Reply to  Michael Jankowski
January 19, 2017 4:53 pm

BEST Antarctica merely illustrates how messed up their methodology potentially is. Their regional expectations QC model excluded 26? months of record cold at Amundsen-Scott, the south pole, and arguably the best tended station on Earth. Certainly the most expensive. They did that based on their own constructed regional expectations. The nearest comparison continuous station is McMurdo, several thousand meters lower and about 1200 Km away on the coast. See fn 26 to essay When Data Isn’t for details.

J. Smith
Reply to  ristvan
January 19, 2017 6:02 pm

BEST is another group grope by the usual suspects: self appointed climate ‘experts’ who haven’t ever worked with gases and vapors,
much less actual atmospheres,
in their lives.
Every one of these so called ‘climate’ fakes is as transparent as asking them the name of the law of thermodynamics that governs the atmosphere.

Roger Knights
January 19, 2017 5:05 pm

The NY Times: The piper of record.

Thomas Graney
January 19, 2017 5:48 pm

It isn’t necessary to deny and ridicule everything in order to be a skeptic. Until someone can, in a professional and scientific way, falsify all of the data which shows warming, we’ve got some warming. I don’t believe the models and I think there is likely some confirmation bias in data collection and analysis, but where is the data to the contrary, except that posted by cranks? I’m not sure why Nick and Zeke give you all as much time as they do.

Pat Frank
Reply to  Thomas Graney
January 19, 2017 6:30 pm

Thomas Graney, show me where the crankiness is in this paper (1 MB pdf), or this one.
Or, for that matter, this one.
They all show that neglected systematic measurement error makes the surface air temperature record unreliable. And the systematic error analysis is based on published sensor calibration studies, such as:
K.G. Hubbard and X. Lin, (2002) Realtime data filtering models for air temperature measurements Geophys. Res. Lett. 29(10), 1425; and,
X. Lin, K.G. Hubbard and C.B. Baker (2005) Surface Air Temperature Records Biased by Snow-Covered Surface Int. J. Climatol. 25 1223-1236.
Papers like these, involving thousands of temperature calibration measurements, are the direct foundations for the estimates of air temperature error that bring forth the dismissive sneers from Steven Mosher, Nick Stokes and, apparently, you.

Reply to  Pat Frank
January 19, 2017 7:59 pm

What this means is that if you use a broken ruler to measure the growth of a tree,

Suppose, though, that you can get a measure of the day-to-day change to the resolution of the minimum scale on that broken piece, and you are really only interested in how much it changes. Does the accurate height really matter a lot then?

Reply to  Pat Frank
January 19, 2017 8:32 pm

That would be solved by a 30-year running average of the annual average of day-to-day change, since for a full year, temp should average to 0.0 if there was no annual change. And while a single event doesn’t affect a 30-year average, a repeating pattern, if it changes, will.

Pat Frank
Reply to  Pat Frank
January 19, 2017 9:01 pm

The uncertainty in an anomaly is increased over an individual measurement.
This is because the uncertainty in the difference between a measurement and a mean is u_a = sqrt[(e)^2 + (u_m)^2], where u_a is the uncertainty in the anomaly, “e” is the systematic error in a given measurement and “u_m” is the uncertainty in the mean.
The uncertainty in the mean, u_m, is sqrt{[sum of the N (systematic error)^2 terms]/(N-1)}, where “N” is the number of values entering the mean.
u_a is always larger than e.
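Read literally, those two expressions propagate like this (a minimal sketch that only restates the formulas as given above; the 0.3 C error values are invented for illustration):

import numpy as np

def anomaly_uncertainty(base_errors, e):
    """u_m = sqrt( sum(errors^2) / (N-1) ),  u_a = sqrt( e^2 + u_m^2 ), as stated above."""
    base_errors = np.asarray(base_errors, dtype=float)
    u_m = np.sqrt(np.sum(base_errors ** 2) / (base_errors.size - 1))
    u_a = np.sqrt(e ** 2 + u_m ** 2)
    return u_m, u_a

# Invented systematic errors (deg C) on the 30 values entering a baseline mean,
# and an assumed 0.3 C systematic error on the single measurement being anomalised.
u_m, u_a = anomaly_uncertainty(np.full(30, 0.3), e=0.3)
print(u_m, u_a)   # u_a comes out larger than e, as stated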

Reply to  Pat Frank
January 20, 2017 5:52 am

This is because the uncertainty in the difference between a measurement and a mean

While that is what some do, I compared two measurements for the anomaly, and they are correlated, so I can divide the one error term in half.
I’m interested in how each station’s temperature evolves over time; I have no interest at this point in comparing to some made-up global average with a large uncertainty range.
And I found an equation for uncertainty that looks a lot like this one (I have to check), but if it is, it’s already being calculated, and the values are all 10^-5 to 10^-6, a couple of orders of magnitude smaller than my calculations. Very uneventful. I have been looking for someone who can make sure I’m doing it right, so I’ve been saving your posts.

Nick Stokes
Reply to  Pat Frank
January 19, 2017 11:54 pm

“annual average of day to day change”
That average is just (diff last-first day)/365. Not useful.
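Nick’s shortcut is just the telescoping property of summed differences, which a couple of lines confirm (the daily series here is random, purely for illustration):

import numpy as np

daily = np.random.default_rng(5).normal(10.0, 5.0, 366)   # one year of daily means (invented)
day_to_day = np.diff(daily)                                # 365 day-to-day changes

print(day_to_day.mean())                                   # average day-to-day change
print((daily[-1] - daily[0]) / 365.0)                      # identical: (last - first) / 365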

Reply to  Nick Stokes
January 20, 2017 12:41 am

That average is just (diff last-first day)/365. Not useful.

Maybe, if that was the only thing I used that data for, but it’s not.

Reply to  Nick Stokes
January 20, 2017 12:49 am

And of course, that would also remove all of the lumpiness from temps throughout the year.

Reply to  Nick Stokes
January 20, 2017 5:39 am

The lumpy bits that get thrown away: [graph]
Each slope is the average of a large number of stations (I think this is either US-only or global), and there is a nice slope between peaks that can be used; they come from a known amount of solar input that varies right along with temperature.

Pat Frank
Reply to  Pat Frank
January 20, 2017 4:12 pm

micro6500, get yourself a copy of Bevington and Robinson “Data Reduction and Error Analysis for the Physical Sciences.” If you google the title you may find a free download site.
That book will tell you what you need to know. Unfortunately, it doesn’t say much about systematic error. Few error discussions do, and most of those treat it as a constant offset with a normal distribution.
When systematic error is due to uncontrolled environmental variables, it’s not constant and cannot be treated as normally distributed. The only way to detect it is to do calibration experiments under the same measurement conditions. Data contaminated with systematic error can look and behave just like good data.
The only way to deal with it, if it cannot be eliminated from the system, is to report the data with an uncertainty qualifier. All the land surface temperature measurements, except for those measured using the aspirated sensors in the Climate Research Network, are surely contaminated with considerable systematic error; all of which is ignored by the workers in the field.

Reply to  Pat Frank
January 20, 2017 5:13 pm

The only way to detect it is to do calibration experiments under the same measurement conditions.

Assuming true, we don’t have it. There are logs about station moves and care at some according to Steve, but that isn’t calibrating stations.
What I tried to do is exploit the data I had, and not just repeat the same process the others have used; I think we’ve seen that if you do the same basic things, you’ll get the same basic results.
I take the philosophy that what I do removes some of the possible types of error, and I believe it gives me better uncertainty numbers, and it fails on the same errors that no one fixes.
I’ll look for that book.

Pat Frank
Reply to  Pat Frank
January 20, 2017 5:36 pm

Rob Bradley, systematic error in the air temperature measurements is not my assumption at all. It has been demonstrated in published calibration experiments.
For example: K.G. Hubbard and X. Lin (2002), Realtime data filtering models for air temperature measurements, Geophys. Res. Lett. 29(10), 1425; and X. Lin, K.G. Hubbard and C.B. Baker (2005), Surface Air Temperature Records Biased by Snow-Covered Surface, Int. J. Climatol. 25, 1223-1236.
Those do not exhaust the published surface station sensor calibrations. They all show non-normal systematic temperature measurement error.
SST calibrations are more sparse, but those that exist also show systematic errors. For example: J.F.T. Saur (1963), A Study of the Quality of Sea Water Temperatures Reported in Logs of Ships' Weather Observations, J. Appl. Meteorol. 2(3), 417-425.
The errors are present, they are large, they do not average away, and they make the historical surface air temperature record useless to establish the trend or rate of temperature increase since 1900.

Reply to  Pat Frank
January 20, 2017 6:05 pm

The errors are present, they are large, they do not average away, and they make the historical surface air temperature record useless to establish the trend or rate of temperature increase since 1900.

I don’t agree. They might not be suitable as they are used. But there is useful information to be gleaned from the records.
The problem is that the only thing you've gotten is a sketchy anomaly based on a lot of stations that don't exist. If you're at all interested, follow my name; on the oldest page there, at the top, is a link to sourceforge.net, and all of the area reports and code are there. The charts are just a fraction of what's available. I can build far more reports that need examining than I can do.

Reply to  Pat Frank
January 20, 2017 6:07 pm

At the top of this one is the SF link
http://wp.me/p5VgHU-13

Pat Frank
Reply to  Pat Frank
January 20, 2017 7:02 pm

micro6500, “I don’t agree.”
I cited some published calibration experiments in the reply to Rob Bradley. You can ignore them. You can pass them off. They won’t disappear.
Neglect the error, play a pretence. That, apparently, is the law in science.
They might not be suitable as they are used. But there is useful information to be gleaned from the records.
Only if you’re interested in temperature changes greater than ±1 degree C. And that’s being generous.

Reply to  Pat Frank
January 20, 2017 7:38 pm

Well, it's a good thing I'm not using the data like that then, isn't it?

Pat Frank
Reply to  Pat Frank
January 21, 2017 12:43 am

Richard Baguley, that 1963 study you disdain was the most extensive investigation, ever, of the accuracy of SST measurements from engine intakes. Does data become invalid because it was measured years ago? Is that how your science works? Do you disdain all air temperatures measured before 1963, too?
Saur’s study reveals the error in temperatures obtained from ship engine intake thermometers, which make up the bulk of SST measurements between about 1930 and 1980. The error seriously impacts the reliability of the surface temperature record since 1900, which is what interests us here.
As to Argo errors, see, for example, R. E. Hadfield et al. (2007), On the accuracy of North Atlantic temperature and heat storage fields from Argo, JGR 112, C01009. They deployed a CTD to provide the temperature reference standard.
From the abstract: “A hydrographic section across 36 degrees N is used to assess uncertainty in Argo-based estimates of the temperature field. The root-mean-square (RMS) difference in the Argo-based temperature field relative to the section measurements is about ±0.6 C. The RMS difference is smaller, less than ±0.4 C, in the eastern basin and larger, up to ±2.0 C, toward the western boundary.”
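The RMS difference quoted in that abstract is simply the root of the mean squared pointwise difference between the Argo-based field and the section (CTD) measurements; a sketch with placeholder values, not the Hadfield et al. data:

```python
import math

def rms_difference(field_a, field_b):
    # Root-mean-square of the pointwise differences between two co-located fields
    sq = [(a - b) ** 2 for a, b in zip(field_a, field_b)]
    return math.sqrt(sum(sq) / len(sq))

# Placeholder values only
argo    = [18.2, 17.9, 16.5, 15.1]
section = [18.0, 18.3, 16.1, 15.8]
print(rms_difference(argo, section))  # ~0.46
```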

Barbara Hamrick
Reply to  Thomas Graney
January 19, 2017 7:08 pm

Thomas,
Even accepting there is as much warming as claimed by those intolerant of any skepticism, I do not find any evidence that it is human-caused. CO2 is rising and global average temperature appears to be rising, but correlation is not evidence of causation.
The proper scientific course is for those who hypothesize we are experiencing runaway or dangerous warming (due to increased concentrations of CO2 that result in an amplification of warming by inadequately known and potentially completely unknown feedback mechanisms) to show that the warming to date is not consistent with natural causes. I.e., those proposing the hypothesis of runaway warming are actually the ones that have an obligation to demonstrate that any observed warming is not consistent with natural causes. I have seen no evidence they have ever made an attempt to do that.

Justanelectrician
Reply to  Barbara Hamrick
January 19, 2017 7:24 pm

“if you use a broken ruler to measure the growth of a tree, your measurement of the height of that tree might be wrong, but clearly you’ll know that the tree is growing.”
What if you measure less than half of the tree with your broken ruler, and then guess (excuse me – extrapolate) the rest? Do you clearly know that the tree is growing then?

Justanelectrician
Reply to  Barbara Hamrick
January 19, 2017 8:23 pm

FWIW, I used extrapolate instead of interpolate because I was thinking of the Arctic (since that is where the bulk of the warming shows up). I think interpolate is the correct word for what happens in the Antarctic, because there's a station at the South Pole, so they are actually infilling between two knowns; but in the Arctic they are guessing what lies beyond the northernmost stations, which would be extrapolating. Probably doesn't mean much, but to me interpolation sounds more accurate because your error is somewhat bounded by the knowns on each side, while the errors are almost unlimited when extrapolating.
Purely hypothetical example of what can happen when extrapolating: suppose someone was to take the temperatures during a twenty-year recovery from an extreme cooling period (imagine fears of a looming ice age), and then extrapolate that recovery-period trend indefinitely into the future (I know no one would actually do that – I said it was hypothetical). Why, the projections would be ridiculous, and would serve as a warning to would-be extrapolators for decades.
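To make the interpolation/extrapolation distinction concrete, here is a toy sketch; the latitudes, anomalies, and assumed trend are all invented:

```python
# Two "known" stations bracketing a gap (Antarctic-style case) vs. nothing beyond
# the northernmost station (Arctic-style case). All numbers are invented.
lat_a, anom_a = 70.0, 1.2   # northernmost real station
lat_b, anom_b = 90.0, 2.0   # hypothetical pole station, present only in the Antarctic case

def interpolate(lat):
    # Linear infill between two knowns: the result stays between anom_a and anom_b
    w = (lat - lat_a) / (lat_b - lat_a)
    return anom_a + w * (anom_b - anom_a)

def extrapolate(lat, trend_per_deg=0.04):
    # Guess beyond the last station: the assumed trend, not a measurement,
    # controls the answer
    return anom_a + trend_per_deg * (lat - lat_a)

print(interpolate(80.0))   # 1.6, bounded by the endpoints
print(extrapolate(90.0))   # 2.0 with this trend; a different assumed trend gives a different answer
```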