A Return to the Question "Was 2014 the warmest year?"

Guest Post by Bob Tisdale

UPDATE: The author of the post has now been listed at the end of the Initial Notes.

# # #

This is a repost of a blog post written by a well-known and well-respected climate scientist. To date, it is one of the best answers I have come across to the often-asked question, “Was 2014 the warmest year?” What sets it apart from most articles is its down-to-Earth discussion of probabilities.

INITIAL NOTES:

  1. This is not a discussion of why 2014 might be warmest. For that, you’ll need to refer to the blog post here.
  2. The data discussed in the following post is the old version of the NCDC data, not the newly revised NCEI data introduced with Karl et al. (2015).
  3. The topic of discussion is surface temperature data, not lower troposphere data.
  4. This is not a discussion of adjustments to surface temperature data. It is also not a discussion of the slowdown in global surface warming.
  5. The basis of the discussion is: given the surface temperature data we had in hand at the end of January 2015, could we say that 2014 was the warmest year?

I would like the content of the post to be the topic of discussion on the thread, not the author. If you know who the author is, or have taken the time to search for the blog in which the following post appears, please do not identify the author by name. Later in the day, I will provide an update with a link to the original post and let you know who the author is.

UPDATE

The author of the blog post is John Kennedy of the UK Met Office. He blogs occasionally at DiagramMonkey. The original post was published on January 31st.

[End preface. The repost follows.]

The question of whether 2014 was or wasn’t the warmest year has recently exercised the minds of many. The answer, of course, is…

No.

At some point in the past, the Earth was a glob of molten rock pummelled by other rocks travelling at the kind of speeds that made Einstein famous, dinosaurs late and a very, very, very loud bang. There have also been periods, more hospitable to life (of various kinds), where global temperatures were in excess of what they are today.

However, if we narrow the scope of our question to the more conventional and cosmically brief period covered by our instrumental temperature record – roughly 1850 to now – the short answer is…

Maybe.

This has been an answer to a Frequently Asked Question on the Met Office website (http://www.metoffice.gov.uk/hadobs/hadcrut4/faq.html) and has been the source of occasional ridicule.

That’s fine.

Obviously, one year was the warmest[1]. In other words, according to some particular definition, the global average of the temperature of the air near the surface of the Earth in 2014 or some other calendar year was higher than in any other. Unfortunately, we don’t know what that number is for any particular year. We have to estimate it[1.5] from sparse and occasionally unreliable measurements. Some of them made with the help of a bucket.

That gap, the gap between the estimated value and the unmeasurable, might-as-well-be-mythical, actual global temperature is the reason for the “Maybe”. This is a common problem familiar to anyone who has attempted to measure anything[2]. If you are unfamiliar with it, ask a room full of people what time it is. You’ll get a range of answers[3]. These answers will be clustered close to the actual time, but not exactly on it. Most people are used to living in this chronological fog of doubt. They allow for the fact that watches and reality never line up precisely.

For global temperature (or any other measurement for that matter) we don’t know exactly how large that gap is, but we can by diverse methods get a reasonable handle on what kind of range it might fall within. Most people’s watches are within five minutes either side of the “right time”. Or, to put it another way, the right time is usually within five minutes either side of what most people’s watches say. That range is the uncertainty.

The good news is that, armed with this uncertainty information for global average temperatures, there are some years for which the answer to the question “Well, what about this year, could this year be the warmest?” is, resoundingly, undoubtedly, 100%: No.

Non. Nein. Niet. Nopety, nopety, noooooo.

The number of years in the global temperature record which definitely aren’t the warmest is quite large. I would go so far as to say, it’s most of them. Here, for your enjoyment, is a list of definitely-not-the-warmest years.

1850, 1851, 1852, 1853, 1854, 1855, 1856, 1857, 1858, 1859, 1860, 1861, 1862, 1863, 1864, 1865, 1866, 1867, 1868, 1869, 1870, 1871, 1872, 1873, 1874, 1875, 1876, 1877, 1878, 1879, 1880, 1881, 1882, 1883, 1884, 1885, 1886, 1887, 1888, 1889, 1890, 1891, 1892, 1893, 1894, 1895, 1896, 1897, 1898, 1899, 1900, 1901, 1902, 1903, 1904, 1905, 1906, 1907, 1908, 1909, 1910, 1911, 1912, 1913, 1914, 1915, 1916, 1917, 1918, 1919, 1920, 1921, 1922, 1923, 1924, 1925, 1926, 1927, 1928, 1929, 1930, 1931, 1932, 1933, 1934, 1935, 1936, 1937, 1938, 1939, 1940, 1941, 1942, 1943, 1944, 1945, 1946, 1947, 1948, 1949, 1950, 1951, 1952, 1953, 1954, 1955, 1956, 1957, 1958, 1959, 1960, 1961, 1962, 1963, 1964, 1965, 1966, 1967, 1968, 1969, 1970, 1971, 1972, 1973, 1974, 1975, 1976, 1977, 1978, 1979, 1980, 1981, 1982, 1983, 1984, 1985, 1986, 1987, 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995, 1996, 1999 and 2000.

Out of a record, which currently runs to 165 years, 149 years definitely aren’t the warmest. To this we can add a few additional years that are distinctly unlikely to be the warmest.

1997, 2001, 2004, 2008, 2011 and 2012.

And, while we’re at it…

1998, 2002, 2003, 2005, 2006, 2007, 2009, 2010, 2013 and 2014.

Pick any one of those years and, more likely than not, it won’t be the warmest year either. Careful readers will have noticed that there is not a single year in all of those 165 years that is unaccounted for; the vast majority of years definitely aren’t the warmest, but even in the small remainder there is no year that is more likely to be the warmest year than not.

We really should have stuck with “maybe” because this is going to take a while to unpick.

Seriously, folks, consider maybe.

No? OK. This is on you.

According to a very good global temperature data set, 2014 was estimated to be 0.56°C above the long-term average. The uncertainty on that estimate is about 0.10°C. In other words, according to that data set there’s about a 95% chance that the true global temperature will be between 0.46°C and 0.66°C. Likewise, we can consider 2010, with an estimated global temperature of 0.53°C and an uncertainty, again, of about 0.10°C. If these were the only two years and this was all we knew, we could calculate the probability that 2014 was warmer than 2010. It’s about 69%. We can also compare 2014 to 2005 (0.56°C vs 0.52°C). In this case 2014 is about 75% likely to be warmer than 2005.
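The pairwise comparison can be done in closed form. Here is a minimal sketch, assuming the quoted ±0.10°C is a 95% interval (so σ ≈ 0.05°C) and that the errors for the two years are independent Gaussians; the post’s own figures come from NOAA’s year-by-year uncertainties and method, so this toy calculation lands near, but not exactly on, the quoted 69% and 75%:

```python
from math import erf, sqrt

def p_a_beats_b(mu_a, mu_b, sigma=0.05):
    """P(A > B) for two independent Gaussians with a common sigma."""
    z = (mu_a - mu_b) / sqrt(2 * sigma**2)   # standardized difference
    return 0.5 * (1 + erf(z / sqrt(2)))      # the normal CDF at z

print(p_a_beats_b(0.56, 0.53))  # 2014 vs 2010 -> ~0.66
print(p_a_beats_b(0.56, 0.52))  # 2014 vs 2005 -> ~0.71
```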

However, to work out the probability that 2014 is the warmest year on record, we have to compare it to all the other years at the same time. This is a slightly more involved calculation, so we’ll build up to it. First by asking what’s the probability that 2014 is warmer than both 2010 and 2005.

We’re going to do this using a Monte Carlo method. We’ll take the best estimates for 2014, 2010 and 2005 and use the uncertainties to generate possible “guesses” of what the real world might have done[4]. We’re going to do that thousands of times and count how often 2014 comes out on top.
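Here is a minimal sketch of such a Monte Carlo in Python. The 2014/2010/2005 anomalies are the ones quoted above; σ = 0.05°C again assumes the ±0.10°C is a 95% range, and the independence assumption is the one spelled out in footnote 4. Adding more years to the dictionary reproduces the later steps of the argument:

```python
import numpy as np

rng = np.random.default_rng(42)

# Best estimates (deg C anomaly) quoted in the post; sigma assumes the
# quoted +/-0.10 is a 95% interval. Extend the dict to add more years.
years = {2014: 0.56, 2010: 0.53, 2005: 0.52}
sigma, n_sims = 0.05, 200_000

# One Gaussian "guess" per year per simulation...
draws = {yr: rng.normal(mu, sigma, n_sims) for yr, mu in years.items()}

# ...then count how often the 2014 guess tops every other year's guess.
wins = np.ones(n_sims, dtype=bool)
for yr, d in draws.items():
    if yr != 2014:
        wins &= draws[2014] > d

print(f"P(2014 warmest of these {len(years)} years) ~ {wins.mean():.2f}")
```

With these placeholder inputs the answer comes out near the figure quoted below, though not identical to it, since the real calculation used each year’s own uncertainty estimate.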

The probability that 2014 is warmer than both 2010 and 2005 is about 60%, less than the probability that 2014 is warmer than either one or the other separately. If we add 1998 into the mix, then the probability drops even further, to 56%. The more years we add the lower that probability goes. Why does that happen? Simply, each year gets a crack at being warmer than 2014. The more years there are, the higher the chance that just one of them will be warmer. And one year is all it takes.

However, this process doesn’t go on indefinitely. As we move further down the list of warm years, the probability that a year is warmer than 2014 drops rapidly. Soon we get to the point that it’s so unlikely that a year was warmer than 2014 that we can drop it from our calculation and it makes no difference. The probability that 2014 is warmer than 2010, 2005, 1998 and 2013 is 50%. If we compare 2014 to the other nine of the ten warmest years the probability that it comes out on top is about 47%. If we go further down the list than that the probability doesn’t change. 47% is therefore the probability that 2014 is the warmest year on record.

If we do the same analysis for a different, but equally excellent data set, we’ll get a slightly different set of probabilities, but the basic pattern will be the same. In this case 2014 has about 39% probability of being the warmest year on record.

We can repeat these analyses focusing on other years (is 2010 the warmest? 2005? 1998?) and in each case the probability will be lower than for 2014. That was all a bit tedious, but based on this simple analysis it turns out that no year is more likely than not (greater than 50% probability) to be the warmest year on record. On the other hand, we know that one year has to be the warmest, which is, if you are so inclined, pleasingly paradoxical as questions of probability often are.

We can rephrase the question and ask which year has the highest probability of being the warmest year? The answer based on these two data sets is 2014. As one blogger (I can’t remember who) put it, no year has a better claim.

All of the above needs the rather large caveat: “based on these two data sets” and “based on this particular method”. The probabilities I calculated depend on the data set and on the method. Change either one, change the probabilities. We could look at other data sets, such as those produced by Berkeley Earth (who declared 2014 a tie with 2010 and 2005), or the ECMWF reanalysis (which had 2014 in the top 10% of years in their reanalysis, nominally third warmest). Cowtan and Way look poised to put 2014 in second place. There’s no way to rigorously combine all this information to get a single best answer to any of the questions we might want to ask, but it does underline the fact that there is uncertainty and that it is limited.

For example, there’s no data set of global surface temperature that places 2014 outside the top four years based solely on best estimates. Based on those data sets that have uncertainty estimates, it is very unlikely that 2014 is outside the top 10. It’s quite unlikely that it’s outside the top 5.

So, 2014 was a very warm year. Was it a top 10 year? Yes. A top 5 year? More likely than not. The warmest?

Maybe.

1. Unless the thought-provokingly-fine tuning of various fundamental parameters stretches as far as global-mean temperature. On earth. In the 21st century. This has not, to the best of my knowledge, been previously suggested. You saw it here first, folks.

1.5. There are lots of different estimates of global temperature and, obviously, in each of those there will be a year that is warmer than any other.

2. The textbook example is the carpenter’s maxim: measure twice, cut once.

3. Usually. The exception would be if a large fraction of them recently had cause to synchronize their watches, something that Hollywood would have me believe occurs a short, and presumably well-measured, period before it all kicks off.

4. To do this we assume that the distribution of errors is Gaussian – the famous bell curve – with mean equal to the best estimate and standard deviation equal to the estimated 1-sigma uncertainty. Errors are considered to be independent from year to year. This is a lot simpler than the real world is, but it will give us an intuition for what’s going on and how uncertainty interacts with rankings. This analysis is also a lot simpler than the one NOAA used. Consequently, the probabilities I get will be somewhat different.

[Update: 4/2/2014 corrected 2005 global temperature anomaly for NCDC’s data set. Was quoted as 0.54 now, more correctly, 0.52]

[Update: 6/2/2015 First, the date immediately above this one is wildly wrong. Second, Lucia pointed out that the mystery blogger who said no year has a better claim was, in fact, Nick Stokes. Third, Significance has reposted this here.]

175 Comments
The Ghost Of Big Jim Cooley
July 13, 2015 4:16 am

Was 2014 the warmest year?
We don’t know. Some people think they know. Some people think we know all about climatology. Whenever I have had to discuss climate with someone who wanted my opinion, I say we don’t know. So we cannot possibly know if 2014 was the warmest year since records began. Anyone who says it definitely was, is a liar.

The Ghost Of Big Jim Cooley
Reply to  The Ghost Of Big Jim Cooley
July 13, 2015 4:23 am

I left finding out who it was until after I wrote. Now I know, the actual title was, ‘Was 2014 the warmest ever year’. Even worse. So this person’s article title is at variance with the text.

The Ghost Of Big Jim Cooley
Reply to  The Ghost Of Big Jim Cooley
July 13, 2015 4:44 am

Ah. The original would cost me a minimum of $6, it seems.

The Ghost Of Big Jim Cooley
Reply to  The Ghost Of Big Jim Cooley
July 13, 2015 5:05 am

Roger that, Bob.

Editor
Reply to  The Ghost Of Big Jim Cooley
July 13, 2015 5:50 am

I did a Google search for a string that yielded two hits – this post and one other that did not have “ever” in the title and is free. Still haven’t found the $6 reference! I’ll share my search string, I’m rather proud of it, after Bob provides the link.
My compliments to the original author for a very well written explanation understandable to a wide range of people.

The Ghost Of Big Jim Cooley
Reply to  The Ghost Of Big Jim Cooley
July 13, 2015 7:15 am

Ric, I trust others will respect Bob’s wishes and not click on it, but here is the link. It’s $6 to rent (whatever that is), $15 to see it in a cloud (again, whatever that means), and $38 to download it as a PDF (I know what that is). I Googled it too, but obviously you and Bob know where it hangs out for free. When I Googled it, I left ‘ever’ in, and evidently shouldn’t have.

The Ghost Of Big Jim Cooley
Reply to  The Ghost Of Big Jim Cooley
July 13, 2015 7:18 am

Sorry, will leave link out until Bob completes this thread.

The Ghost Of Big Jim Cooley
Reply to  The Ghost Of Big Jim Cooley
July 13, 2015 7:22 am
Editor
Reply to  The Ghost Of Big Jim Cooley
July 13, 2015 8:50 am

About my search string. I look for “big” “sciency” words that are unlikely to appear in other documents, especially where two or three occur in a phrase. In this case I lucked out with |”Non. Nein. Niet. Nopety”|. Okay, not very sciency, but nopety would do just fine.
2 results (0.58 seconds)
Search Results
Was 2014 the warmest year? | Diagram Monkey
https://diagrammonkey.wordpress.com/…/was-2014-the-warmest-year/
Jan 31, 2015 – Non. Nein. Niet. Nopety, nopety, noooooo. The number of years in the global temperature record which definitely aren’t the warmest is quite …
A Return to the Question “Was 2014 the warmest year …
wattsupwiththat.com/…/a-return-to-the-question-wa…
Watts Up With That?
5 hours ago – Non. Nein. Niet. Nopety, nopety, noooooo. The number of years in the global temperature record which definitely aren’t the warmest is quite …

rgbatduke
Reply to  The Ghost Of Big Jim Cooley
July 13, 2015 5:24 am

There is a fundamental problem with the analysis, especially extended back to 1850. Specifically, the error estimate for the present is around 0.1, but around is not the same as exact. Furthermore, the basis for the error estimate itself is an estimated basis — it has a number of assumptions built into it and it is not, by any stretch of the imagination, the standard deviation of a set of independent and identically distributed samples drawn from a stationary distribution. It does not have an axiomatic basis — the error estimate itself has biases in it that cannot be independently estimated because they are based on assumptions that cannot be independently tested.
To make this clear, let’s consider HadCRUT4, as it is a dataset I have on hand — including its error estimates. Here is the line for 1850:
1850 -0.376 -0.427 -0.338 -0.507 -0.246 -0.542 -0.211 -0.518 -0.239 -0.595 -0.162
The first number is the “anomaly”. I don’t want to discuss the difficulties of using an anomaly instead of an absolute estimate of global average temperature but IMO they are profound. Nevertheless, it is important to remember that this is what we are doing in the discussion above, Bob, because the uncertainty in the actual global average temperature is “around” 1 C, not 0.1 C. So when the article asserts “warmest year” what it really means is “highest anomaly” computed “independently” of the actual global average temperature which is paradoxically much less precisely known.
The last two numbers are the supposed lower and upper bounds on the temperature estimate. One has to assume that these bounds are some sort of “95% confidence” interval, but of course they are not, not really, because the error estimate is not based on iid samples and hence there is no particularly good reason to think that the central estimate is normally distributed relative to the true temperature, oops, I mean “anomaly”. It is also the case that the other entries are supposedly error estimates as well that are somehow combined into the last two numbers, and hence the uncertainty in the uncertainties is likely compounded. Nevertheless, we see that the anomaly in 1850 could have been as low as -0.595 and as high as -0.162. A bit of subtraction and we see that HadCRUT4 estimates the anomaly in 1850 to be -0.376 \pm 0.216 with approximately symmetric error estimates. 0.216 is not particularly close to 0.1 — in fact it is over twice as large.
Let’s consider the line for 2014:
2014 0.555 0.519 0.591 0.532 0.578 0.456 0.654 0.508 0.603 0.445 0.666
This line may not be current — they keep tweaking the numbers as the next global meeting to address global warming draws near — but it is what I downloaded at my last opportunity. Note that the anomaly is pretty close to 0.555 \pm 0.110. Each year comes with its very own error, and the errors vary from 0.08-ish to 0.12-ish in the 2000s and not quite twice that in the 1800s.
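For anyone who wants to check the arithmetic, here is a minimal sketch that pulls the anomaly and the 95% half-width out of the two quoted lines, assuming, as the comment does, that the last two columns are the combined lower and upper bounds:

```python
lines = [
    "1850 -0.376 -0.427 -0.338 -0.507 -0.246 -0.542 -0.211 -0.518 -0.239 -0.595 -0.162",
    "2014 0.555 0.519 0.591 0.532 0.578 0.456 0.654 0.508 0.603 0.445 0.666",
]

for line in lines:
    fields = [float(x) for x in line.split()]
    year, anomaly = int(fields[0]), fields[1]
    lo, hi = fields[-2], fields[-1]          # combined 95% bounds
    print(f"{year}: {anomaly:+.3f} +/- {(hi - lo) / 2:.4f}")
```

This recovers the ±0.216 (1850) and ±0.110 (2014) half-widths used in the argument.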
This is a serious problem. Error estimates for 1850 of only 0.2 C compared to contemporary error estimates of 0.1 C are simply not credible. They are in-credible. One, the other, or both are absurd. To put it bluntly, there is no way in hell that we know the global average temperature, or the global average temperature “anomaly”, almost as precisely in 1850 as we do today (where within a factor of 2 in the error estimate is absolutely “almost as precisely”). For one simple thing, a rather enormous fraction of the Earth’s surface was still terra incognita in 1850. Phenomena such as El Nino and the Pacific Hot Spot that dominate the temperature estimates for 2014 would have passed unmeasured in 1850 — El Nino itself had not yet been observed or named. Antarctica was basically totally unexplored. The North Pole — far more accessible than the South — was not reached until the 20th century, although attempts to reach it date back into the 19th. Africa, South America, much of Australia, western North America, Siberia, China, Southeast Asia, and the bulk of the Pacific and South Atlantic Ocean — rarely visited to totally unexplored, and certainly not routinely sampled with reliable equipment and methodology for temperature. Look how much NOAA changed its anomaly this year on the basis of “corrections” to SSTs measured by ships (ignoring the one source of truly good data, the ARGO buoys). Now imagine the measurements being made in wooden sailing ships by indifferent ship masters along whatever sea routes happened to be travelled in any given decade.
In my opinion the error estimates for the anomaly in the 19th century are understated by at least a factor of 3. The error estimates for the first half of the 20th century are understated by a slightly smaller but still large factor, perhaps around 2. I’m not entirely happy with error estimates of 0.1 C for contemporary measurements — not given the long list of “corrections” that have been and continue to be made that produce variations of this order and the spread in the different anomaly estimates. This might be a standard deviation (if this has any meaning in this context) but it certainly is not a 95% confidence interval, not with a spread of anomaly estimates that differ by this general order.
All of this becomes painfully obvious if one actually looks at and compares global average temperature estimates instead of anomalies. We do not know the current global average temperature within a full degree C, not at 95% confidence. The temperature record we have is sparse over much of the globe today, although with ARGO it is finally starting to become less sparse. This record has been “adjusted” to within an inch of its life, to the point where if one plots the adjustments against carbon dioxide level in the atmosphere, they are linearly correlated with $R^2 \approx 1$, which a sensible person would interpret as (literally) statistically incontrovertible evidence of substantial bias in the adjustment processes used. Because it is impossible to use it to form an accurate estimate of global temperature, it is manipulated to return an “anomaly” with respect to an arbitrary and supposedly self-consistent baseline that itself is only known to some precision.
I agree with Nick’s assertion that perhaps no year has a better claim than 2014, but I have to categorically reject the assertions of precision in the computation of probabilities. The claim of 2014 is nowhere near 40% likely to be correct. I’d be amazed if it were 5% likely to be correct.
rgb
[Well, they did pay (a lot of money) for those adjustments to the modern temperature records, but $R^2 \approx 1$ ?? 8<) .mod ]

John Peter
Reply to  rgbatduke
July 13, 2015 6:21 am

Sounds right to me. I wonder what Steinbeck would have said to the depression of the thirties temperatures in current surface temperature records.

Old'un
Reply to  rgbatduke
July 13, 2015 6:50 am

A great post. It is a global tragedy that the Climate Science community is blinded to the use of plain common sense in data analysis, by its obsession with supporting the CAGW hypothesis. Far from ‘saving the planet’, their blindness is likely to cost its inhabitants dear.

ferdberple
Reply to  rgbatduke
July 13, 2015 6:52 am

The uncertainty on that estimate is about 0.10°
==============
that was also the first thing that struck me. the uncertainty in the global average temperature is not known. if it was anything like 0.10° there would be no need for the ongoing adjustments.

Reply to  rgbatduke
July 13, 2015 7:19 am

As usual you do a great job of explaining why the Emperor’s clothes are not as regal as described, in fact they are worn and dirty tattered scraps, yet they parade him around and expect us to be in awe of the regal wear.

Harry Passfield
Reply to  rgbatduke
July 13, 2015 8:11 am

RGB: Just wanted to say how much I do enjoy reading your posts. OK….reading/reading/reading – based on my stats level… 🙂

Latitude
Reply to  rgbatduke
July 13, 2015 8:13 am

or to put it in common sense terms…
The uncertainty is also an estimate..
…which throws out the baby and the bath water

Reply to  rgbatduke
July 13, 2015 9:42 am

“The first number is the “anomaly”. I don’t want to discuss the difficulties of using an anomaly instead of an absolute estimate of global average temperature but IMO they are profound.”
“The first number is the “anomaly”. I don’t want to discuss the difficulties of using an anomaly instead of an absolute estimate of global average temperature but IMO they are profound.”
We use absolute temps. Not a problem, easy peasy. Your concerns about anomalies…
not so profound.
Theorizing aside, break out the keyboard and real data. Demonstrate conclusively that it’s “profound”.
It’s not.

Reply to  Steven Mosher
July 13, 2015 10:37 am

We use absolute temps

I must not understand this, as you’ve told me many times not to average absolute temps (which while I do, I don’t do anything with them, I mostly use day to day differences).
Maybe (m a y b e , yep got all the letters) you can explain this, in real sentences, if you don’t mind 🙂

cerescokid
Reply to  rgbatduke
July 13, 2015 9:44 am

Reading this comment by rgbatduke and the post on statistics from yesterday reminded me how a person’s intuition, while not explainable and hardly ever articulate, can still sort out the BS. It is comforting to see others finding problems with much of what is being passed off as climate science. Any time rgbatduke offers up some thoughts they are a worthwhile read. And then reading all the comments from the Robust statistics post was just icing on the cake. A delicious offering for a rainy morning.

GeneDoc
Reply to  rgbatduke
July 13, 2015 10:21 am

Lovely rant rgb. Love it. Specious accuracy (and precision)! Downright delusional thinking by people who really oughta know better. It’s really a completely ridiculous task, computing (and re-computing and kriging and nudging) a global average temperature. Even ARGO is too sparsely sampled, but the powers that be need a number. Wouldn’t it be great if we had a temperature series from satellites? Oh. Wait.

PaulID
Reply to  rgbatduke
July 13, 2015 11:39 am

Thank you, sir. While I assume that you normally communicate with people with a higher level of education than I have, your comment was worded perfectly for the common man to understand. For remembering that not everyone who reads this has had a college-level education, it’s greatly appreciated.

Patrick B
Reply to  rgbatduke
July 13, 2015 12:56 pm

Well said. My complaint from day 1 has been the failure of the climate community to recognize proper error analysis. Without it, all the data, analysis and theories are useless. We do not know – it’s that simple.

DD More
Reply to  rgbatduke
July 13, 2015 3:55 pm

rgb “All of this becomes painfully obvious if one actually looks at and compares global average temperature estimates instead of anomalies. ”
When NCDC did try to use / compare temperature estimates they got this. And remember 1998 was the spike year.
Current – The combined average temperature over global land and ocean surfaces for May 2015 was the highest for May in the 136-year period of record, at 0.87°C (1.57°F) above the 20th century average of 14.8°C (58.6°F),
1) The Climate of 1997 – Annual Global Temperature Index “The global average temperature of 62.45 degrees Fahrenheit for 1997” = 16.92°C.
http://www.ncdc.noaa.gov/sotc/global/1997/13
(2) 2014 annual global land and ocean surfaces temperature “The annually-averaged temperature was 0.69°C (1.24°F) above the 20th century average of 13.9°C (57.0°F)= 0.69°C above 13.9°C => 0.69+13.9 = 14.59°C
http://www.ncdc.noaa.gov/sotc/global/2014/13
14.8 >> 16.92 << 14.59
Which number do you think NCDC/NOAA thinks is the record high? Failure at 3rd-grade math, or failure to scrub all the past? (See the ‘Ministry of Truth’, 1984.)
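The unit conversions in the comment above are easy to check (the temperatures themselves are the commenter’s quoted NCDC figures):

```python
def f_to_c(f):
    # Standard Fahrenheit-to-Celsius conversion.
    return (f - 32) * 5 / 9

print(round(f_to_c(62.45), 2))  # 1997: 62.45 F -> 16.92 C
print(round(0.69 + 13.9, 2))    # 2014: anomaly + quoted base -> 14.59 C
print(round(0.87 + 14.8, 2))    # May 2015: anomaly + quoted base -> 15.67 C
```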

Reply to  rgbatduke
July 13, 2015 8:05 pm

“I must not understand this, as you’ve told me many times not to average absolute temps (which while I do, I don’t do anything with them, I mostly use day to day differences).
Maybe (m a y b e , yep got all the letters) you can explain this, in real sentences, if you don’t mind :)”
Like I said.. we USE absolute temps and we produce a record that is ABSOLUTE temps.
But we don’t average, and neither should you.. if you want a best estimate.
So, do not average average temperatures. USE them.
The approach is simple, willis showed you one approach in his post on temperature and latitude.
In words. You have a sample of measurements at lat and lon and elevation.
The goal of spatial statistics is to predict the temperature at UNSAMPLED locations using ALL
the information you have.
What information do you have : temperature, data, latitude, longitude and elevation.
The first step is to create a model that predicts the temperature at all locations given the information you have.
T = f(lat,lon, elevation, time)
if you do that regression you will see that over 90% of the variance is explained by those variables.
That means you can predict the temperature at unsampled locations… just stick in lat lon and elevation
and the time.
This surface we call “the climate”. It never changes; it’s deterministic. Google Koppen and you will have an idea about what we mean by climate: the temperature at a location and time of year. Think tropical climate.
When you build this predictive model there will be an error between what your model says and what the data says. That’s called a residual.
That residual contains the “part” of the temperature that is “random” or changing. This is the weather.
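A minimal sketch of the decomposition being described, on synthetic placeholder data; BEST’s production method is kriging-based, so this ordinary-least-squares toy only illustrates the climate-surface-plus-residual idea, not the actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
lat = rng.uniform(-60, 70, n)
elev = rng.uniform(0, 3000, n)
# Synthetic "truth": cooler toward the poles and with altitude, plus noise.
temp = 28 - 0.4 * np.abs(lat) - 0.0065 * elev + rng.normal(0, 2, n)

# Fit the "climate" surface T = f(|lat|, elevation) by least squares
# (lon and time omitted for brevity).
X = np.column_stack([np.ones(n), np.abs(lat), elev])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)

fitted = X @ coef            # the deterministic "climate"
residuals = temp - fitted    # the "weather"
r2 = 1 - residuals.var() / temp.var()
print("variance explained (R^2):", round(r2, 3))

# Predict at an unsampled location: |lat| = 45, elevation = 200 m.
print("predicted T:", np.array([1.0, 45.0, 200.0]) @ coef)
```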

rgbatduke
Reply to  rgbatduke
July 14, 2015 5:49 am

We use absolute temps. Not a problem, easy peasy. Your concerns about anomalies…
not so profound.

Let’s just say that I am skeptical about the precision of the anomaly compared to the precision of the absolute temperature. Let me see if I can find the NASA page on this, I think I bookmarked it… here:
http://data.giss.nasa.gov/gistemp/abs_temp.html
This is, as far as I can tell, Hansen’s own apologia for the fact that the model spread of global average surface temperatures is around 1 C at the same time that the anomaly is supposedly known to order of 0.1 C. Note well that everywhere else in the science of measurement, susceptibilities/differences are known to less precision than the absolute quantities, for the simple reason that one has to add the errors when differencing two things, plus the fact that one loses (relative) precision rapidly when subtracting two large quantities to make a small quantity.
This sort of thing is also a featured topic, BTW, in the book How to Lie with Statistics. Seriously. If the graph of surface temperature was presented to scale, the entire anomaly couldn’t be resolved to much better than the thickness of the line used to print the graph, as evident here:
http://www.woodfortrees.org/plot/hadcrut4gl/from:1850/to:2015/offset:287/plot/hadcrut4gl/from:1850/to:2015
Considerably less “alarming”, eh? Especially if one adds a 95% confidence interval around the global absolute temperature of roughly plus or minus 1 C (if the model spread of only a handful of models is a full degree C) — the line would be over twice as thick as the total variation.
So the real question is whether or not we can reasonably believe that thermometers measure temperature differences more accurately than they measure temperatures, especially given the average of the latter over many, many thermometers. Maybe. But an order of magnitude reduction in the error — indeed, the presentation of the error at a scale vastly smaller than the observed variance in the data — not so easy to believe.
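To put illustrative numbers on that point, assuming independent Gaussian errors (the agencies’ counterargument is that many errors are shared between a reading and its baseline and so partly cancel in the anomaly):

```python
import math

T, base = 288.0, 287.5   # absolute global mean and baseline, K (illustrative)
sigma = 1.0              # assumed 1-sigma error on each, K

anomaly = T - base
sigma_anom = math.sqrt(sigma**2 + sigma**2)   # independent errors add in quadrature

print(f"absolute: {T} +/- {sigma} K   (relative error {sigma / T:.2%})")
print(f"anomaly:  {anomaly} +/- {sigma_anom:.2f} K (relative error {sigma_anom / anomaly:.0%})")
```

The 0.5 K difference carries a ±1.4 K error: the relative precision collapses, which is exactly the losing-precision-by-subtraction effect described above.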
And in any event, an error of 0.2 C in 1850? Care to comment on that one? IMO that is absolutely, completely absurd. You’ve looked at the data, what do you think? Given the huge blank spaces on the map, either measuring the temperature in those spaces does not matter (much) to computation of the global average whatever, or else this assertion of error is absurdly wrong. I would value your comment on this particular observation.
rgb

RACookPE1978
Editor
Reply to  rgbatduke
July 14, 2015 6:03 am

Sobering summary of the problem.
Politics aside – although political control is the very center and the ultimate cause of the problem’s “solutions” – Hansen’s desperation to “paint the globe” in red temperatures has led him to his analysis methods and his distorted Mercator maps.

Reply to  rgbatduke
July 14, 2015 6:26 am

So the real question is whether or not we can reasonably believe that thermometers measure temperature differences more accurately than they measure temperatures

What I decided to do because of this is use a single station’s thermometer as the reference thermometer to calculate the change at that station. Its absolute value is questionable, even the calibration for change is questionable, but it’s likely the most accurate method to determine change at the station. So I calculate difference values from the min and max values that station records:
MnDiff1 = Tmnday2 - Tmnday1, MnDiff2 = Tmnday3 - Tmnday2
MxDiff1 = Tmxday2 - Tmxday1, MxDiff2 = Tmxday3 - Tmxday2
Trise1 = Tmxday2 - Tmnday2, Tfall1 = Tmxday2 - Tmnday1
Then I select specific stations based on samples collected by year, and by area.
I think this is a superior method to all of the other temp series. I do no infilling, no homogenizing. The results for an area aren’t a field average (like BEST provides), which requires estimating a value for the entire area, most of which is unmeasured. My results are the values the stations recorded for that area. I think this is the best that can be done with surface station data, and it tells a different story: it tells us that no matter the effect of CO2, the planet is able to cool. In fact, for the most part, global averages as recorded at the surface stations haven’t really changed a lot; heat has moved around, and I think this, with all of the infilling, creates a warming trend that isn’t there.
This is the average day to day change, Rising temp (as a proxy for temperature), and calculated Solar Forcing at each station. [chart image]
Now average diff is the average of all of the day to day changes recorded. I added an annual average for each station (as opposed to by day) and the number of stations, so you can see how the number of stations has changed, to rgb’s point about past error. [chart image]
All 7,000 stations in 2014 show significant cooling (if you divide the annual value by the number of stations, you get the same daily difference value).
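A minimal sketch of the differencing scheme just described, in pandas; the column names and toy values are assumptions for illustration, not the commenter’s actual dataset:

```python
import pandas as pd

# Toy daily min/max record for one station.
df = pd.DataFrame({
    "station": ["A"] * 4,
    "date": pd.date_range("2014-01-01", periods=4),
    "tmin": [30.0, 31.5, 29.0, 30.5],   # deg F
    "tmax": [45.0, 47.0, 44.0, 46.0],
}).sort_values(["station", "date"])

# Each station is its own reference: day-over-day differences.
df["mn_diff"] = df.groupby("station")["tmin"].diff()   # Tmin(n) - Tmin(n-1)
df["mx_diff"] = df.groupby("station")["tmax"].diff()   # Tmax(n) - Tmax(n-1)
df["t_rise"] = df["tmax"] - df["tmin"]                 # same-day rise

print(df[["date", "mn_diff", "mx_diff", "t_rise"]])
```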

Reply to  rgbatduke
July 14, 2015 6:03 am

RGB: “indeed, the presentation of the error at a scale vastly smaller than the observed variance in the data — not so easy to believe.”

http://qualityamerica.com/images/ebx_-2138538623.jpg

rgbatduke
Reply to  rgbatduke
July 14, 2015 6:47 pm

Joel D. Jackson
July 14, 2015 at 6:03 am

For independent, identically distributed (iid) samples drawn from a common, stationary distribution.
Now try again, where the samples are not independent, are not identically distributed, are not stationary, and are not drawn from a common distribution. Also compare the variance year to year and model to model. Finally, define N.
rgb

Reply to  rgbatduke
July 14, 2015 7:12 pm

I’m sorry RGB. I was under the impression you understood some of the statistical underpinnings of measurement theory.
“N” is the number of observations used in estimating the population mean.

The higher the number of observations, the lower your standard error for a given sigma.
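The textbook relation being invoked is SE = σ/√N for independent, identically distributed observations; rgb’s objection above is precisely that those premises fail for the temperature network. A trivial illustration:

```python
import math

sigma = 0.5  # illustrative per-observation standard deviation
for n in (1, 100, 10_000):
    # Standard error of the mean shrinks as 1/sqrt(N) -- if the iid premises hold.
    print(f"N = {n:>6}: standard error = {sigma / math.sqrt(n):.4f}")
```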

taz1999
Reply to  rgbatduke
July 15, 2015 1:45 pm

Well, it’d have been a crying shame to pay for the adjustments and not get some type of record. At least there is some ROI.

July 13, 2015 4:17 am

This crucial conference in Addis Ababa quietly occurring this week http://www.un.org/esa/ffd/ffd3/conference.html is the true reason for hyping 2014 temps so forcefully. We need to be watching since we are the ones on the menu and financing the whole gourmet dinner too.

July 13, 2015 4:31 am

The Surface Temperature Data Sets cited by the author are NOAA and GISS.
However, it is evident that:
NOAA .EQ. CRUD .AND. GISS .EQ. CRAP
MAYBE 1934 was the warmest year since 1850.
Earth was certainly warmer during the Medieval Warm Period and the Roman Warm Period.
There is nothing unusual about global average temperature in 2014.

Paul Westhaver
July 13, 2015 4:34 am

2014 was arguably not the warmest year.
Also, so what if it was? The earth generally has been warming since the last ice age.
Is the earth warming (maybe but seems nope) due to CO2 emissions that are a consequence of only human activity? No.
Would it be great if the earth warmed? Yes, but it isn’t.
The CAGW issue is a socialist ploy to justify a new planetary tax to move money from the rich to the UN who say they will use it for the poor. A lie to service a lie in a get rich quick scheme.

Reply to  Paul Westhaver
July 13, 2015 9:43 am

what letter in “maybe” is confusing you?

Reply to  Steven Mosher
July 13, 2015 10:44 am

That is maybe why he used the word “arguably”?

old construction worker
July 13, 2015 4:40 am

According to a few scientists we are heading into another LIA by 2030. Well, maybe. Like most of us, they do not have a crystal ball or all of the data needed to make that judgment. It may take another 6000 years of observations to test that prediction. All I know is, every time the best physicists model the universe, the models are not quite right. The universe keeps throwing them a curve ball.

daveandrews723
July 13, 2015 4:40 am

Unfortunately, reality does not matter in the “man-made global warming/climate change” debate. Only perception matters. And that is why this will go down as a very dark period in the history of science… perception is more important (even to the scientists) than reality.

commieBob
July 13, 2015 4:44 am

Why is nobody working on explaining the difference between the surface temperature record and the satellite and balloon records?
The blogger didn’t mention the satellite record. The satellite record has to be much more reliable than any of the surface station data sets.
When the surface temperature needs infilling, why don’t they use the satellite record to inform the decision? I’m not saying to use the satellite data directly but it should be possible to use it to generate a delta from a surface station.
This whole thing is garbage piled on trash piled on rubbish piled on …

Editor
Reply to  commieBob
July 13, 2015 5:58 am

I agree that the satellite record is more accurate overall than the ground record, but they measure different things. Most notable is the lag in the satellite record of developing El Ninos. Not too important when comparing a year’s worth of data. Also, the target audience is a much wider set of people than those who understand the multiple datasets and biases among them.
As for your original question, I’m inclined to think that gov’t funded research is aimed at providing the best bogus data for the Paris COP money can buy.

Editor
Reply to  Ric Werme
July 13, 2015 6:06 am

Better answer – my test for articles written for the general public was “Would my mother understand it?”
If commieBob incorporated multiple datasets into the essay or if rgbatduke used the greater error of older measurement, the result would be far less understandable to her.
I spent years (on and off!) figuring out how to explain what a computer software “race condition” was (Mom didn’t have a home computer) and was insufferably pleased with myself when I came up with a good analogy.

Reply to  commieBob
July 13, 2015 10:01 am

“Why is nobody working on explaining the difference between the surface temperature record and the satellite and balloon records?”
1. They are two entirely different beasts.
2. The surface record is a combination of SAT (2-meter thermometer records) and SST
— a pastiche of bucket, buoys, hull intake.. etc.
3. The satellite record is a conglomeration of multiple sensors over a short time period. It is NOT
a direct measurement of temperature. Temperature is DERIVED using a physics model and
SIMPLIFYING ASSUMPTIONS about the atmosphere. It is adjusted in some cases by a GCM.
4. The surface record is a non-random sampling of minimum temps and maximum temps.
5. The satellite record isn’t min and max. The sensor has two equator crossings (ascending and descending) and different patches of the earth are sampled at different times. Not min and max.
6. Balloons are super sparse. They are done in very few locations.
The bottom line is that these two records “measure” different things in different ways. They both
are constantly being revised and improved. The satellite records have changed more than other records.
That alone should tell you something about structural uncertainty.
Further, IF they matched, that ALONE would tell you NOTHING of interest. Similarly, if they don’t match, that ALONE will tell you nothing.
For example. Suppose you found that the satellites warmed at 1C while the surface warmed at 2C.
What does that tell you?
Nothing more than this: the two records disagree.
You might argue, for example, that this difference MEANS that the surface record is infected by UHI.
However, that’s an inference and not a fact. That’s one explanation of the difference; however, there are other explanations. Figuring out WHICH explanation is correct is not straightforward.
So
Why is nobody working on explaining the difference between the surface temperature record and the satellite and balloon records?
1. because nothing much turns on the question.
2. because any answer to the question is going to be MORE questionable than the records themselves.
3. Because it’s really fricking hard work for small scientific return.
That said there is some work that has been done, and I have a bunch of work on the topic. Nothing worth publishing.

wayne Job
Reply to  Steven Mosher
July 13, 2015 7:18 pm

Mosher, these temperature anomalies that so worry people are a statistical artefact calculated using dodgy data and dodgy methods. The temperature standard for us humans is 98.4F, but that varies widely even in healthy people. The thing I noticed with these temperature data sets is that some of them use a different temperature start point, which is rather odd; but that said, they guess the temperature of the world and work from there.
This means that the anomalies mean nothing and the average start temperature could be seriously wrong. Taking into account the proven warm periods in our history, maybe we should add a degree or two to the start temperature; this would mean we are running a serious deficit. I might suggest that we are about a degree colder than what is conducive to a happy world.
Believe nothing of what you hear, nothing of what you read and only half of what you see; in this way, over time, truth can filter in to the grey matter.

rgbatduke
Reply to  Steven Mosher
July 14, 2015 6:55 pm

All pretty reasonable statements. OTOH, it is worrisome if the satellite record and surface temperature record systematically diverge, as this seems as though it would eventually violate various principles of physics. Indeed, it is worrisome that the existing, growing divergence is strictly in the direction of relative warming of the surface record.
Don’t confuse a random variation of one over and under the other — fluctuating around a common trend — with a systematic, growing deviation, as they are not the same beast, and the “scientific return” isn’t necessarily small for explaining it. Especially given the truly excellent correlation between CO_2 increases, the adjustments in the surface temperature record, and the deviation. One might well be forgiven for seeing the triple play here as pretty much certain evidence of bias in the corrections.
Or not. In terms of p-values, though, or confidence intervals (take your choice) the null hypothesis that both are accurate unbiased representations is pretty unlikely.
rgb

Reply to  rgbatduke
July 14, 2015 7:06 pm

I forgot about this, and IIRC when I posted this before it might have been Mosh who said it didn’t mean much.
But I thought this looked a lot like the satellite temperature series. [chart image]
Now it isn’t going to be exact, it’s land based only, but if people agree it is a decent match for satellite, I answered Mosh’s request on why they’re different: it’s the published surface records’ processing, which I’ve been saying for a while now 🙂

David A
July 13, 2015 4:57 am

It is not quite right to ask if 2014 was the warmest year “ever”, and then say the very questionable surface adjustments are out of the equation, as well as the record divergence from the satellite data sets. (not a fair fight at all)

General P. Malaise
Reply to  Bob Tisdale
July 13, 2015 6:52 am

I applaud your (and the other contributors here at WUWT) work.
I do hope that you understand that it is a one way conversation. Most of the people here know there is no man-made warming threatening our survival. There is a cabal of elitists who are threatening our survival, and their minions are ideological. Even if you could get them to listen they would not hear what you are saying.
This battle has been going on for some time and where the truth needs to be spoken is in the grade schools and high schools of the west. The ideologues have them (the youth) now and if the battle isn’t brought there science will lose.
http://thefederalist.com/2015/07/06/the-new-totalitarians-are-here/#.VZ7vinx34HA.mailto
the above link illustrates what common sense is up against. It isn’t only the weather they try to control, they want, they insist that you fall in line or else they will attack you.
FACTS mean nothing to them.

David A
Reply to  Bob Tisdale
July 13, 2015 7:31 am

I do understand. However my objection is to the question the discussion purports to answer; was 2014 the warmest year ever? And IMV, that question can not be answered by limiting the discussion to surface data sets, and ignoring UHI adjustments etc, etc. The question may be more accurately phrased as, within the error margins of the surface data sets, and blindly accepting all the adjustments as valid, was 2014 the warmest year ever? (The answer is informative about the scientific process and error margins, but it does not answer the original question.)
The surface and the satellites have historically followed a parallel pattern within a certain spread.
The continuously growing divergence between them, as well as numerous other factors about the surface data sets, such as the declining number of stations in the data base being used, meaning greater homogenization as one example, all lead to the divergence as evidence of increasing error margins within the surface data.
I do appreciate the focus on some aspect of error margins, but I do object to it in anyway answering the question, “was 2014 the warmest year ever?”
Best regards
David A

David A
Reply to  Bob Tisdale
July 13, 2015 7:36 am

Correction to above post. Yes, I know the question is “Was 2014 the warmest year ON RECORD”, not “ever”. Perhaps I was just influenced by how the media spins it, but my message remains valid.

gbaikie
Reply to  Bob Tisdale
July 13, 2015 11:15 am

When you count that the Urban Heat Island effect increases the average air temperature of a local region significantly, and when you consider that more than half of humans live in regions with this significant warming from the UHI effect,
can it be said that, for most human beings, the air outside their homes in 2014 had the highest average air temperature since the time humans discovered the use of fire?
One problem is that humans have moved out of tropical regions into cooler regions. Today most humans live in the Temperate Zone rather than the Tropical Zone, and the Tropical Zone has a significantly higher average temperature than the Temperate Zone.
Also, I don’t know if the UHI effect has as much effect upon the average temperature in the Tropics as it does in the Temperate Zone.
So maybe one could say that, for people living in the Temperate Zone, 2014 or the 21st century has been the warmest.
And in terms of the Earth’s oceans, which relate more to Earth’s average temperature than to the average temperature humans experience: since the end of the Little Ice Age around 1850 AD, ocean temperatures have been warming, and are currently the warmest they have been for a couple of centuries. Sea levels have risen, and a large part of this rise is due to thermal expansion of the warming ocean. The human species, which began in Africa, has lived through many periods when the ocean average temperature was quite a bit warmer than today’s, such as the last interglacial, the Eemian period. And for animals which don’t live in urban areas, the world over the last few million years has, on average, been cooling, with the uptick since the end of the Little Ice Age a moderately warm period.

General P. Malaise
July 13, 2015 5:03 am

It was not the warmest, not even close, judging by my pepper and tomato crop. And this year looks like it is on track to be below average.

Glenn999
July 13, 2015 5:09 am

Wouldn’t it make more sense to look at smaller geographical areas? Perhaps there are some areas that are the “warmest ever”. But averaging hot spots with cold spots to determine a global number seems to me to degrade the significance of the data. No?
Thanks Bob.

noaaprogrammer
Reply to  Glenn999
July 13, 2015 11:20 am

Also, one could ask: “What is the warmest stretch of 365 days?”
…and using strategically placed instruments: “What is the largest value when integrating a year’s worth of temperature over time?”

Reply to  noaaprogrammer
July 13, 2015 11:34 am

…and using strategically placed instruments: “What is the largest value when integrating a year’s worth of temperature over time?”

Just in case you missed it: [chart image]
I integrate the day to day change in temp for stations with a minimum of 360 days of samples per year, and then average them. I haven’t uploaded the latest reports or code so I can’t point you to the sample size, but let me provide the 1940-2014 numbers.
Min temp average -0.00413011F
Max temp average 0.001059264F
Now this is great, here’s the Sums
Min temp -0.309758281F
Max temp 0.079444773F
I left the extra digits of precision to allow the reader to round to their own preferences, NCDC claims the measurements are +/-0.1F
Those sums are from 72992776 daily samples, so the sum of 73 million day to day temperature readings since 1940 is a fraction of a degree F, so you could argue that there’s less than 0.1F increase in max temps since 1940, and min temps have dropped -0.3F.
The temp series that are published and have more warming than this are made up!
And this is with their adjusted data.
IMO this, using their own data, proves there is no global warming, period.

Reply to  noaaprogrammer
July 13, 2015 11:40 am

There are 15,313 unique stations.

Reply to  noaaprogrammer
July 13, 2015 11:52 am

Let me make a correction on the sums: they are the sums of the annual sums, not the sum of all 73 million samples.
Okay
This is the sum of all min temps -455994.3F
Sum of all max temps 11656.9F
These values are the sums listed above divided by the number of samples.
Min temp -0.006247115F
Max temp 0.000159699F

Ivor Ward
July 13, 2015 5:13 am

……Peak waffle.
( commieBob
July 13, 2015 at 4:44 am )

July 13, 2015 5:39 am

Yes, this is a good explanation of basic, fundamental statistical analysis that is (or should be) the foundation for any advanced-level course of study. What irks me is the fact that it is a fundamental idea that is often bypassed or ignored in the popular presentation of ideas in almost every post-hoc field of study (e.g. climate science, economics, nutrition, medicine, psychology). I suppose it’s boring and difficult to explain and doesn’t garner click-throughs or ad sales, but its absence changes the meaning of an explanatory article.
Really, we should be teaching the idea of uncertainty (in an appropriate form) from an early age and reinforce it again and again throughout primary education. We can’t force people to understand, but we can do better to make (most, i.e. ±5%, 19 times out of 20) people more aware of the reality of what we can know from empirical observation.

July 13, 2015 6:26 am

Think about the question we are asking and why.
The question: Was 2014 the warmest year on record?
Why ask it: Because it would indicate the warming that is predicted by those scared of AGW.
And if we refine that question to look at the thing we really want to know we get a new question:
New question: Was 2014 indicative of exceptional warmth and the feared hockeystick?
Answer: No, definitely not. We can’t tell if it was the warmest or not, but we can definitely tell that we can’t.
So it can’t be too far out there.

dmh
Reply to  M Courtney
July 13, 2015 10:08 am

I’m amused by the whole charade.
We live in a world where:
DAILY temperature ranges can be 20 deg C
ANNUAL temperature ranges can be 80 deg C
GEOGRAPHICAL temperature ranges (pole to equator) can be 120 deg C
So debating if any given year is the “hottest” by less than 1/10th of one deg C is akin to choosing a hay stack at random and calculating the chances that there is a needle in it. It doesn’t take the explanation in the article, or rgb’s explanation (which as he noted, is predicated on the erroneous assumption that an average can be calculated at all), to figure out the ugly truth. If the change is so small that you have to debate its existence at all, it is a pretty safe bet that it just doesn’t matter.

Reply to  dmh
July 13, 2015 10:38 am

DAILY temperature ranges can be 20 deg C

The average of a large number of surface stations is ~18F

Richard G
Reply to  dmh
July 13, 2015 2:20 pm

In Eastern California, daily ranges for different stations separated by less than 200 miles are commonly 60f-80f. For individual stations it can be 30f-50f.

Alx
July 13, 2015 6:27 am

…and the unmeasurable, might-as-well-be-mythical, actual global temperature

Nice to see the term “mythical” used in relation to global temperature. How else could you describe such an elusive vaguely defined entity?

David Chappell
Reply to  Alx
July 13, 2015 7:36 am

I like to think of global temperature and its anomalies as the climate equivalent of the square root of minus one – an imaginary number – but without its equivalent usefulness in mathematics.

JimB
July 13, 2015 6:29 am

It seems to me that in an interglacial period, such as the one we are now experiencing, the trend would be for each year to be warmer than the last. Until the trend is broken and we are once again heading for an ice age. Is this naive thinking?

MikeB
Reply to  JimB
July 13, 2015 6:55 am

Temperatures in an interglacial tend to rise relatively quickly, then to fall back slowly to glacial conditions. This is the normal pattern.
The temperature peak in the current interglacial was attained about 8000 years ago and is known as the ‘Holocene Climatic Optimum’.
“…data indicate an extended period in the early to mid-Holocene when Scandinavian summer temperatures were 1.5 to 2 ºC higher than at present…… the mean July temperature along the northern coastline of Russia may have been 2.5 to 7.0 ºC warmer than present”
Since that time temperatures have tended to decline. [chart image]

gbaikie
Reply to  JimB
July 13, 2015 11:45 am

Perhaps in terms of ocean temperatures.
Or in terms of surface temperatures, it seems the average temperature has been falling for most of our interglacial period [8000 years]. Or the Holocene Maximum had significantly higher average surface temperatures. Wiki:
“The Holocene Climate Optimum warm event consisted of increases of up to 4 °C near the North Pole (in one study, winter warming of 3 to 9 °C and summer of 2 to 6 °C in northern central Siberia).[1] The northwest of Europe experienced warming, while there was cooling in the south.”
https://en.wikipedia.org/wiki/Holocene_climatic_optimum
It should be noted regarding the wiki quote that the tropics don’t change much even during a glacial period – when average global temperature is 10 degrees cooler.
And glacial periods are *mostly* about the northern hemisphere [where most of the world’s land mass is and where most humans live].

John Peter
July 13, 2015 6:31 am

Despite the disclaimer in the blog post I don’t like the setting aside of the “Mann made adjustments” to the various surface records. I cannot wait for Senator Inhofe to start his senate enquiry into the adjustments made by NOAA and GISS. Tony Heller has been posting on these now for years, but his blog posts are individually short and disjointed. WUWT should arrange for a proper analysis to be carried out in conjunction with The Global Warming Policy Foundation. In my opinion, the “homogenization” of global temperature records by NOAA/GISS/HADCRUT is the key issue in settling the amount of Global Warming occurring – if any.

Reply to  John Peter
July 13, 2015 9:34 am

There was a serious paper based on a Greek Ph.D. thesis presented at the 2012 European AGU. It looked at all (163) long-running (100 years) fairly complete (no more than 10% missing data) global GHCN stations. No geographic bias; the US was deliberately undersampled to get fairly uniform global coverage. It compared raw to homogenized and proved without any statistical doubt that there is a warming bias in the NOAA GHCN homogenization. Assuming the sample is representative of the GHCN whole, then about 1/3 (~0.25C) of the 1900-to-present warming is artificially induced by homogenization. Essay When Data Isn’t gives some examples from the paper.
The 2012 presentation and the essay were one of two separate submissions to the GWPF initiative. The other looked at GISS homogenization for all CRN 1 USHCN stations in the surfacestations.org archive. It reached the same qualitative conclusion: 9 of 10 ideally sited suburban/rural stations were warmed by GISS homogenization. A root problem is the regional expectation basis of the PHA algorithm, which does not distinguish between well and poorly sited stations.

July 13, 2015 6:35 am

Let’s see, if I’ve got this right: since 1850, 2014 is the year with the highest probability of being the warmest year, but that probability is just 47%, so it is more probable (53%) that it is not the warmest year. What a good bet – whichever side you take, you win . . . just be careful not to be too specific about what you mean by ‘warmest year’, then defend your side of the bet and take the money.
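
For what it’s worth, the kind of Monte Carlo reasoning behind that 47% is easy to reproduce. A minimal sketch in Python, with made-up anomalies and a made-up shared uncertainty (illustrative numbers only, not the NCDC values):

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical anomalies (deg C) and a shared 1-sigma uncertainty of 0.05;
    # illustrative numbers, not the real data set.
    years = [2014, 2010, 2005, 1998]
    anoms = np.array([0.57, 0.55, 0.54, 0.52])
    draws = rng.normal(anoms[:, None], 0.05, size=(4, 100_000))
    winners = draws.argmax(axis=0)          # index of the warmest year per trial
    for i, y in enumerate(years):
        print(y, (winners == i).mean())     # share of trials each year wins

With numbers like these, 2014 tops the list in roughly 40% of the trials – more than any other year, yet still short of even odds, which is exactly the bet described above.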

MikeB
Reply to  John G.
July 13, 2015 7:00 am

If I toss a coin 1000 times the most likely outcome is that it will come down heads 500 times, but it is much more likely to be some other number.

Michael 2
Reply to  MikeB
July 13, 2015 10:22 am

Indeed — you can calculate a probability for each number of times the coin comes up “heads” in 1000 tosses. It forms, approximately, a Gaussian distribution. Thus it is most likely to be 500 times, slightly less likely to be 499 or 501, and extremely unlikely to be 1 or 999. But when you compare the likelihood of it being exactly 500 to that of any of the other 999 possibilities combined, the weighted sum of all the other outcomes vastly exceeds the likelihood of exactly 500.

itocalc
Reply to  MikeB
July 13, 2015 11:59 am

For what it is worth, the binomial distribution approaches a Gaussian as the number of trials increases. The probability of getting exactly 500 heads out of 1000 coin flips is 2.5225%. The probability of getting 500 or fewer heads is 51.2613%.
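
Those two figures are easy to verify with an exact binomial computation; a quick sketch (scipy used purely to check the arithmetic):

    from scipy.stats import binom

    n, p = 1000, 0.5
    print(binom.pmf(500, n, p))  # ~0.025225: exactly 500 heads
    print(binom.cdf(500, n, p))  # ~0.512613: 500 or fewer heads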

mobihci
July 13, 2015 7:11 am

This post is like saying: please ignore the elephant in the room (the satellites) and try to work out what the floor would look like under it. Anyone who claims GISS is an “excellent data set” has been drinking the Kool-Aid, no doubt about it. What is the point of playing into these people’s fantasies?!
Was 2014 the warmest? No.

The Ghost Of Big Jim Cooley
Reply to  mobihci
July 13, 2015 7:21 am

Indeed, there isn’t an excellent dataset. If there was, there would only be an argument about whether man is causing it or not 🙂

July 13, 2015 7:30 am

My comment is why are we trying to figure out if 2014 was the warmest year based on manipulated data?
It is a waste of time and effort.

Ed Zuiderwijk
July 13, 2015 7:35 am

If 2014 had been made by Karlsberg it probably would be ……..

Charlie
July 13, 2015 7:37 am

What are the chances of 1998 being the warmest year in the last 100? Is there any chance that the warmest year was in the 1930’s? Or is that blasphemy to suggest at this point? I don’t know what to make of the warmest-year claims in the 21st century, considering all the data alterations leading up to this century and then, of course, the ones in this century. I would have to assume all of those to be sound adjustments. Even if I do, I’m not sure what a very slightly warmer year – one that merely could be the warmest – means with respect to CO2, considering climatic observations going back to the Little Ice Age and the warming thereafter. I realize these claims are for the media. That is another reason I’m very skeptical.

tomwys1
July 13, 2015 8:16 am

The question NOT raised is: “Was 2014 a dangerously warm year?”

knr
July 13, 2015 8:43 am

‘We have to estimate it from, sparse and, occasionally unreliable measurements. ‘
And on this quicksand they built a castle of ‘settled science’. Now that is amazing.

BruceC
Reply to  knr
July 13, 2015 8:36 pm

LOL … sorry, couldn’t resist.

Matt G
July 13, 2015 8:53 am

Whatever year becomes the warmest in the surface temperature data means very little with such frequent data adjustments over the decades. All they do is change the method of data collection whenever a period shows no warming, to try to get what little warming there is out of that change in future. The fact is that the rate of warming is so small, even with a record warm year in future, that it falsifies the original scare of global warming. The only data sets real scientists should take notice of are the satellite and balloon data. The surface data are changed so much that the previous decades have not been comparable for ages.
The main difference in world temperature between the 1930’s and now, for example, is not down to the planet being noticeably warmer, but to human adjustments that consistently and increasingly make it look like there is a difference. The year 1998 has been frequently changed over the years in the surface data sets, slowly making it cooler. If the data were no good in 1998 and needed frequent later adjustments, then the data are also no good now. Even the almost-coldest months ever recorded in recent years in the UK are made to look only a touch below average in the Met Office climate data.

Scott
July 13, 2015 8:53 am

Is this entire discussion moot?
The satellite record does NOT put 2014 anywhere near the warmest year since its inception in 1979.
The UAH and RSS records are arguably far more accurate than the land data for many reasons, beyond the scope of this comment.
Are the terrestrial temperature records so manipulated as to be of no value whatsoever?
If the research which may come out in the next year shows that data to have been manipulated in such a way as to intentionally warm the recent past and cool the intermediate past (early to mid 20th century – as many suspect), the entire discussion becomes meaningless – moot.
The real and burning question is – what about that data? (land based).
That’s the real “hot” question.
I for one am certainly looking forward to seeing the GWPF reports.
Evidence now seems to strongly suggest that Australian, South American and North American data have been consistently “massaged” to suit the needs of the alarmists.
We shall see……

July 13, 2015 9:01 am

I think all of this (above), while mildly entertaining to the temp-data-set cognoscenti, is a bit like arguing how many angels fit on the head of a pin after a large set of assumptions are agreed to and swallowed as the starting point.
It is a head-fake deflection of the skeptics into the weeds. We lose sight of the fact that the AGW-vested interests will continue to make whatever dubious adjustments are needed to erase the pause and at least keep up the appearance of an upward global T trend. If you don’t think so, go back and read Initial Notes #5 and then #1. Head fake. A tabloid scandal to keep people distracted while the AGW-vested interests and their politicians plan the sheep slaughter.
The only real questions are those related to CO2’s role in Earth’s climate temperature control. The questions that Lord Monckton and others ask, concerning sensitivity and what it is to within even +/-0.5 degC (compare that to the differences argued in the above essay), are at the heart of exposing the Climate Hustle.
That CO2 sensitivity discussion then requires understanding why there are so many hard and undeniable reasons the CMIP GCM-derived outputs are utter garbage (tuned with implicit circular logic, amplification of total error by propagation-iteration of many small errors, non-uniqueness of results, post hoc cherry-picking of subsets of outputs, etc).
And even this doesn’t even begin to delve into the likely role of solar variability since 1850.
No, the ?? of 2014 is simply a red herring distraction.

TonyL
July 13, 2015 9:09 am

OK, so what was the point of this post?
Was it just a trivial introduction to the statistical uses of the Monte Carlo simulation?
In measurement, we take a few givens: A = 15.55, B=15.57.
We say B is greater than A; they were measured with the exact same instruments, with the same suite of possible errors, with the same mix of uncertainty and certainty. Whatever you say about the uncertainty of A applies *exactly* to B. B measured larger than A, on an empirical, quantitative scale, so we say B is larger.
If you want to talk about anything in the real world, a few points come up.
1) Why are we talking about probabilities (in this context) now? The issue has never come up before. The argument outlined above has always held sway.
2) Was the data set corrupted? If you accept the data as given, for any argument, you *imply* that you agree with the data as given. You have just allowed the person who provided the data to frame the debate.

This is not a discussion of adjustments to surface temperature data.

Oh dear, I am off topic.
3) Why the *huge* divergence between surface and TLT? Not just in actual values; even the trends diverge radically. This has never been such a huge issue, but now it is too large to ignore, and no explanations have been offered.

The topic of discussion is surface temperature data, not lower troposphere data.

I seem to be off topic again.
Analysis is a lot more than throwing some statistical methods at some random data set, and then making claims. Some of you by now, may have discerned that I have a big problem with applying analysis effort to anything as corrupted as the surface record.

Michael 2
Reply to  TonyL
July 13, 2015 10:16 am

TonyL asks “OK, so what was the point of this post?”
KO, it was to instruct readers on the meaning of “uncertainty” and methods of calculating probabilities while assigning meaning to the outcomes of these calculations.
It is likely to fail to please very many people, most of whom wish for something more certain, even though, as the writer portrays, most people are perfectly comfortable with a degree of uncertainty in their lives. A wristwatch that is within 5 minutes of the correct time (as assigned by NIST/NBS, for instance) is acceptable for most ordinary purposes.
When digital watches first came out I was meticulous about setting mine to the exact second according to the time hack on WWV. I was still late for church, but at least I knew exactly how late (within a second or so, anyway) – assuming the meeting started exactly on time, which was usually not the case.

Louis Hunt
July 13, 2015 9:11 am

Does anyone know if recent adjustments to the NOAA temperature data set exceed the error bars on their original data set? If so, does NOAA have an explanation?

Reply to  Louis Hunt
July 13, 2015 9:37 am

Yes. Yes. No. Essay When Data Isn’t.

TonyL
Reply to  Louis Hunt
July 13, 2015 9:39 am

Your question has two answers.
A) YES, original error bars were exceeded in the late 1990s. Unfortunately, no longer provable, as the record has been scrubbed.
B) NO. Corrections have errors associated with them as well, and they pile up. After all the corrections and adjustments, nothing could exceed the error bars. You could take the icy cold of outer space, or the heat of the inner depths of hell itself, and you would not be outside the range of estimates. As an added bonus, no matter what happens, you can always claim “just what the model predicted” without telling a lie.

dp
July 13, 2015 9:13 am

Let’s say you have 2,000 watches, stopped randomly. Go through all the same math as above and you will find that, depending on when you ask what time it is, the probability is quite good that the calculated result will be within 24 hours of exact.
Are we all tired of this guesswork yet?

Editor
Reply to  dp
July 13, 2015 10:48 am

For the analog watches commonly used, I’d say the result would be within 6 hours of exact. E.g. a worst case would be that the stopped watch shows 3, the actual time is 9.

MikeB
Reply to  Ric Werme
July 13, 2015 1:00 pm

That’s interesting Ric, I was going to cut it down to within 12 hours but, for my watch, you are correct. It is always within 6 hours of the right time.
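
The stopped-watch picture is also a one-liner to simulate. A minimal sketch, assuming 12-hour analogue dials and taking the error as the circular distance around the dial:

    import numpy as np

    rng = np.random.default_rng(1)
    actual = rng.uniform(0, 12)           # the true time on a 12-hour dial
    stopped = rng.uniform(0, 12, 2000)    # 2,000 randomly stopped watches
    diff = np.abs(stopped - actual)
    err = np.minimum(diff, 12 - diff)     # circular distance on the dial
    print(err.max(), err.mean())          # max approaches 6 h, mean ~3 h

Which agrees with Ric’s worst case of 6 hours, and gives a mean error of about 3 hours.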

Joe Chang
July 13, 2015 9:18 am

on this
“the Earth was a glob of molten rock pummelled by other rocks travelling at the kind of speeds that made Einstein famous, dinosaurs late and a very, very, very loud bang”
the meteor that impacted at Chicxulub was probably not traveling faster than 30km/s relative to earth, which is Earth’s orbital speed around the sun.
Wikipedia has the Chicxulub impactor at 4×10^23 J and 11 km in diameter. For simplicity let’s assume a cube 10 km per side, which is 10^12 cubic m. Iron has a density of 7 tons per cubic m,
so we are looking at 7 x 10^15 kg, which means the velocity is approx. 10^4 m/s –
hardly relativistic (nowhere near a non-negligible fraction of 3×10^8 m/s).
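
Joe’s back-of-the-envelope is straightforward to check; a quick sketch using his stated figures:

    import math

    E = 4e23            # impact energy, J (the Wikipedia figure quoted above)
    side = 10e3         # assume a 10 km cube for simplicity
    rho = 7000          # iron, kg per cubic m
    m = rho * side**3   # 7e15 kg
    v = math.sqrt(2 * E / m)
    print(f"{v:.0f} m/s")  # ~10,700 m/s, i.e. ~10^4 m/s, nowhere near c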

rgbatduke
Reply to  Joe Chang
July 14, 2015 7:16 pm

7 x 10^15 kg,

Definitely non-relativistic. I usually present estimates of impact velocity as equal to escape velocity as a reasonable lower bound. This is convenient because the escape energy is very close to 64 MJ/kg, which is the blast equivalent of roughly 20 kg of TNT. So to convert any falling mass to “tons of TNT” equivalent explosive, one simply multiplies the falling mass by 20 (or more, if you want to assume 30 km/sec instead of 11 km/sec) and divides by 1000 (for a metric ton). So this mass would be 1.4 x 10^17 kg of TNT, or roughly 10^14 to 10^15 metric tons of TNT. The largest nuclear explosion set off by mankind was around 5 x 10^7 tons of TNT (the Tsar Bomba, at around 50 MT). This is on the order of 10 million Tsar Bombas exploding all at once (or as much as an order of magnitude or two more, depending on what speed you want to assume and how you treat the last order).
A 10 km cube is right around the size likely to cause serious mass extinction. Just a bit larger and the only things likely to survive are the microbes living deep in the Earth’s soil and life forms at the bottom of the sea that are pretty much disconnected from the state of the surface.
rgb
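
rgb’s conversion chain can be checked the same way. A sketch assuming the 11.2 km/s escape speed and the standard 4.184 MJ/kg for TNT (which gives a factor nearer 15 than 20; the 20 presumably allows for somewhat higher impact speeds):

    import math

    v_esc = 11_186                    # Earth escape speed, m/s
    e_per_kg = 0.5 * v_esc**2         # ~6.3e7 J/kg, close to the 64 MJ/kg quoted
    tnt = 4.184e6                     # J per kg of TNT
    factor = e_per_kg / tnt           # ~15 kg of TNT per falling kg
    mass = 7e15                       # kg, from the estimate above
    tons_tnt = mass * factor / 1000   # ~1e14 metric tons of TNT
    print(factor, tons_tnt, tons_tnt / 5e7)  # the last: Tsar Bomba equivalents

That lower bound comes out around two million Tsar Bombas; allowing 30 km/s instead of escape speed pushes it up toward the order of 10 million quoted above.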

Eliza
July 13, 2015 9:34 am

The Met Office, GISS, NOAA and Hadley all have an agenda and have manipulated the data to show warming. The evidence for this is now overwhelming. NONE of their data is the slightest bit credible. Yes, Goddard was right; yes, Homewood was right; Marohasy, J. Nova, Giaever, Dyson, etc. How much further must we go before the Lukewarmers get it?

July 13, 2015 9:41 am

Thanks, Bob. A very interesting post.
I don’t think it is very important to know if 2014 was the warmest year if it is by a very small amount. With the ongoing El Niño I would expect it to be the warmest in the plateau of temperatures after 1998, or shortly thereafter.
What I would not like to see is a definite cooling trend.

Michael 2
July 13, 2015 10:10 am

Informative and entertaining at the same time, with just a touch of that wry British humor. You could almost imagine this as a “Monty Python” skit.

July 13, 2015 10:11 am

Yes, rehashing 2014’s harbinger status is a distraction. Just 5 days ago the media was sizzling with articles alerting the world that Germany’s all-time record high temperature had been shattered by a staggering 0.1°F. That is currently all you need to know about climate change.

Craig Moore
July 13, 2015 10:17 am

In 50 years will anyone even care? http://www.newyorker.com/magazine/2015/07/20/the-really-big-one Spending billions of $$ merely analyzing the nits and gnats of temperature surely could be put to better use in preparing to adapt to a very dynamic and unforgiving environment.

Reply to  Craig Moore
July 13, 2015 2:46 pm

In 50 years we may well regret the push to go “paperless”. Without that push we’d have more to burn to stay warm.

Reply to  Gunga Din
July 13, 2015 2:48 pm

In 50 years we may well regret the push to go “paperless”. Without that push we’d have more to burn to stay warm.

But second hand smoke would just kill us from cancer.

Bart
July 13, 2015 10:21 am

It seems most people lose sight of the fact that whether 2014 was the hottest year on record or not, it provides no guidance as to why it might have been. Attributing it to the rise in CO2 levels is still post hoc ergo propter hoc.

July 13, 2015 10:28 am

If you look at the highest recorded temperatures for the 7 continents and Oceania, then get an average of the year recorded, it comes out to 1939.25. (Hey, I never had a course in statistics).
http://www.infoplease.com/ipa/A0001375.html
Of the 51 “states” including the District of Columbia, 10 of the record highs were posted in 1936. 23 of the states had record highs somewhere in the 1930’s.
https://en.wikipedia.org/wiki/U.S._state_temperature_extremes
Also most of the record highs in Canada were in the 1930’s
https://en.wikipedia.org/wiki/List_of_extreme_temperatures_in_Canada
But then again there were record cold temperatures in the 1930s – so who’s to say?
But if 2014 was the warmest year ever since 1850, shouldn’t at least one of the continents, or one of the states, or provinces, have recorded an all time record high temperature?
I’ll vote for 1936 as the hottest year since 1850. (or maybe 1934)

Reply to  J. Philip Peterson
July 13, 2015 11:25 am

There were three all-time state high temperature records set in 1994, in AZ, NV, and NM. They were all set at recently-opened stations in the hottest parts of their state (in the case of NM, at a nuclear waste dump). Since then, extreme highs have been declining at all three.

Reply to  J. Philip Peterson
July 13, 2015 12:18 pm

1934 stands out in western station records and is the hottest yearly average for some stations. California’s yearly average in 2014 was 2°F hotter than the previous hottest years, 1934 and 1996 (tied at 60°F), but there were zero all-time extreme record highs.

July 13, 2015 10:42 am

Yet once again I am reminded that we don’t know what the average temperature of the planet earth really is. Our best efforts of today with all the technology we have and all the money spent might be within a couple of degrees of the answer. After all, huge parts of the planet do not have a thermometer reading temperature and those ground stations that we have are way too often located in an airport or otherwise poorly sited.
And what if we could read the temperature accurately today? What about the past, which we know was hit and miss? How in the hell does anyone think we can put together an historical data set that is supposedly accurate to hundredths of a degree? Madness! Foolishness! Idiocy!
All of that is on top of the fact that temperature of the surface air is the wrong metric anyway. The total energy of the system needs to be measured and we are not even trying to do that. What is the point of it all? Seemingly all we are trying to do is convict our industrialized society of climate murder.

bw
July 13, 2015 10:47 am

Surface data are corrupt, as stated, due to UHI and un-verifiable adjustments.
Rejecting the known bad data and sampling a few known good rural stations shows zero warming.
E.g., Antarctica shows zero warming since 1958.
NOAA and GISS data could not withstand any independent audit.
Accepting the rgb error estimate for older temp data, here is a plot of land data. With satellite data added to illustrate the divergence since 1979.
http://www.woodfortrees.org/plot/best/scale:3/from:1840/plot/rss-land
The y-scaling is used to give a crude estimate of the errors in land data relative to satellite data.
The 1930s are known to be the warmest decade in the 20th century. There is no evidence that the 2001 to 2014 global temps are significantly warmer than the 1930s.

Reply to  bw
July 13, 2015 11:03 am

” … Eg Antarctica shows zero warming since 1958. … ”
I wonder why we are trying to measure the temperature of the whole planet anyway. The Team told us long ago that we could expect heating at the poles to happen first. Why are we not measuring the temperatures at the poles to see if Catastrophic Anthropogenic Global Warming is happening or not?
They got no “hot spot” and they got no “warming at the poles”. What do they got?

Reply to  markstoval
July 13, 2015 11:18 am

What do they got

A complex model of interpolated temperature estimates based on a fraction of the earth’s surface temperature.
Here’s what the surface stations measured.

July 13, 2015 12:35 pm

The original question is stupid: Debating whether 2014 was the hottest year or not is a waste of time.
99.999% of the past 4.5 billion years of “average temperature” data are unknown.
So there are very little data to compare 2014 with.
The ONLY data for comparison are from measurements made during a warming trend.
So of course there will be new records set, as long as the warming trend is still in progress.
That’s the definition of an uptrend.
Maybe not new temperature records every year, but there will be many “hottest year” records until the warming trend ends.
Saying 2014 was the hottest year on record is a deceptive way to report the average temperature – it is a degree or two F. warmer than in 1880, if you can trust the haphazard, frequently “adjusted” data – so what?
There is no reason to expect the average temperature to be the same when the measurements are made 135 years apart, because Earth’s climate is ALWAYS changing.
– The past 150 years have been a warming trend called the MODERN WARMING.
– All measurements were made during that warming trend.
– Until the Modern Warming ends, the most recent decade is always going to be the “hottest decade on record”, even if the temperature stopped rising a decade ago.
Climate proxy studies strongly suggest there have been hundreds of mild warming / mild cooling cycles between the ice ages.
There’s no reason to believe we will never have another cooling trend – it could have already started, given the lack of warming in the past decade.
I’m waiting for the “warmists” to explain how multiple ice ages came, and went, without any manmade CO2 present … since they claim CO2 is the “Climate Controller”.
Oh, I guess we’re supposed to believe Earth’s climate has been changing for 4.5 billion years from natural causes … and then suddenly in 1940 — EVERYTHING CHANGED !
Magically, in 1940, the warmists claim (and fools believe) manmade CO2 suddenly, with no known explanation, became the ONLY “climate controller”, and natural climate variations forever stopped !
CO2 is the “climate controller”?
Isn’t it funny how CO2 rose, and the average temperature fell, far more often than both variables rose at the same time (since 1940)?
1940 to 1976 = CO2 up and average temperature down
1998 to 2015 = CO2 up and average temperature down
For leftists, there is ALWAYS an environmental “crisis” coming, and the “cure” is ALWAYS more government regulations, and more taxes on the private sector … but the crisis never comes … and it eventually stops scaring people … and then a new crisis is invented
Climate blog for the average guy:
http://www.elOnionBloggle.Blogspot.com

Steve from Rockwood
July 13, 2015 1:28 pm

Anecdotal evidence is always worth a read. On July 13th, 2015, a 10 km wide ice floe sits in the middle of Frobisher Bay. “In the past 10 years we had been out boating since July 1st.” But thanks to an ice breaker…
http://www.cbc.ca/news/canada/north/in-iqaluit-icebreaker-paves-way-for-season-s-supply-ships-1.3149098

Steve from Rockwood
Reply to  Steve from Rockwood
July 13, 2015 1:30 pm

I forgot to add a reminder that warming is greater at the poles.

David A
Reply to  Steve from Rockwood
July 14, 2015 4:40 am

Warming is supposed to be greatest at the poles, but the south pole has not cooperated at all.

Andrew
July 13, 2015 2:42 pm

Yes, yes, we get it. It was the highest measured and adjusted temp on a couple of terrestrial datasets, within measurement error.
The “tie with 2010 and 2005” thing almost implies that temps have oscillated within a range without rising for a decade – sort of like pausing.
By other measures – such as the near-record freezing of the Great Lakes, snow in Greece and Cairo, record snow in North America, etc. (oh, and the long-term decline in satellite-measured temps) – it wasn’t. NASA launched the RSS and then refuses to acknowledge its existence.

July 13, 2015 2:49 pm

A Return to the Question “Was 2014 the warmest year?”

Short answer: No. 2015 was.
(At least it will be after a few tweaks….)

WilliMac
July 13, 2015 2:53 pm

Has anyone developed a simple test in which different concentrations of CO2 and O2,
housed in airtight containers and subjected to sunlight, are measured for temperature differences? Or is this just a dumb question?

Reply to  WilliMac
July 13, 2015 3:38 pm

They will tell you so, but no, it’s not a dumb question.
But you will also be told that a cold object can make a hot object hotter, and that you should do a degree in climatology to understand how, so don’t expect any rational answers.

rgbatduke
Reply to  wickedwenchfan
July 14, 2015 7:34 pm

But you will also be told that a cold object can make a hot object hotter, and that you should do a degree in climatology to understand how, so don’t expect any rational answers.

This simply isn’t true. A cold object can without question make a heated volume of space hotter, simply by slowing the flow of heat out of that space. One doesn’t need a degree of any sort to understand this, as it happens every time you put on a jacket. It is fundamentally incorrect to refer to the Earth as if it is a passively cooling, unheated object. It is an object that is in between a very hot place indeed (the surface of the sun) and outer space, which is very cold. Its surface temperature is very much determined in all sorts of ways by the colder air above the surface.
Beyond that, one can come up with a rather long list of colder objects that can slow the cooling of a hot, passively cooling object and that can raise the dynamic equilibrium temperature of an object that is being heated and cooled at the same time (like the Earth).
rgb

rgbatduke
Reply to  WilliMac
July 14, 2015 7:29 pm

Yes it is a dumb question. For the opposite reason you might expect. And no, I can’t explain the theory to you in simple terms because it isn’t a simple theory. But it is almost certainly a sound one.
If you want to understand the theory (and understand enough physics to understand the theory) you can read Grant Petty’s book “A First Course in Atmospheric Radiation”. Chapter 6 contains a comparatively simple explanation of the theory in the form of a single layer model. If you really care, I can direct you to some review papers of the theory as well, but they require some understanding of quantum mechanics.
rgb
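
For the curious, the single-layer model fits in a few lines. A minimal sketch of the standard grey one-layer toy model (not necessarily Petty’s exact formulation):

    SIGMA = 5.670e-8                 # Stefan-Boltzmann constant, W m^-2 K^-4
    S = 1361.0                       # solar constant, W/m^2
    albedo = 0.3
    absorbed = S * (1 - albedo) / 4  # ~238 W/m^2, averaged over the sphere
    Te = (absorbed / SIGMA) ** 0.25  # ~255 K effective emission temperature
    for eps in (0.0, 0.5, 1.0):      # emissivity/absorptivity of the one layer
        Ts = Te * (2 / (2 - eps)) ** 0.25
        print(f"eps={eps}: surface ~{Ts:.0f} K")

With eps = 0 the surface sits at the bare 255 K; with a fully absorbing layer it rises to about 303 K – the colder layer above raising the temperature of the heated surface below, exactly as described in the comment above.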

July 13, 2015 3:11 pm

“2014 was estimated to be 0.56° above the long term average. The uncertainty on that estimate is about 0.10°.”
If most of the data had a RECORDING accuracy of +/- 0.5 deg C, how can you claim any year is the highest? A claimed uncertainty of 0.10 deg C is ridiculous.
http://www.srh.noaa.gov/ohx/dad/coop/EQUIPMENT.pdf page 11
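
For what it’s worth, the standard statistical answer is that independent random reading errors shrink as 1/sqrt(N) when N readings are averaged, while systematic errors do not shrink at all – so the argument is really over how much of that ±0.5 is systematic. A minimal sketch of the random part:

    import numpy as np

    rng = np.random.default_rng(2)
    # 10,000 readings of a 0.56 anomaly, each with an independent +/-0.5 error
    readings = 0.56 + rng.uniform(-0.5, 0.5, 10_000)
    print(readings.mean())                         # lands very close to 0.56
    print(readings.std(ddof=1) / np.sqrt(10_000))  # standard error ~0.003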

July 13, 2015 3:34 pm

Taking all the years that the author claims “definitely were not the hottest” is pseudo-science in itself. Ignoring the fact that the error margins given for today are also completely arbitrary, it’s completely irrational to assign the same error margin to years further back in history that had less accurate instruments, more sparsely distributed.
This is not science.

July 13, 2015 3:51 pm

Why not use the actual observational records of the US, UK, Canada, Ireland, Australia, New Zealand, India, Pakistan and South Africa – all of which would be available with notes and comments in English – as the basis? Of course they will be wrong, but at least we can honestly and honorably expect the error(s) to be random and accidental. Such records may well be pretty complete back to the 1880s. If 2014 wins the contest, it would likely win back to 1850.
Perhaps use records from French speaking countries to be unAngloCentric.
A real answer, however much an educated guess. At least a WELL educated guess.

July 13, 2015 3:52 pm

So the conclusion is… 2014 is probably the warmest on record. Most datasets show 2014 as number 1, and many countries had record warmth in 2014. And forget about satellites if you want a precise temp: how would you check your local temp with a satellite?

Reply to  Johan Lorck
July 13, 2015 3:58 pm

You can’t check your local temp with a satellite. In fact, satellites don’t measure surface temperatures; they measure the temperature of the lower troposphere.

Reply to  Joel D. Jackson
July 14, 2015 12:59 am

Indeed, that’s what I was trying to say… What I mean is that satellites are not a good way to get a precise temp.

Editor
Reply to  Joel D. Jackson
July 14, 2015 6:20 pm

Satellites have a calibration source – something that radiates at a well calculable temperature that the microwave imager can view. They can check their calibration much more frequently than NWS Co-op sites ever do.
From http://www.drroyspencer.com/2010/01/how-the-uah-global-temperatures-are-produced/ :

HOW THE DATA ARE CALIBRATED TO TEMPERATURES
Now for the important part: How are these instrument digitized voltages calibrated in terms of temperature?
Once every Earth scan, the radiometer antenna looks at a “warm calibration target” inside the instrument whose temperature is continuously monitored with several platinum resistance thermometers (PRTs). PRTs work somewhat like a thermistor, but are more accurate and more stable. Each PRT has its own calibration curve based upon laboratory tests.
The temperature of the warm calibration target is allowed to float with the rest of the instrument, and it typically changes by several degrees during a single orbit, as the satellite travels in and out of sunlight. While this warm calibration point provides a radiometer digitized voltage measurement and the temperature that goes along with it, how do we use that information to determine what temperatures corresponds to the radiometer measurements when looking at the Earth?
A second calibration point is needed, at the cold end of the temperature scale. For that, the radiometer antenna is pointed at the cosmic background, which is assumed to radiate at 2.7 Kelvin degrees. These two calibration points are then used to interpolate to the Earth-viewing measurements, which then provides the calibrated “brightness temperatures”. …

Reply to  Johan Lorck
July 13, 2015 4:07 pm

Are you checking your local records before or after they’ve been adjusted?
My local high and low records have been adjusted.
Try using the WayBackMachine to see if a past list for your area has been archived, and compare it to the latest official list for changes to the past. I did. Mine were.

knr
Reply to  Johan Lorck
July 14, 2015 1:30 am

How long is the ‘record’, and how much of that period can honestly be judged against the current period?
That we can measure accurately to two decimal places now in no way alters our inability to do so in the past, nor does the current level of coverage magically fix the past lack of coverage, no matter what ‘adjustments’ are made.
In short, even if we stick to just the record, we find that across its history it may have changed so much that we may simply not be able to use it in any meaningful way.
Then we can deal with whether we actually even have enough measurements of the right type to make this judgement now in a scientifically meaningful sense. And that is far from ‘clear’.

Reply to  knr
July 14, 2015 1:37 am

OK, if you believe the global estimates are not reliable, you can check station by station, and you would see a warming trend in most of them too. Some records are very old, like the one in Austria (247 years), but it’s only since 1880 that you can get a reliable global temp.

knr
Reply to  knr
July 14, 2015 2:57 am

Johan Lorck
only since 1880 that you can get a reliable global temp.
Even in that time frame you have to question what is meant by ‘reliable’. How it is defined today, in numerical terms, may be different from how it was defined 100 years ago, when you simply could not get the same level of accuracy.
It is not the case that old data has no value, but old data cannot be viewed in the same light as new data, because the means of collecting the old data were different.
It is like using the accuracy of a clock from 1860 in the same way you would use a modern atomic clock, because they both measure the same thing. That we have got better at taking such measurements – although we still fall short – does not mean we can forget past problems, no matter how many ‘adjustments’ are thrown at them, especially when you do not know what you are ‘adjusting’ for and how much adjustment you need.
A lot of this is really basic stuff when doing experimental design: you know what it is you need to measure and you know what it takes to measure it; if you cannot do the latter, your value judgements on the former are questionable. No amount of ‘faith’ in climate ‘science’ changes that.

July 13, 2015 4:01 pm

Significant digits.
In the field I work in there is such a thing as an “MDL”, that is, a “Minimum Detection Limit” for a particular test done in a particular lab on a particular parameter.
To make a long story short: to determine a lab’s “MDL” for a particular parameter, the test is rerun several times using the same sample. (I think it’s 50 or so runs?) The most accuracy (decimal places) that can be claimed is that which (I think) is returned by 95% of the runs.
Some labs have a lower MDL than others. (Some might measure using a graduated cylinder, some a volumetric flask or pipette. All can affect accuracy.)
How many decimal places could be claimed by turn-of-the-century or even 1930 instruments? What were their, or even today’s, “MDLs”?
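
For reference, the usual EPA-style MDL procedure (40 CFR Part 136) uses at least 7 replicates and a one-sided 99% Student’s t, rather than the 50 runs / 95% half-remembered above. A minimal sketch with made-up replicate values:

    import numpy as np
    from scipy import stats

    # Hypothetical replicate measurements of one low-level spike (mg/L)
    reps = np.array([0.021, 0.025, 0.019, 0.023, 0.020, 0.024, 0.022])
    t99 = stats.t.ppf(0.99, df=len(reps) - 1)  # one-sided 99% Student's t
    mdl = t99 * reps.std(ddof=1)               # method detection limit
    print(f"MDL ~ {mdl:.4f} mg/L")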

knr
Reply to  Gunga Din
July 14, 2015 1:32 am

Oddly, most labs probably did have a good idea of this on a day-to-day basis; however, this is the type of thing that is never kept over time.

Reply to  knr
July 14, 2015 4:26 am

Older data is corrected precisely because it was not collected the same way – for example, the recent NOAA corrections for buoy and ship measurements. It is possible to test that.

RACookPE1978
Editor
Reply to  Johan Lorck
July 14, 2015 6:18 am

Johan Lorck

Older data is corrected precisely because it was not collected the same way – for example, the recent NOAA corrections for buoy and ship measurements. It is possible to test that.

So why have recent (1992 – 2014) NOAA measurements been changed every year for the past 4 years? And why has no one ever been able to issue a specific SINGLE correction for TOBS validation for EACH INDIVIDUAL temperature station whose measurements have been tampered with in the past?
If ship bucket surface-water measurements are more accurate than calibrated scientific buoy sensors, which is what you are claiming, then we should use hardware-store tape measures to set the international standards for length and surveyors’ points.
No.
Instead, you “wave your magic wand” over the entire series of every station and mumble the magic formula “TOBS”, and verily every station gets changed. WHEN was each station’s observation time changed – if it even was changed! – and why is that individual station’s time change not visible as a distinct single event in the record?
Rather, TOBS is used to change every record at every location over a 25-year period. For any given station, TOBS either happened one time or it did not happen at all. TOBS did not – could not! – affect every temperature measured from 1880 through 1979.

Reply to  RACookPE1978
July 14, 2015 6:43 am

Yes it can, because we are talking about anomalies relative to a mean.
Hopefully, NOAA doesn’t just stick to its first estimates, and corrects them whenever possible.
You might ask: “Why are these corrections always in the warming direction?” But they are not; you may not notice when it occurs, but NOAA and NASA have corrected recent temps in the cooling direction also.
Another explanation is the weak coverage of the poles, and we know the north pole has warmed fast. Correction was necessary, and you cannot contest that ocean heat content has pretty much increased.

Reply to  Johan Lorck
July 14, 2015 7:06 am

Another explanation is the weak coverage of the poles, and we know the north pole has warmed fast. Correction was necessary, and you cannot contest that ocean heat content has pretty much increased.

We know no such thing; there are not a lot of stations in the Arctic and the high north, most are on the coast, so what they see is water temp when it’s melted.
I think this should work: surface stations >66N lat
https://www.google.com/maps/d/edit?hl=en&authuser=0&mid=zFc6ZVuRjl0U.kzWuvcmwCDwA

RACookPE1978
Editor
Reply to  micro6500
July 14, 2015 8:38 am

micro6500

We know no such thing; there are not a lot of stations in the Arctic and the high north, most are on the coast, so what they see is water temp when it’s melted.

KAP MORRIS JESUP_GL
Station=043010,WBAN=99999 Lon= -033.367, Lat= 83.650, Elev= +00040
I have the hourly weather data for the Kap Morris Jesup station (latitude 83.65 north) for the years 2010, 2011, 2012 and 2013 available. That is too short a period to claim any trends, but…
DMI provides a daily weather (temperature) record for 80 north, from all station data up north, for 1959 through 2015 at their web site.
NO CHANGE IN SUMMER Arctic AIR TEMPERATURES AT ALL SINCE 1959! If plotted over time, there has actually been a slight cooling at 80 north since 2000 for summer temperatures.
Winter temperatures? Yes, they have increased from -28 deg C to -24 deg C. Therefore, the climastrologists can claim an AVERAGE Arctic temperature increase. But only when the sun don’t shine, where the sun don’t shine!

Reply to  RACookPE1978
July 14, 2015 9:01 am

I have the stations from NCDC north of 66 lat, and it’s just like everywhere else I dig into: the warming is exaggerated manure.
While it doesn’t have 2014 data, or some of my newer fields, you can find reports for 20-some thousand stations here: http://sourceforge.net/projects/gsod-rpts/

Reply to  knr
July 15, 2015 4:09 pm

Johan Lorck
July 14, 2015 at 4:26 am
Older data is corrected just because it has not been collected the same way, for example recent NOAA corrections for buoys and ship measurements. It is possible to test that.

“Corrected”? No. It’s been changed (“adjusted” if you prefer).
Just because a computer program can take old numbers (from before the digital age) and arrive at a different value, or add a few more decimal places to the old one, does not mean the computer program’s result is a “correction”, or that the added decimal places are more significant.

A Crooks
July 13, 2015 4:33 pm

There are two points that I think are of interest here.
1/ Was 2014 the warmest? – Since we are coming out of the Little Ice Age, I would not be shocked if it were. Isn’t that what the term “Little Ice Age” implies? A period of time when it was colder, and now it isn’t. Does the Met Office want to go back to the “Little Ice Age”?
2/ Is the period 1850 to 2014 in any way significant in the context of even recent geological history? – The geological record suggests that global temperature has oscillated within a fairly narrow range over the last 3.5 billion years, and the current global temperatures are not the least bit remarkable. It’s all very well to bed-wet about the current climate because we are living in it – but a scientist should be able to step back and look at the ups and downs of the bigger picture.

gbaikie
Reply to  A Crooks
July 14, 2015 12:45 am

-Does the Met Office want to go back to the “Little Ice Age”?-
Apparently they are great fans of the idea of ice skating on the Thames.
Which is not surprising, as they appear childish in so many other ways.
-2/ Is the period 1850 to 2014 in any way significant in the context of even recent geological history?-
Yes: more humans live in larger cities. For instance, in 1801 London’s population was about 1 million,
and in 2011 it was about 8 million.
https://en.wikipedia.org/wiki/Demography_of_London#Population_change
In 1801, London would have been regarded as a significant city in the world for having a large population of 1 million people, whereas a city of 1 million people is fairly common today.
And in addition to larger cities, people live more in urban areas than in rural areas:
“Today, 54 per cent of the world’s population lives in urban areas, a proportion that is expected to increase to 66 per cent by 2050.”
http://www.un.org/en/development/desa/news/population/world-urbanization-prospects-2014.html
Considering the general character of political leaders throughout human history, and the dependency of urban dwellers upon them, a higher percentage of people have become more mad – manifested as a general yearning not to live in urban areas while at the same time desiring to live in urban areas [for many practical reasons].
Hence the tendency to worship global warming and other mad ideas as a coping mechanism for ignoring actual problems.

KenB
July 13, 2015 5:09 pm

A better question is why the adjusters/homogenizers did not attempt to change the CET temperatures. Too many people watching? Knew they would get caught out? Knew that any attempt would cause a storm of scientific protest? If those are the answers, then the craven cowards who quietly, behind the scenes, wrecked historical temperature data have a lot to answer for; and over and above that, even worse are those who not only knew what was going on but permitted it to happen.
I live for the day that these – I can’t call them “scientists” – are called to account, each and every one of them.
Miserable miscreants or worse? I leave that to those with the power to bring them forward for an explanation of their motivation.

Crowbar
July 13, 2015 5:58 pm

Winters can arrive early, winters can arrive late. Summers can arrive early, summers can arrive late. Each calendar year begins and ends during winter (northern hemisphere) and summer (southern hemisphere), so the accumulated temperatures for a given calendar year are affected (at beginning and end) by the timing of winters and summers. Surely this adds even greater uncertainty to the calculation of a year’s “hotness” in comparison to another’s?
A difference in annual temperatures may simply be an artefact of the timing of the seasons.
E.g., allow me to invent a very hot year: in my hot year, the NH winter arrived early (in Nov/Dec of the previous year) such that Jan–Mar was quite mild. At the end of the year, the NH winter arrived late (in Jan/Feb of the next year) such that my Nov–Dec was quite mild. Meanwhile, in the SH, summer at the beginning of that year arrived late (Jan–Mar) and at the end of that year it arrived early, so that Oct–Dec was statistically warmer. Averaged out, the 3 years (with my hottest year in the middle) could have been very average, but the middle year – what a statistical stinker it was! Stinking hot, that is.
I’m a novice at this – yet my scientific “gut” says that the moment you segment a time series into months or years, the timing of the seasons will muddy the waters.
Heck, every accountant knows that you can fiddle a year’s financial results by bringing forward or deferring revenue or expenses, depending on the desired result for the financial year. Nature does the same via the timing of the arrival of the seasons.
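
Crowbar’s accountant analogy is easy to make concrete. A toy sketch: the same 20-day cold snap either straddles New Year (both years share the cold) or falls entirely in the second year (so the first year books a “warm” result):

    import numpy as np

    def yearly_means(snap_start):
        t = np.zeros(365 * 2)                  # two flat years, anomaly 0
        t[snap_start:snap_start + 20] -= 10.0  # one 20-day, -10 C cold snap
        return t[:365].mean(), t[365:].mean()

    print(yearly_means(355))  # snap straddles Dec 31: cold is split across years
    print(yearly_means(365))  # snap entirely in year 2: year 1 looks warmer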

Reply to  Crowbar
July 13, 2015 6:09 pm

If you find my post in this thread, that’s exactly what I calculate, as well as summarizing 1940 to 2014 – and there’s not much of a change in temp.

Jeff Norman
July 13, 2015 6:58 pm

Was 2014 the warmest year on record?
Not where I live. The relatively good 76-year record suggests 2014 was cooler than the 30-year average and the 76-year average.
Of course, none of these temperatures have been homogenized or corrected yet.

Joe Bastardi
July 13, 2015 8:09 pm

So NCEP, measuring the Earth’s temp every 6 hours based on the best available data and their gridding system, is out to lunch with these measurements? Why is the actual result of the data on a 6-hourly basis ignored?
http://models.weatherbell.com/climate/cfsr_t2m_2005.png
2014 was not even the warmest year in the last 10.

mobihci
Reply to  Joe Bastardi
July 14, 2015 2:18 am

Hi Joe, your pic shows up as “access forbidden”. Got a public link?

donald penman
July 13, 2015 11:34 pm

The average surface temperature of the earth going up or down does not prove that carbon dioxide caused it; it is only consistent with the theory believed by many climate scientists. If the maximum temperatures measured at any location showed a clear increase as carbon dioxide increases in the atmosphere, then this would be evidence that the mechanism proposed for carbon dioxide is correct; but if we don’t see that rise in maximum temperatures, then it would be evidence that there is no forcing from carbon dioxide. We suspect that temperature records are being altered to show this relationship; I give the case of the Australian temperature record as a recent example.

David Cage
July 14, 2015 12:18 am

My objection is rather different.
I knew someone who kept temperature records near the Lyneham one and consistently got readings lower by about a degree. When the air base closed, the differential dropped to half a degree.
I do not believe that any reading taken in the greater London area can any longer be considered valid as a comparison with earlier ones, and any within five miles of Heathrow is a definite non-starter.
On top of this, one of my friends did a study of the effectiveness of the Stevenson screen as an enclosure given various standards of air cleanliness, and found that cleaner air resulted in a near half-degree increase in measured temperature with Clean Air Act-quality air relative to the air quality measured in the same spot in 1950. Surprisingly, even the air near Heathrow, which seems disgustingly oily-smelling to me, is better than the average for the area in the fifties.
It is an unfortunate fact that the research value of data collection is low for a scientist, so little or no effort is put into making sure it is of even low commercial grade, let alone the top quality that should be required for science justifying multi-billion-pound subsidy policies.

July 14, 2015 1:13 am

Even with the margin of error (0.1°C), 2014 would still be a top-10 year. The strong 1998 El Niño added more than 0.2°C of global warming.
So yes, with the margin of error and the El Niño warming, you can compare 1998 and 2014.
But you could also look at it the other way. The margin of error could be +0.1°C for 2014, putting it at +0.78°C. Without El Niño, 1998 could have been 0.2°C lower, at +0.42°C.
So: +0.78°C in 2014; +0.42°C in 1998. And by the way, the year-to-date anomaly is +0.76°C in 2015, and El Niño has not peaked yet.
I’m sorry, but from a statistical point of view you can say 2014 was warmer than 1998.

mobihci
Reply to  Johan Lorck
July 14, 2015 2:28 am

Honestly, what is the point of having an error margin smaller than the adjustments made to the raw data?
First, you accept there is an error of x.xx in the raw data that must be ‘adjusted’ out; then you turn around and say that your result has more accuracy. In other words, you believe in yourself – nothing to do with the actual data, just your ‘adjustment’. So you just dream up accuracy? What world do you people live in?!

WPeszko
Reply to  mobihci
July 14, 2015 3:16 am

The point is to have a small error margin. First, I accept there’s a SYSTEMATIC error that must be adjusted out. Obvious.

richardscourtney
Reply to  mobihci
July 14, 2015 5:58 am

WPeszko:
You say

The point is to have a small error margin. First, I accept there’s a SYSTEMATIC error that must be adjusted out. Obvious.

In a science lab. the point is to understand the data and what it indicates.
Therefore, in a science lab. when a SYSTEMATIC error is detected then the cause of the error is corrected and the measurement is repeated; failing ability to do that then, in a science lab., the total inherent error range is reported.
So, in a science lab., an error is corrected or reported and is never – not ever – “adjusted out”.
In a political assertion any error is excused as having been “adjusted out”. Obvious.
Richard

WPeszko
Reply to  mobihci
July 15, 2015 3:57 am

@richardscourtney
When a SYSTEMATIC error is detected, and if it’s possible to correct it, it’s corrected. If not, it’s adjusted out. How do you correct Earth rotation?

richardscourtney
Reply to  mobihci
July 15, 2015 4:30 am

WPeszko:
You ask me

@richardscourtney
When a SYSTEMATIC error is detected and if it’s possible to correct – it’s corrected. If not, it’s adjusted out. How do you correct Earth rotation?

I don’t “correct Earth rotation”. Any scientist would define, measure and report “Earth rotation” together with an error estimate for the measurements.
How do you correct “Earth rotation”; with a hand-brake?
I repeat what I said and you are commenting but failing to dispute.

In a science lab. the point is to understand the data and what it indicates.
Therefore, in a science lab. when a SYSTEMATIC error is detected then the cause of the error is corrected and the measurement is repeated; failing ability to do that then, in a science lab., the total inherent error range is reported.
So, in a science lab., an error is corrected or reported and is never – not ever – “adjusted out”.
In a political assertion any error is excused as having been “adjusted out”. Obvious.

Richard

WPeszko
Reply to  mobihci
July 15, 2015 7:26 am

“Any scientist would define, measure and report “Earth rotation” together with an error estimate for the measurements.”
Then the final value would be adjusted for that. It’s just your imagination that a systematic error is not adjusted out; there is nothing to dispute.

richardscourtney
Reply to  mobihci
July 15, 2015 9:50 am

WPeszko:
You make an unacceptable assertion in response to me

“Any scientist would define, measure and report “Earth rotation” together with an error estimate for the measurements.”
Then the final value would be adjusted for that. It’s just your imagination that a systematic error is not adjusted out; there is nothing to dispute.

NO! ABSOLUTELY NOT! HOW DARE YOU!?
I said and I repeated

In a science lab. the point is to understand the data and what it indicates.
Therefore, in a science lab. when a SYSTEMATIC error is detected then the cause of the error is corrected and the measurement is repeated; failing ability to do that then, in a science lab., the total inherent error range is reported.
So, in a science lab., an error is corrected or reported and is never – not ever – “adjusted out”.

The fundamental principles of metrology are NOT my “imagination”. It seems that you don’t know what is meant by defining a parameter and defining a measurement method. When a method provides a result then that result is a datum which is reported together with its method of determination and an estimate of its error range.
When measurements have been conducted then no scientist would “adjust” the obtained data: only a charlatan would do that.

Richard

Wojciech Peszko
Reply to  mobihci
July 15, 2015 11:47 am

“When measurements have been conducted then no scientist would “adjust” the obtained data: only a charlatan would do that.’
Somehow you assumed that adjusting data for errors means the original data is lost/not reported. Therefore you’re arguing now with yourself, not me.

richardscourtney
Reply to  mobihci
July 15, 2015 12:12 pm

WPeszko:
I am “arguing” with nobody. I am telling you that you are wrong.
You are refusing to accept or discuss what I have written.
You now say

Somehow you assumed that adjusting data for errors means the original data is lost/not reported. Therefore you’re arguing now with yourself, not me.

That ignores the fact that in climate science the original data IS “lost”.
Importantly, no scientist would “adjust” data because she thinks it has errors: only a charlatan would do that. A scientist would attempt to repeat the measurement in a manner that does not include the error or – failing ability to do that – would report the suspected source of error along with the data and estimated error range.
Richard

Wojciech Peszko
Reply to  mobihci
July 15, 2015 11:49 pm

1. Data isn’t lost just because WUWT says so.
2. Every scientist would adjust data (final values) given a proven systematic error. I’m happy you finally agreed with me: “would report the suspected source of error along with the data and estimated error range.” And then would report both raw and adjusted data.
3. You say you’d correct Earth rotation. How?

Reply to  mobihci
July 16, 2015 1:26 am

Dear richardscourtney,
The guide to the expression of uncertainty in measurement says the following:
3.2.3 Systematic error, like random error, cannot be eliminated but it too can often be reduced. If a systematic error arises from a recognized effect of an influence quantity on a measurement result, hereafter termed a systematic effect, the effect can be quantified and, if it is significant in size relative to the required accuracy of the measurement, a correction (B.2.23) or correction factor (B.2.24) can be applied to compensate for the effect. It is assumed that, after correction, the expectation or expected value of the error arising from a systematic effect is zero.
http://www.bipm.org/en/publications/guides/gum.html
Best regards,
John

Reply to  mobihci
July 16, 2015 2:52 am

WPeszko: Just to clarify, it’s not WUWT that says the data is lost.
It’s Phil Jones – the guy who lost the data.
Here is an interview with Nature where he acknowledges that he shouldn’t have done that.

richardscourtney
Reply to  mobihci
July 16, 2015 3:21 am

John Kennedy (@micefearboggis):
Yes. Thankyou.
As your quotation says

If a systematic error arises from a recognized effect of an influence quantity on a measurement result, hereafter termed a systematic effect, the effect can be quantified and, if it is significant in size relative to the required accuracy of the measurement, a correction (B.2.23) or correction factor (B.2.24) can be applied to compensate for the effect.

I explain this, together with an analogy to aid comprehension, in Appendix B of this.
Of course, such a correction is nothing like the “adjustments” to climate data that occur almost every month, which are also reported in the item I have linked, and which have this effect.
If adjustment of data to match a ‘known’ result were acceptable then there would be no reason to conduct measurements; the ‘known’ result could be stated instead.
Richard

richardscourtney
Reply to  mobihci
July 16, 2015 3:25 am

Wojciech Peszko
You say to me

3. You say you’d correct Earth rotation. How?

I said no such thing!
In fact I ridiculed your asking me how I would do it when I replied

I don’t “correct Earth rotation”. Any scientist would define, measure and report “Earth rotation” together with an error estimate for the measurements.
How do you correct “Earth rotation”; with a hand-brake?

Richard

Walt D.
July 14, 2015 5:12 am

If you look at the satellite data for the lower troposphere, 1998 was clearly the warmest year since we have had actual global temperature data. This makes sense, since it was an El Niño year. If you ask what was the warmest consecutive 12 months, you get the 12 months ending in November 1998. December 2014 ranks 79th. 2010 ranks much higher – October 2010 ranks 9th.

Reply to  Walt D.
July 14, 2015 6:10 am

Satellites exaggerate El Niño effects, and again, you would not use a satellite to get a precise local temp. So why would it be different for a global temp? Satellites are useful for infilling uncovered regions, though.
And UAH long-term trends show nearly the same warming as surface stations; that’s because El Niño and La Niña have a net zero effect (if you omit ocean heat storage).

Walt D.
July 14, 2015 5:22 am

BTW – I understand this is off topic. I posted it just to give a reference point.

RobW
July 14, 2015 8:54 am

For decades 1934 was the hottest year on record. Then, after repeated adjustments, it is no longer even in the running. Not buying it, sorry.

Walt D.
July 14, 2015 9:51 am

If you are going to confine yourself to land-based temperatures, you expect temperature readings to increase over time regardless of any naturally occurring increase in temperatures. In 1880 we did not have airports, parking lots, skyscrapers and all the other things that cause urban heat islands. However, land-based temperatures only cover 1/3 of the globe (much less if you include Siberia and Antarctica). There is also the question of biasing the results to the northern hemisphere, where the landmass is larger.

Reply to  Walt D.
July 14, 2015 10:39 am

If you are going to confine yourself to land-based temperatures, you expect temperature readings to increase over time regardless of any naturally occurring increase in temperatures. In 1880 we did not have airports, parking lots, skyscrapers and all the other things that cause urban heat islands. However, land-based temperatures only cover 1/3 of the globe (much less if you include Siberia and Antarctica). There is also the question of biasing the results to the northern hemisphere, where the landmass is larger.

My focus has been day-to-day changes in temp, daily and annually, always comparing a station to itself. From this I set requirements for the number of readings as well as the number of years of those annual samples. Lastly, I can select stations by area – I’ve done it approximately by continent, by 1×1 degree blocks, and by 10 degree lat bands. The big issue everywhere but the northern hemisphere is poor coverage, even now. But you can still look at the stations and see how much they changed day by day – in particular, comparing how much the temp went up yesterday with how much it dropped last night. That to me seems to be the key to this: did it cool as much last night as it warmed yesterday? And the answer is that, on average, it did.
This is day-to-day change, daily rising temp, and solar forcing. Forcing and rising are scaled to let you see how day-to-day temps changed.
Here’s day-to-day change averaged by station.
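
A sketch of that day-warming versus night-cooling check, assuming a daily per-station table with tmax/tmin columns (a hypothetical schema, not micro6500’s actual code):

    import pandas as pd

    def rise_fall_balance(df: pd.DataFrame) -> pd.Series:
        # df columns: station, date, tmax, tmin (one row per station-day)
        df = df.sort_values(["station", "date"])
        rise = df["tmax"] - df["tmin"]                               # daytime warming
        fall = df["tmax"] - df.groupby("station")["tmin"].shift(-1)  # overnight cooling
        return (rise - fall).groupby(df["station"]).mean()           # ~0 if balanced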

Lars P.
July 15, 2015 7:58 am

If these “very good global temperature data set”(s) did not retroactively adjust the past, I would be more inclined to read and take posts about the “warmest year” more seriously.
Seriously…
http://notrickszone.com/2015/07/07/noaas-data-debacle-alterations-ruin-120-years-of-painstakingly-collected-weather-data/
https://stevengoddard.wordpress.com/2014/08/12/what-part-of-this-isnt-clear-3/

July 16, 2015 12:03 pm

There’s a 38% chance that in 2015 I will meet Sofia Vergara and she will become my love slave.
I consider that a done deal.