A Return to the Question "Was 2014 the warmest year?"

Guest Post by Bob Tisdale

UPDATE: The author of the post has now been listed at the end of the Initial Notes.

# # #

This is a repost of a blog post written by a well-known and well-respected climate scientist. To date, it is one of the best answers I have come across to the often-asked question, “Was 2014 the warmest year?” What sets it apart from most articles is its down-to-Earth discussion of probabilities.

INITIAL NOTES:

  1. This is not a discussion of why 2014 might be warmest. For that, you’ll need to refer to the blog post here.
  2. The data discussed in the following post is the old version of the NCDC data, not the newly revised NCEI data introduced with Karl et al. (2015).
  3. The topic of discussion is surface temperature data, not lower troposphere data.
  4. This is not a discussion of adjustments to surface temperature data. It is also not a discussion of the slowdown in global surface warming.
  5. The basis of the discussion is: given the surface temperature data we had in hand at the end of January 2015, could we say that 2014 was the warmest year?

I would like the content of the post to be the topic of discussion on the thread, not the author. If you know who the author is, or have taken the time to search for the blog in which the following post appears, please do not identify the author by name. Later in the day, I will provide an update with a link to the original post and let you know who the author is.

UPDATE

The author of the blog post is John Kennedy of the UK Met Office. He blogs occasionally at DiagramMonkey. The original post was published on January 31st.

[End preface. The repost follows.]

The question of whether 2014 was or wasn’t the warmest year has recently exercised the minds of many. The answer, of course, is…

No.

At some point in the past, the Earth was a glob of molten rock pummelled by other rocks travelling at the kind of speeds that made Einstein famous, dinosaurs late and a very, very, very loud bang. There have also been periods, more hospitable to life (of various kinds), where global temperatures were in excess of what they are today.

However, if we narrow the scope of our question to the more conventional and cosmically brief period covered by our instrumental temperature record – roughly 1850 to now – the short answer is…

Maybe.

This has been an answer to a Frequently Asked Question on the Met Office website (http://www.metoffice.gov.uk/hadobs/hadcrut4/faq.html) and has been the source of occasional ridicule.

That’s fine.

Obviously, one year was the warmest[1]. In other words, according to some particular definition, the global average of the temperature of the air near the surface of the Earth in 2014 or some other calendar year was higher than in any other. Unfortunately, we don’t know what that number is for any particular year. We have to estimate it[1.5] from sparse and occasionally unreliable measurements. Some of them made with the help of a bucket.

That gap, the gap between the estimated value and the unmeasurable, might-as-well-be-mythical, actual global temperature is the reason for the “Maybe”. This is a common problem familiar to anyone who has attempted to measure anything[2]. If you are unfamiliar with it, ask a room full of people what time it is. You’ll get a range of answers[3]. These answers will be clustered close to the actual time, but not exactly on it. Most people are used to living in this chronological fog of doubt. They allow for the fact that watches and reality never line up precisely.

For global temperature (or any other measurement for that matter) we don’t know exactly how large that gap is, but we can by diverse methods get a reasonable handle on what kind of range it might fall within. Most people’s watches are within five minutes either side of the “right time”. Or, to put it another way, the right time is usually within five minutes either side of what most people’s watches say. That range is the uncertainty.

The good news is that, armed with this uncertainty information for global average temperatures, there are some years for which the answer to the question “Well, what about this year, could this year be the warmest?” is, resoundingly, undoubtedly, 100%: No.

Non. Nein. Niet. Nopety, nopety, noooooo.

The number of years in the global temperature record which definitely aren’t the warmest is quite large. I would go so far as to say, it’s most of them. Here, for your enjoyment, is a list of definitely-not-the-warmest years.

1850, 1851, 1852, 1853, 1854, 1855, 1856, 1857, 1858, 1859, 1860, 1861, 1862, 1863, 1864, 1865, 1866, 1867, 1868, 1869, 1870, 1871, 1872, 1873, 1874, 1875, 1876, 1877, 1878, 1879, 1880, 1881, 1882, 1883, 1884, 1885, 1886, 1887, 1888, 1889, 1890, 1891, 1892, 1893, 1894, 1895, 1896, 1897, 1898, 1899, 1900, 1901, 1902, 1903, 1904, 1905, 1906, 1907, 1908, 1909, 1910, 1911, 1912, 1913, 1914, 1915, 1916, 1917, 1918, 1919, 1920, 1921, 1922, 1923, 1924, 1925, 1926, 1927, 1928, 1929, 1930, 1931, 1932, 1933, 1934, 1935, 1936, 1937, 1938, 1939, 1940, 1941, 1942, 1943, 1944, 1945, 1946, 1947, 1948, 1949, 1950, 1951, 1952, 1953, 1954, 1955, 1956, 1957, 1958, 1959, 1960, 1961, 1962, 1963, 1964, 1965, 1966, 1967, 1968, 1969, 1970, 1971, 1972, 1973, 1974, 1975, 1976, 1977, 1978, 1979, 1980, 1981, 1982, 1983, 1984, 1985, 1986, 1987, 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995, 1996, 1999 and 2000.

Out of a record, which currently runs to 165 years, 149 years definitely aren’t the warmest. To this we can add a few additional years that are distinctly unlikely to be the warmest.

1997, 2001, 2004, 2008, 2011 and 2012.

And, while we’re at it…

1998, 2002, 2003, 2005, 2006, 2007, 2009, 2010, 2013 and 2014.

Pick any one of those years and, more likely than not, it won’t be the warmest year either. Careful readers will have noticed that there is not a single year in all of those 165 years that is unaccounted for; the vast majority of years definitely aren’t the warmest, but even in the small remainder there is no year that is more likely to be the warmest year than not.

We really should have stuck with “maybe” because this is going to take a while to unpick.

Seriously, folks, consider maybe.

No? OK. This is on you.

According to a very good global temperature data set, 2014 was estimated to be 0.56°C above the long-term average. The uncertainty on that estimate is about 0.10°C. In other words, according to that data set there’s about a 95% chance that the true global temperature will be between 0.46°C and 0.66°C. Likewise, we can consider 2010, with an estimated global temperature of 0.53°C and an uncertainty, again, of about 0.10°C. If these were the only two years and this was all we knew, we could calculate the probability that 2014 was warmer than 2010. It’s about 69%. We can also compare 2014 to 2005 (0.56°C vs 0.52°C). In this case 2014 is about 75% likely to be warmer than 2005.
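As a concrete illustration of these pairwise comparisons, here is a minimal sketch in Python. It assumes the estimates are independent Gaussians and that the quoted 0.10°C uncertainty is a 95% (roughly two-sigma) range; those are my assumptions, not the author's, and with them the probabilities land a few points below the quoted 69% and 75%, which suggests the post maps the quoted uncertainty to a slightly smaller standard deviation.

```python
from math import erf, sqrt

def p_warmer(mu_a, mu_b, sigma_a, sigma_b):
    """Probability that A > B for independent Gaussian estimates A and B."""
    # A - B is Gaussian with mean mu_a - mu_b and
    # standard deviation sqrt(sigma_a**2 + sigma_b**2).
    z = (mu_a - mu_b) / sqrt(sigma_a**2 + sigma_b**2)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF at z

sigma = 0.10 / 2  # assume the quoted 0.10 degC is a two-sigma (95%) range
print(p_warmer(0.56, 0.53, sigma, sigma))  # 2014 vs 2010: ~0.66
print(p_warmer(0.56, 0.52, sigma, sigma))  # 2014 vs 2005: ~0.71
```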

However, to work out the probability that 2014 is the warmest year on record, we have to compare it to all the other years at the same time. This is a slightly more involved calculation, so we’ll build up to it. First by asking what’s the probability that 2014 is warmer than both 2010 and 2005.

We’re going to do this using a Monte Carlo method. We’ll take the best estimates for 2014, 2010 and 2005 and use the uncertainties to generate possible “guesses” of what the real world might have done[4]. We’re going to do that thousands of times and count how often 2014 comes out on top.

The probability that 2014 is warmer than both 2010 and 2005 is about 60%, less than the probability that 2014 is warmer than either one or the other separately. If we add 1998 into the mix, then the probability drops even further, to 56%. The more years we add the lower that probability goes. Why does that happen? Simply, each year gets a crack at being warmer than 2014. The more years there are, the higher the chance that just one of them will be warmer. And one year is all it takes.

However, this process doesn’t go on indefinitely. As we move further down the list of warm years, the probability that a year is warmer than 2014 drops rapidly. Soon we get to the point that it’s so unlikely that a year was warmer than 2014 that we can drop it from our calculation and it makes no difference. The probability that 2014 is warmer than 2010, 2005, 1998 and 2013 is 50%. If we compare 2014 to the other nine of the ten warmest years the probability that it comes out on top is about 47%. If we go further down the list than that the probability doesn’t change. 47% is therefore the probability that 2014 is the warmest year on record.
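Here is a minimal sketch of that Monte Carlo, under the Gaussian assumptions of footnote [4]. Only the 2014, 2010 and 2005 anomalies are quoted in the post; the remaining top-ten values below are illustrative placeholders, and the 0.05°C one-sigma uncertainty is my assumption, so the printed probabilities will only roughly track the 47% figure.

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 0.05  # assumed 1-sigma per year, from the quoted ~0.10 degC (95%)

# Best-estimate anomalies in degC. 2014, 2010 and 2005 come from the post;
# the rest are illustrative placeholders, NOT the dataset's actual values.
anomalies = {
    2014: 0.56, 2010: 0.53, 2005: 0.52, 1998: 0.50, 2013: 0.50,
    2003: 0.49, 2002: 0.49, 2006: 0.48, 2009: 0.48, 2007: 0.47,
}

years = list(anomalies)
mu = np.array([anomalies[y] for y in years])

# Draw many plausible "true" anomaly vectors and count how often each year
# comes out on top, exactly as the post describes.
n_trials = 200_000
draws = rng.normal(mu, sigma, size=(n_trials, len(years)))
winners = draws.argmax(axis=1)
for year, prob in zip(years, np.bincount(winners, minlength=len(years)) / n_trials):
    print(year, round(prob, 3))
```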

If we do the same analysis for a different, but equally excellent data set, we’ll get a slightly different set of probabilities, but the basic pattern will be the same. In this case 2014 has about 39% probability of being the warmest year on record.

We can repeat these analyses focusing on other years (is 2010 the warmest? 2005? 1998?) and in each case the probability will be lower than for 2014. That was all a bit tedious, but based on this simple analysis it turns out that no year is more likely than not (greater than 50% probability) to be the warmest year on record. On the other hand, we know that one year has to be the warmest, which is, if you are so inclined, pleasingly paradoxical as questions of probability often are.

We can rephrase the question and ask which year has the highest probability of being the warmest year? The answer based on these two data sets is 2014. As one blogger (I can’t remember who) put it, no year has a better claim.

All of the above needs the rather large caveat: “based on these two data sets” and “based on this particular method”. The probabilities I calculated depend on the data set and on the method. Change either one, change the probabilities. We could look at other data sets, such as those produced by Berkeley Earth (who declared 2014 a tie with 2010 and 2005), or the ECMWF reanalysis (which had 2014 in the top 10% of years in their reanalysis, nominally third warmest). Cowtan and Way look poised to put 2014 in second place. There’s no way to rigorously combine all this information to get a single best answer to any of the questions we might want to ask, but it does underline the fact that there is uncertainty and that it is limited.

For example, there’s no data set of global surface temperature that places 2014 outside the top four years based solely on best estimates. Based on those data sets that have uncertainty estimates, it is very unlikely that 2014 is outside the top 10. It’s quite unlikely that it’s outside the top 5.

So, 2014 was a very warm year. Was it a top 10 year? Yes. A top 5 year? More likely than not. The warmest?

Maybe.

[1] Unless the thought-provokingly fine tuning of various fundamental parameters stretches as far as global-mean temperature. On earth. In the 21st century. This has not, to the best of my knowledge, been previously suggested. You saw it here first, folks.

[1.5] There are lots of different estimates of global temperature and, obviously, in each of those there will be a year that is warmer than any other.

[2] The textbook example is the carpenter’s maxim: measure twice, cut once.

[3] Usually. The exception would be if a large fraction of them recently had cause to synchronize their watches, something that Hollywood would have me believe occurs a short, and presumably well-measured, period before it all kicks off.

[4] To do this we assume that the distribution of errors is Gaussian – the famous bell curve – with mean equal to the best estimate and standard deviation equal to the estimated 1-sigma uncertainty. Errors are considered to be independent from year to year. This is a lot simpler than the real world is, but it will give us an intuition for what’s going on and how uncertainty interacts with rankings. This analysis is also a lot simpler than the one NOAA used. Consequently, the probabilities I get will be somewhat different.

[Update: 4/2/2014 corrected 2005 global temperature anomaly for NCDC’s data set. Was quoted as 0.54 now, more correctly, 0.52]

[Update: 6/2/2015 First, the date immediately above this one is wildly wrong. Second, Lucia pointed out that the mystery blogger who said no year has a better claim was, in fact, Nick Stokes. Third, Significance has reposted this here.]

175 Comments

The Ghost Of Big Jim Cooley
July 13, 2015 4:16 am

Was 2014 the warmest year?
We don’t know. Some people think they know. Some people think we know all about climatology. Whenever I have had to discuss climate with someone who wanted my opinion, I say we don’t know. So we cannot possibly know if 2014 was the warmest year since records began. Anyone who says it definitely was, is a liar.

The Ghost Of Big Jim Cooley
Reply to  The Ghost Of Big Jim Cooley
July 13, 2015 4:23 am

I left finding out who it was until after I wrote. Now I know. The actual title was ‘Was 2014 the warmest ever year’. Even worse. So this person’s article title is at variance with the text.

The Ghost Of Big Jim Cooley
Reply to  The Ghost Of Big Jim Cooley
July 13, 2015 4:44 am

Ah. The original would cost me a minimum of $6, it seems.

The Ghost Of Big Jim Cooley
Reply to  The Ghost Of Big Jim Cooley
July 13, 2015 5:05 am

Roger that, Bob.

Editor
Reply to  The Ghost Of Big Jim Cooley
July 13, 2015 5:50 am

I did a Google search for a string that yielded two hits – this post and one other, that did not have “ever” in the title and is free. Still haven’t found the $6 reference! I’ll share my search string, I’m rather proud of it, after Bob provides the link.
My compliments to the original author for a very well written explanation understandable to a wide range of people.

The Ghost Of Big Jim Cooley
Reply to  The Ghost Of Big Jim Cooley
July 13, 2015 7:15 am

Ric, I trust others will respect Bob’s wishes and not click on it, but here is the link. It’s $6 to rent (whatever that is), $15 to see it in a cloud (again, whatever that means), and $38 to download it as a PDF (I know what that is). I Googled it too, but obviously you and Bob know where it hangs out for free. When I Googled it, I left ‘ever’ in, and evidently shouldn’t have.

The Ghost Of Big Jim Cooley
Reply to  The Ghost Of Big Jim Cooley
July 13, 2015 7:18 am

Sorry, will leave link out until Bob completes this thread.

The Ghost Of Big Jim Cooley
Reply to  The Ghost Of Big Jim Cooley
July 13, 2015 7:22 am
Editor
Reply to  The Ghost Of Big Jim Cooley
July 13, 2015 8:50 am

About my search string. I look for “big” “sciency” words that are unlikely to appear in other documents, especially where two or three occur in a phrase. In this case I lucked out with |”Non. Nein. Niet. Nopety”|. Okay, not very sciencey, but nopety would do just fine.
2 results (0.58 seconds)
Search Results
Was 2014 the warmest year? | Diagram Monkey
https://diagrammonkey.wordpress.com/…/was-2014-the-warmest-year/
Jan 31, 2015 – Non. Nein. Niet. Nopety, nopety, noooooo. The number of years in the global temperature record which definitely aren’t the warmest is quite …
A Return to the Question “Was 2014 the warmest year …
wattsupwiththat.com/…/a-return-to-the-question-wa…
Watts Up With That?
5 hours ago – Non. Nein. Niet. Nopety, nopety, noooooo. The number of years in the global temperature record which definitely aren’t the warmest is quite …

rgbatduke
Reply to  The Ghost Of Big Jim Cooley
July 13, 2015 5:24 am

There is a fundamental problem with the analysis, especially extended back to 1850. Specifically, the error estimate for the present is around 0.1, but around is not the same as exact. Furthermore, the basis for the error estimate itself is an estimated basis — it has a number of assumptions built into it and it is not, by any stretch of the imagination, the standard deviation of a set of independent and identically distributed samples drawn from a stationary distribution. It does not have an axiomatic basis — the error estimate itself has biases in it that cannot be independently estimated because they are based on assumptions that cannot be independently tested.
To make this clear, let’s consider HadCRUT4, as it is a dataset I have on hand — including its error estimates. Here is the line for 1850:
1850 -0.376 -0.427 -0.338 -0.507 -0.246 -0.542 -0.211 -0.518 -0.239 -0.595 -0.162
The first number is the “anomaly”. I don’t want to discuss the difficulties of using an anomaly instead of an absolute estimate of global average temperature but IMO they are profound. Nevertheless, it is important to remember that this is what we are doing in the discussion above, Bob, because the uncertainty in the actual global average temperature is “around” 1 C, not 0.1 C. So when the article asserts “warmest year” what it really means is “highest anomaly” computed “independently” of the actual global average temperature which is paradoxically much less precisely known.
The last two numbers are the supposed lower and upper bounds on the temperature estimate. One has to assume that these bounds are some sort of “95% confidence” interval, but of course they are not, not really, because the error estimate is not based on iid samples and hence there is no particularly good reason to think that the central estimate is normally distributed relative to the true temperature, oops, I mean “anomaly”. It is also the case that the other entries are supposedly error estimates as well that are somehow combined into the last two numbers, and hence the uncertainty in the uncertainties is likely compounded. Nevertheless, we see that the anomaly in 1850 could have been as low as -0.595 and as high as -0.162. A bit of subtraction and we see that HadCRUT4 estimates the anomaly in 1850 to be -0.376 ± 0.216 with approximately symmetric error estimates. 0.216 is not particularly close to 0.1 — in fact it is over twice as large.
Let’s consider the line for 2014:
2014 0.555 0.519 0.591 0.532 0.578 0.456 0.654 0.508 0.603 0.445 0.666
This line may not be current — they keep tweaking the numbers as the next global meeting to address global warming draws near — but it is what I downloaded at my last opportunity. Note that the anomaly is pretty close to 0.555 ± 0.110. Each year comes with its very own error, and the errors vary from 0.08-ish to 0.12-ish in the 2000s and not quite twice that in the 1800s.
This is a serious problem. Error estimates for 1850 of only 0.2 C compared to contemporary error estimates of 0.1 C are simply not credible. They are in-credible. One, the other, or both are absurd. To put it bluntly, there is no way in hell that we know the global average temperature, or the global average temperature “anomaly”, almost as precisely in 1850 as we do today (where “almost as precisely” means within a factor of 2 in the error estimate). For one simple thing, a rather enormous fraction of the Earth’s surface was still terra incognita in 1850. Phenomena such as El Nino and the Pacific Hot Spot that dominate the temperature estimates for 2014 would have passed unmeasured in 1850 — El Nino itself had not yet been observed or named. Antarctica was basically totally unexplored. The North Pole — far more accessible than the South — was not reached until the 20th century, although attempts to reach it date back into the 19th. Africa, South America, much of Australia, western North America, Siberia, China, Southeast Asia, and the bulk of the Pacific and South Atlantic Ocean — rarely visited to totally unexplored, and certainly not routinely sampled with reliable equipment and methodology for temperature. Look how much NOAA changed its anomaly this year on the basis of “corrections” to SSTs measured by ships (ignoring the one source of truly good data, the ARGO buoys). Now imagine the measurements being made in wooden sailing ships by indifferent ship masters along whatever sea routes happened to be travelled in any given decade.
In my opinion the error estimates for the anomaly in the 19th century are understated by at least a factor of 3. The error estimates for the first half of the 20th century are understated by a slightly smaller but still large factor, perhaps around 2. I’m not entirely happy with error estimates of 0.1 C for contemporary measurements — not given the long list of “corrections” that have been and continue to be made that produce variations of this order and the spread in the different anomaly estimates. This might be a standard deviation (if this has any meaning in this context) but it certainly is not a 95% confidence interval, not with a spread of anomaly estimates that differ by this general order.
All of this becomes painfully obvious if one actually looks at and compares global average temperature estimates instead of anomalies. We do not know the current global average temperature within a full degree C, not at 95% confidence. The temperature record we have is sparse over much of the globe today, although with ARGO it is finally starting to become less sparse. This record has been “adjusted” to within an inch of its life, to the point where if one plots the adjustments against carbon dioxide level in the atmosphere, they are linearly correlated with $R^2 \approx 1$, which a sensible person would interpret as (literally) statistically incontrovertible evidence of substantial bias in the adjustment processes used. Because it is impossible to use it to form an accurate estimate of global temperature, it is manipulated to return an “anomaly” with respect to an arbitrary and supposedly self-consistent baseline that itself is only known to some precision.
I agree with Nick’s assertion that perhaps no year has a better claim than 2014, but I have to categorically reject the assertions of precision in the computation of probabilities. The claim of 2014 is nowhere near 40% likely to be correct. I’d be amazed if it were 5% likely to be correct.
rgb
[Well, they did pay (a lot of money) for those adjustments to the modern temperature records, but $R^2 \approx 1$ ?? 8<) .mod ]
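The subtraction above is easy to reproduce. A minimal Python sketch, assuming (as the comment does, without checking the HadCRUT4 documentation) that the first value after the year is the anomaly and the last two values are the lower and upper bounds:

```python
def anomaly_and_half_width(line):
    """Parse a HadCRUT4 annual line as described in the comment above.

    Assumes the first value after the year is the anomaly and the last
    two values are the lower and upper bounds of the outer interval.
    """
    fields = [float(x) for x in line.split()]
    year, anomaly = int(fields[0]), fields[1]
    lower, upper = fields[-2], fields[-1]
    return year, anomaly, (upper - lower) / 2

lines = [
    "1850 -0.376 -0.427 -0.338 -0.507 -0.246 -0.542 -0.211 -0.518 -0.239 -0.595 -0.162",
    "2014 0.555 0.519 0.591 0.532 0.578 0.456 0.654 0.508 0.603 0.445 0.666",
]
for line in lines:
    print(anomaly_and_half_width(line))
# (1850, -0.376, 0.2165) and (2014, 0.555, 0.1105): the 1850 interval is
# only about twice as wide, which is the comparison being questioned above.
```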

John Peter
Reply to  rgbatduke
July 13, 2015 6:21 am

Sounds right to me. I wonder what Steinbeck would have said to the depression of the thirties temperatures in current surface temperature records.

Old'un
Reply to  rgbatduke
July 13, 2015 6:50 am

A great post. It is a global tragedy that the Climate Science community is blinded to the use of plain common sense in data analysis, by its obsession with supporting the CAGW hypothesis. Far from ‘saving the planet’, their blindness is likely to cost its inhabitants dear.

ferdberple
Reply to  rgbatduke
July 13, 2015 6:52 am

The uncertainty on that estimate is about 0.10°
==============
that was also the first thing that struck me. the uncertainty in the global average temperature is not known. if it was anything like 0.10° there would be no need for the ongoing adjustments.

Reply to  rgbatduke
July 13, 2015 7:19 am

As usual you do a great job of explaining why the Emperor’s clothes are not as regal as described, in fact they are worn and dirty tattered scraps, yet they parade him around and expect us to be in awe of the regal wear.

Harry Passfield
Reply to  rgbatduke
July 13, 2015 8:11 am

RGB: Just wanted to say how much I do enjoy reading your posts. OK….reading/reading/reading – based on my stats level… 🙂

Latitude
Reply to  rgbatduke
July 13, 2015 8:13 am

or to put it in common sense terms…
The uncertainty is also an estimate..
…which throws out the baby and the bath water

Reply to  rgbatduke
July 13, 2015 9:42 am

“The first number is the “anomaly”. I don’t want to discuss the difficulties of using an anomaly instead of an absolute estimate of global average temperature but IMO they are profound.”
We use absolute temps. Not a problem, easy peasy. Your concerns about anomalies…
not so profound.
Theorizing aside, break out the keyboard and real data. Demonstrate conclusively that it’s “profound”.
It’s not.

Reply to  Steven Mosher
July 13, 2015 10:37 am

We use absolute temps

I must not understand this, as you’ve told me many times not to average absolute temps (which while I do, I don’t do anything with them, I mostly use day to day differences).
Maybe (m a y b e , yep got all the letters) you can explain this, in real sentences, if you don’t mind 🙂

Reply to  rgbatduke
July 13, 2015 9:44 am

Reading this comment by rgbatduke and the post on statistics from yesterday reminded me how a person’s intuition, while not explainable and hardly ever articulate, can still sort out the BS. It is comforting to see others finding problems with much of what is being passed off as climate science. Any time rgbatduke offers up some thoughts they are a worthwhile read. And then reading all the comments from the Robust statistics post was just icing on the cake. A delicious offering for a rainy morning.

GeneDoc
Reply to  rgbatduke
July 13, 2015 10:21 am

Lovely rant rgb. Love it. Specious accuracy (and precision)! Downright delusional thinking by people who really oughta know better. It’s really a completely ridiculous task, computing (and re-computing and kriging and nudging) a global average temperature. Even ARGO is too sparsely sampled, but the powers that be need a number. Wouldn’t it be great if we had a temperature series from satellites? Oh. Wait.

PaulID
Reply to  rgbatduke
July 13, 2015 11:39 am

Thank you, sir. While I assume that you normally communicate with people with a higher level of education than I have, your comment was worded perfectly for the common man to understand. Remembering that not everyone who reads this has had a college-level education is greatly appreciated.

Patrick B
Reply to  rgbatduke
July 13, 2015 12:56 pm

Well said. My complaint from day 1 has been the failure of the climate community to recognize proper error analysis. Without it, all the data, analysis and theories are useless. We do not know – it’s that simple.

DD More
Reply to  rgbatduke
July 13, 2015 3:55 pm

rgb “All of this becomes painfully obvious if one actually looks at and compares global average temperature estimates instead of anomalies. ”
When NCDC did try to use / compare temperature estimates they got this. And remember 1998 was the spike year.
Current – The combined average temperature over global land and ocean surfaces for May 2015 was the highest for May in the 136-year period of record, at 0.87°C (1.57°F) above the 20th century average of 14.8°C (58.6°F).
(1) The Climate of 1997 – Annual Global Temperature Index: “The global average temperature of 62.45 degrees Fahrenheit for 1997” = 16.92°C.
http://www.ncdc.noaa.gov/sotc/global/1997/13
(2) 2014 annual global land and ocean surfaces temperature “The annually-averaged temperature was 0.69°C (1.24°F) above the 20th century average of 13.9°C (57.0°F)= 0.69°C above 13.9°C => 0.69+13.9 = 14.59°C
http://www.ncdc.noaa.gov/sotc/global/2014/13
14.8 >> 16.92 << 14.59
Which number do you think NCDC/NOAA thinks is the record high? Failure at 3rd grade math, or failure to scrub all the past? (See the ‘Ministry of Truth’, 1984.)

Reply to  rgbatduke
July 13, 2015 8:05 pm

“I must not understand this, as you’ve told me many times not to average absolute temps (which while I do, I don’t do anything with them, I mostly use day to day differences).
Maybe (m a y b e , yep got all the letters) you can explain this, in real sentences, if you don’t mind :)”
Like I said, we USE absolute temps and we produce a record that is ABSOLUTE temps.
But we don’t average, and neither should you, if you want a best estimate.
So, do not average average temperatures. USE them.
The approach is simple; Willis showed you one approach in his post on temperature and latitude.
In words: you have a sample of measurements at lat and lon and elevation.
The goal of spatial statistics is to predict the temperature at UNSAMPLED locations using ALL
the information you have.
What information do you have: temperature, date, latitude, longitude and elevation.
The first step is to create a model that predicts the temperature at all locations given the information you have.
T = f(lat, lon, elevation, time)
If you do that regression you will see that over 90% of the variance is explained by those variables.
That means you can predict the temperature at unsampled locations… just stick in lat, lon and elevation
and the time.
This surface we call “the climate”. It never changes. It’s deterministic. Google Koppen and you will have an idea about what we mean by climate: the temperature at a location and time of year. Think tropical climate.
When you build this predictive model there will be an error between what your model says and what the data says. That’s called a residual.
That residual contains the “part” of the temperature that is “random” or changing. This is the weather.
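A minimal sketch of the kind of regression described above, using synthetic station data in Python. The functional form, coefficients and noise level are illustrative assumptions, not BEST’s actual model; the point is only that a least-squares fit splits the data into a deterministic “climate” surface and a residual “weather” term.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic station data: latitude (deg), elevation (km) and a "true"
# climate surface plus weather noise. Purely illustrative numbers.
n = 2000
lat = rng.uniform(-60, 70, n)
elev = rng.uniform(0, 3, n)
temp = 30 - 0.4 * np.abs(lat) - 6.5 * elev + rng.normal(0, 2, n)

# Fit T = f(lat, elevation) by least squares: the fitted surface is the
# "climate", the residual is the "weather" in the comment's terms.
X = np.column_stack([np.ones(n), np.abs(lat), elev])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
fitted = X @ coef
resid = temp - fitted

explained = 1 - resid.var() / temp.var()
print(f"variance explained: {explained:.1%}")  # well over 90% here

# Predict at an unsampled location: 45N at 0.5 km elevation.
print(coef @ [1, 45, 0.5])
```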

rgbatduke
Reply to  rgbatduke
July 14, 2015 5:49 am

We use absolute temps. Not a problem, easy peasy. Your concerns about anomalies…
not so profound.

Let’s just say that I am skeptical about the precision of the anomaly compared to the precision of the absolute temperature. Let me see if I can find the NASA page on this, I think I bookmarked it… here:
http://data.giss.nasa.gov/gistemp/abs_temp.html
This is, as far as I can tell, Hansen’s own apologia for the fact that the model spread of global average surface temperatures is around 1 C at the same time that the anomaly is supposedly known to order of 0.1 C. Note well that everywhere else in the science of measurement, susceptibilities/differences are known to less precision than the absolute quantities, for the simple reason that one has to add the errors when differencing two things, plus the fact that one loses (relative) precision rapidly when subtracting two large quantities to make a small quantity.
This sort of thing is also a featured topic, BTW, in the book How to Lie with Statistics. Seriously. If the graph of surface temperature was presented to scale, the entire anomaly couldn’t be resolved to much better than the thickness of the line used to print the graph, as evident here:
http://www.woodfortrees.org/plot/hadcrut4gl/from:1850/to:2015/offset:287/plot/hadcrut4gl/from:1850/to:2015
Considerably less “alarming”, eh? Especially if one adds a 95% confidence interval around the global absolute temperature of roughly plus or minus 1 C (if the model spread of only a handful of models is a full degree C) — the line would be over twice as thick as the total variation.
So the real question is whether or not we can reasonably believe that thermometers measure temperature differences more accurately than they measure temperatures, especially given the average of the latter over many, many thermometers. Maybe. But an order of magnitude reduction in the error — indeed, the presentation of the error at a scale vastly smaller than the observed variance in the data — not so easy to believe.
And in any event, an error of 0.2 C in 1850? Care to comment on that one? IMO that is absolutely, completely absurd. You’ve looked at the data, what do you think? Given the huge blank spaces on the map, either measuring the temperature in those spaces does not matter (much) to computation of the global average whatever, or else this assertion of error is absurdly wrong. I would value your comment on this particular observation.
rgb

RACookPE1978
Editor
Reply to  rgbatduke
July 14, 2015 6:03 am

Sobering summary of the problem.
Politics aside – although political control is the very center and the ultimate cause of the problem’s “solutions” – Hansen’s desperation to “paint the globe” in red temperatures has led him to his analysis methods and his distorted Mercator maps.

Reply to  rgbatduke
July 14, 2015 6:26 am

So the real question is whether or not we can reasonably believe that thermometers measure temperature differences more accurately than they measure temperatures

What I decided to do because of this is use a single station’s thermometer as the reference thermometer to calculate the change at that station. Its absolute value is questionable, even the calibration for change is questionable, but it’s likely the most accurate method to determine change at the station. So I calculate a difference value from the min and max values that station records: Tmnday2 – Tmnday1 = MnDiff1, Tmnday3 – Tmnday2 = MnDiff2, Tmxday2 – Tmxday1 = MxDiff1, Tmxday3 – Tmxday2 = MxDiff2, (Tmxday2 – Tmnday2) = Trise1, (Tmxday2 – Tmnday1) = Tfall1.
Then I select specific stations based on samples collected by year, and by area.
I think this is a superior method to all of the other temp series. I do no infilling, no homogenizing. The results for an area aren’t a field average (like BEST provides), which requires estimating a value for the entire area, most of which is unmeasured. My results are the values the stations recorded for that area. I think this is the best that can be done with surface station data, and it tells a different story: it tells us that no matter the effect of CO2, the planet is able to cool. In fact, for the most part, global averages as recorded at the surface stations haven’t really changed a lot; heat has moved around, and I think this, with all of the infilling, creates a warming trend that isn’t there.
This is the average day to day change, rising temp (as a proxy for temperature), and calculated solar forcing at each station. [image]
Now average diff is the average of all of the day to day changes recorded. I added an annual average for each station (as opposed to by day) and the number of stations, so you can see how the number of stations has changed, to rgb’s point about past error. [image]
All 7,000 stations in 2014 show significant cooling (if you divide the annual value by the number of stations, you get the same daily difference value).
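A minimal sketch in Python of the day-to-day differencing described above, with made-up numbers. Variable names follow the comment; the Trise/Tfall indexing is my reading of the comment rather than the commenter’s actual code.

```python
import numpy as np

# Hypothetical daily min/max series for one station (degF), for illustration.
tmin = np.array([51.2, 52.0, 50.5, 49.9, 53.1])
tmax = np.array([72.4, 74.0, 71.8, 70.2, 75.5])

# Day-to-day changes: MnDiff1 = Tmnday2 - Tmnday1, MxDiff1 = Tmxday2 - Tmxday1, ...
mn_diff = np.diff(tmin)
mx_diff = np.diff(tmax)

# Same-day rise and overnight fall, per the comment's Trise/Tfall idea.
t_rise = tmax - tmin           # Trise_n = Tmxday_n - Tmnday_n
t_fall = tmax[:-1] - tmin[1:]  # Tfall_n = Tmxday_n - Tmnday_{n+1}

# The station's trend proxy is then the average (or sum) of these differences.
print(mn_diff.mean(), mx_diff.mean())
```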

Reply to  rgbatduke
July 14, 2015 6:03 am

RGB: “indeed, the presentation of the error at a scale vastly smaller than the observed variance in the data — not so easy to believe.”

http://qualityamerica.com/images/ebx_-2138538623.jpg

rgbatduke
Reply to  rgbatduke
July 14, 2015 6:47 pm

Joel D. Jackson
July 14, 2015 at 6:03 am

For independent, identically distributed (iid) samples drawn from a common, stationary distribution.
Now try again, where the samples are not independent, are not identically distributed, are not stationary, and are not drawn from a common distribution. Also compare the variance year to year and model to model. Finally, define N.
rgb

Reply to  rgbatduke
July 14, 2015 7:12 pm

I’m sorry RGB. I was under the impression you understood some of the statistical underpinnings of measurement theory.
“N” is the number of observations used in estimating the population mean.

The higher the number of observations, the lower your standard error for a given sigma.

taz1999
Reply to  rgbatduke
July 15, 2015 1:45 pm

Well, it’d have been a crying shame to pay for the adjustments and not get some type of record. At least there is some ROI.

July 13, 2015 4:17 am

This crucial conference in Addis Ababa quietly occurring this week http://www.un.org/esa/ffd/ffd3/conference.html is the true reason for hyping 2014 temps so forcefully. We need to be watching since we are the ones on the menu and financing the whole gourmet dinner too.

July 13, 2015 4:31 am

The Surface Temperature Data Sets cited by the author are NOAA and GISS.
However, it is evident that:
NOAA .EQ. CRUD .AND. GISS .EQ. CRAP
MAYBE 1934 was the warmest year since 1850.
Earth was certainly warmer during the Medieval Warm Period and the Roman Warm Period.
There is nothing unusual about global average temperature in 2014.

Paul Westhaver
July 13, 2015 4:34 am

2014 was arguably not the warmest year.
Also, so what if it was? The earth generally has been warming since the last ice age.
Is the earth warming (maybe, but it seems nope) due to CO2 emissions that are a consequence of only human activity? No.
Would it be great if the earth warmed? Yes, but it isn’t.
The CAGW issue is a socialist ploy to justify a new planetary tax to move money from the rich to the UN who say they will use it for the poor. A lie to service a lie in a get rich quick scheme.

Reply to  Paul Westhaver
July 13, 2015 9:43 am

what letter in “maybe” is confusing you?

Reply to  Steven Mosher
July 13, 2015 10:44 am

That is maybe why he used the word “arguably”?

old construction worker
July 13, 2015 4:40 am

According to a few scientists we are heading into another LIA by 2030. Well, maybe. Like most of us, they do not have a crystal ball or all of the data needed to make that judgment. It may take another 6000 years of observations to test that prediction. All I know is every time the best physicists model the universe, the models are not quite right. The universe keeps throwing them a curve ball.

daveandrews723
July 13, 2015 4:40 am

Unfortunately, reality does not matter in the “man-made global warming/climate change” debate. Only perception matters. And that is why this will go down as a very dark period in the history of science… perception is more important (even to the scientists) than reality.

commieBob
July 13, 2015 4:44 am

Why is nobody working on explaining the difference between the surface temperature record and the satellite and balloon records?
The blogger didn’t mention the satellite record. The satellite record has to be much more reliable than any of the surface station data sets.
When the surface temperature needs infilling, why don’t they use the satellite record to inform the decision? I’m not saying to use the satellite data directly but it should be possible to use it to generate a delta from a surface station.
This whole thing is garbage piled on trash piled on rubbish piled on …

Editor
Reply to  commieBob
July 13, 2015 5:58 am

I agree that the satellite record is more accurate overall than the ground record, but they measure different things. Most notable is the lag in the satellite record of developing El Ninos. Not too important when comparing a year’s worth of data. Also, the target audience is a much wider set of people than those who understand the multiple datasets and biases among them.
As for your original question, I’m inclined to think that gov’t funded research is aimed at providing the best bogus data for the Paris COP money can buy.

Editor
Reply to  Ric Werme
July 13, 2015 6:06 am

Better answer – my test for articles written for the general public was “Would my mother understand it?”
If commieBob incorporated multiple datasets into the essay or if rgbatduke used the greater error of older measurement, the result would be far less understandable to her.
I spent years (on and off!) figuring out how to explain what a computer software “race condition” was (Mom didn’t have a home computer) and was insufferably pleased with myself when I came up with a good analogy.

Reply to  commieBob
July 13, 2015 10:01 am

“Why is nobody working on explaining the difference between the surface temperature record and the satellite and balloon records?”
1. They are two entirely different beasts.
2. The surface record is a combination of SAT (2-meter thermometer records) and SST
— a pastiche of bucket, buoy, hull intake, etc.
3. The satellite record is a conglomeration of multiple sensors over a short time period. It is NOT
a direct measurement of temperature. Temperature is DERIVED using a physics model and
SIMPLIFYING ASSUMPTIONS about the atmosphere. It is adjusted in some cases by a GCM.
4. The surface record is a non-random sampling of minimum temps and maximum temps.
5. The satellite record isn’t min and max. The sensor has two equator crossings (ascending and descending) and different patches of the earth are sampled at different times. Not min and max.
6. Balloons are super sparse. They are done in very few locations.
The bottom line is that these two records “measure” different things in different ways. They both
are constantly being revised and improved. The satellite records have changed more than other records.
That alone should tell you something about structural uncertainty.
Further, IF they matched, that ALONE would tell you NOTHING of interest. Similarly, if they don’t match, that ALONE will tell you nothing.
For example, suppose you found that the satellites warmed at 1C while the surface warmed at 2C.
What does that tell you?
Nothing more than this: the two records disagree.
You might argue, for example, that this difference MEANS that the surface record is infected by UHI.
However, that’s an inference and not a fact. That’s an explanation of the difference; however, there are other explanations. Figuring out WHICH explanation is correct is not straightforward.
So,
why is nobody working on explaining the difference between the surface temperature record and the satellite and balloon records?
1. Because nothing much turns on the question.
2. Because any answer to the question is going to be MORE questionable than the records themselves.
3. Because it’s really fricking hard work for small scientific return.
That said, there is some work that has been done, and I have a bunch of work on the topic. Nothing worth publishing.

Reply to  Steven Mosher
July 13, 2015 7:18 pm

Mosher, these temperature anomalies that so worry people are a statistical artefact calculated using dodgy data and dodgy methods. The temperature standard for us humans is 98.4F, but that varies widely even in healthy people. The thing I noticed with these temperature data sets is that some of them use a different temperature start point, which is rather odd, but that said, they guess the temperature of the world and work from there.
This means that the anomalies mean nothing and the average start temperature could be seriously wrong. Taking into account the proven warm periods in our history, maybe we should add a degree or two to the start temperature; this would mean we are running a serious deficit. I might suggest that we are about a degree colder than what is conducive to a happy world.
Believe nothing of what you hear, nothing of what you read and only half of what you see; in this way, over time, truth can filter in to the grey matter.

rgbatduke
Reply to  Steven Mosher
July 14, 2015 6:55 pm

All pretty reasonable statements. OTOH, it is worrisome if the satellite record and surface temperature record systematically diverge, as this seems as though it would eventually violate various principles of physics. Indeed, it is worrisome that the existing, growing divergence is strictly in the direction of relative warming of the surface record.
Don’t confuse a random variation of one over and under the other — fluctuating around a common trend — with a systematic, growing deviation, as they are not the same beast, and the “scientific return” isn’t necessarily small for explaining it. Especially given the truly excellent correlation between CO_2 increases, the adjustments in the surface temperature record, and the deviation. One might well be forgiven for seeing the triple play here as pretty much certain evidence of bias in the corrections.
Or not. In terms of p-values, though, or confidence intervals (take your choice), the null hypothesis that both are accurate unbiased representations is pretty unlikely.
rgb

Reply to  rgbatduke
July 14, 2015 7:06 pm

I forgot about this, and IIRC when I posted this before it might have been Mosh who said it didn’t mean much.
But I thought this looked a lot like the satellite temperature series. [image]
Now it isn’t going to be exact, it’s land based only, but if people agree it is a decent match for satellite, I’ve answered Mosh’s question about why they’re different: it’s the published surface records’ processing, which I’ve been saying for a while now 🙂

David A
July 13, 2015 4:57 am

It is not quite right to ask if 2014 was the warmest year “ever”, and then say the very questionable surface adjustments are out of the equation, as well as the record divergence from the satellite data sets. (not a fair fight at all)

Reply to  Bob Tisdale
July 13, 2015 6:52 am

I applaud your (and the other contributors here at WUWT) work.
I do hope that you understand that it is a one way conversation. Most of the people here know there is no man made warming threatening our survival. There is a cabal of elitists who are threatening our survival and their minions are ideological. Even if you could get them to listen they would not hear what you are saying.
This battle has been going on for some time and where the truth needs to be spoken is in the grade schools and high schools of the west. The ideologues have them (the youth) now and if the battle isn’t brought there science will lose.
http://thefederalist.com/2015/07/06/the-new-totalitarians-are-here/#.VZ7vinx34HA.mailto
the above link illustrates what common sense is up against. It isn’t only the weather they try to control, they want, they insist that you fall in line or else they will attack you.
FACTS mean nothing to them.

David A
Reply to  Bob Tisdale
July 13, 2015 7:31 am

I do understand. However my objection is to the question the discussion purports to answer: was 2014 the warmest year ever? And IMV, that question cannot be answered by limiting the discussion to surface data sets, and ignoring UHI adjustments etc, etc. The question may be more accurately phrased as: within the error margins of the surface data sets, and blindly accepting all the adjustments as valid, was 2014 the warmest year ever? (The answer is informative about the scientific process and error margins, but it does not answer the original question.)
The surface and the satellites have historically followed a parallel pattern within a certain spread.
The continuously growing divergence between them, as well as numerous other factors about the surface data sets, such as the declining number of stations in the data base being used (meaning greater homogenization, as one example), all point to the divergence as evidence of increasing error margins within the surface data.
I do appreciate the focus on some aspect of error margins, but I do object to it in any way answering the question, “was 2014 the warmest year ever?”
Best regards
David A

David A
Reply to  Bob Tisdale
July 13, 2015 7:36 am

Correction to above post. Yes, I know the question is “Was 2014 the warmest year ON RECORD”, not “ever”. Perhaps I was just influenced by how the media spins it, but my message remains valid.

gbaikie
Reply to  Bob Tisdale
July 13, 2015 11:15 am

When you count that the Urban Heat Island effect increases the average air temperature of a local region significantly, and when you consider that more than 1/2 of humans live in regions with this significant warming from the UHI effect.
Can it be said that for most human beings, the air outside their homes in 2014 had the highest average air temperature since the time humans discovered the use of fire?
One problem is that humans have moved out of tropical regions into cooler regions. Today most humans live in the Temperate Zone, rather than the Tropical Zone, and the Tropical Zone has a significantly higher average temperature than the Temperate Zone.
Also, I don’t know if the UHI effect has as much effect upon the average temperature in the Tropics as it does in the Temperate Zone.
So maybe one could say that, for people living in the Temperate Zone, 2014 or the 21st century has had the warmest years.
And in terms of the Earth’s oceans, which are more related to Earth’s average temperature than to the average temperature humans experience: since the time of the Little Ice Age, which ended around 1850 AD, ocean temperatures have been warming, and are currently the warmest they have been for a couple of centuries. Sea levels have risen, and a large part of this rise is due to the thermal expansion of the warming ocean. The human species, which began in Africa, has lived through many periods when the ocean average temperature has been quite a bit warmer than today’s ocean, such as the last interglacial, the Eemian period. And for animals which don’t live in urban areas, the world over the last few million years has, on average, been cooling, with the uptick since the end of the Little Ice Age as a moderately warmer period.

July 13, 2015 5:03 am

It was not the warmest, not even close, judging by my pepper and tomato crop. And this year looks like it is on track to be below average.

Glenn999
July 13, 2015 5:09 am

Wouldn’t it make more sense to look at smaller geographical areas? Perhaps there are some areas that are the “warmest ever”. But averaging hot spots with cold spots to determine a global number seems to me to degrade the significance of the data. No?
Thanks Bob.

noaaprogrammer
Reply to  Glenn999
July 13, 2015 11:20 am

Also, one could ask: “What is the warmest stretch of 365 days?”
…and using strategically placed instruments: “What is the largest value when integrating a year’s worth of temperature over time?”

Reply to  noaaprogrammer
July 13, 2015 11:34 am

…and using strategically placed instruments: “What is the largest value when integrating a year’s worth of temperature over time?”

Just in case you missed it: [image]
I integrate the day to day change in temp for stations with a minimum of 360 days of samples per year, and then average them. I haven’t uploaded the latest reports or code so I can’t point you to the sample size, but let me provide the 1940-2014 numbers.
Min temp average -0.00413011F
Max temp average 0.001059264F
Now this is great, here are the sums
Min temp -0.309758281F
Max temp 0.079444773F
I left the extra digits of precision to allow the reader to round to their own preferences, NCDC claims the measurements are +/-0.1F
Those sums are from 72992776 daily samples, so the sum of 73 million day to day temperature readings since 1940 is a fraction of a degree F, so you could argue that there’s less than 0.1F increase in max temps since 1940, and min temps have dropped -0.3F.
The temp series that are published and have more warming than this are made up!
And this is with their adjusted data.
IMO this, using their own data, proves there is no global warming, period.

Reply to  noaaprogrammer
July 13, 2015 11:40 am

There are 15,313 unique stations.

Reply to  noaaprogrammer
July 13, 2015 11:52 am

Let me make a correction on the sums: they are the sums of the annual sums, not the sum of all 73 million samples.
Okay
This is the sum of all min temps -455994.3F
Sum of all max temps 11656.9F
These values are the sums listed above divided by the number of samples.
Min temp -0.006247115F
Max temp 0.000159699F

Ivor Ward
July 13, 2015 5:13 am

……Peak waffle. (commieBob, July 13, 2015 at 4:44 am)

July 13, 2015 5:39 am

Yes, this is a good explanation of basic, fundamental statistical analysis that is (or should be) the foundation for any advanced-level course of study. What irks me is the fact that it is a fundamental idea that is often bypassed or ignored in the popular presentation of ideas in almost every post-hoc field of study (e.g. climate science, economics, nutrition, medicine, psychology). I suppose it’s boring and difficult to explain and doesn’t garner click-throughs or ad sales, but its absence changes the meaning of an explanatory article.
Really, we should be teaching the idea of uncertainty (in an appropriate form) from an early age and reinforcing it again and again throughout primary education. We can’t force people to understand but we can do better to make (most, i.e. ±5%, 19 times out of 20) people more aware of the reality of what we can know from empirical observation.

July 13, 2015 6:26 am

Think about the question we are asking and why.
The question: Was 2014 the warmest year on record?
Why ask it: Because it would indicate the warming that is predicted by those scared of AGW.
And if we refine that question to look at the thing we really want to know we get a new question:
New question: Was 2014 indicative of exceptional warmth and the feared hockeystick?
Answer: No, definitely not. We can’t tell if it was the warmest or not, but we can definitely tell that we can’t.
So it can’t be too far out there.

dmh
Reply to  M Courtney
July 13, 2015 10:08 am

I’m amused by the whole charade.
We live in a world where:
DAILY temperature ranges can be 20 deg C
ANNUAL temperature ranges can be 80 deg C
GEOGRAPHICAL temperature ranges (pole to equator) can be 120 deg C
So debating if any given year is the “hottest” by less than 1/10th of one deg C is akin to choosing a hay stack at random and calculating the chances that there is a needle in it. It doesn’t take the explanation in the article, or rgb’s explanation (which, as he noted, is predicated on the erroneous assumption that an average can be calculated at all), to figure out the ugly truth. If the change is so small that you have to debate its existence at all, it is a pretty safe bet that it just doesn’t matter.

Reply to  dmh
July 13, 2015 10:38 am

DAILY temperature ranges can be 20 deg C

The average of a large number of surface stations is ~18F

Richard G
Reply to  dmh
July 13, 2015 2:20 pm

In Eastern California, daily ranges for different stations separated by less than 200 miles are commonly 60f-80f. For individual stations it can be 30f-50f.

Alx
July 13, 2015 6:27 am

…and the unmeasurable, might-as-well-be-mythical, actual global temperature

Nice to see the term “mythical” used in relation to global temperature. How else could you describe such an elusive vaguely defined entity?

David Chappell
Reply to  Alx
July 13, 2015 7:36 am

I like to think of global temperature and its anomalies as the climate equivalent of the square root of minus one – an imaginary number – but without its equivalent usefulness in mathematics.

JimB
July 13, 2015 6:29 am

It seems to me that in an interglacial period, such as the one we are now experiencing, the trend would be for each year to be warmer than the last. Until the trend is broken and we are once again heading for an ice age. Is this naive thinking?

MikeB
Reply to  JimB
July 13, 2015 6:55 am

Temperatures in an interglacial tend to rise relatively quickly, then to fall back slowly to glacial conditions. This is the normal pattern.
The temperature peak in the current interglacial was attained about 8000 years ago and is known as the ‘Holocene Climatic Optimum’.
“…data indicate an extended period in the early to mid-Holocene when Scandinavian summer temperatures were 1.5 to 2 ºC higher than at present…… the mean July temperature along the northern coastline of Russia may have been 2.5 to 7.0 ºC warmer than present”
Since that time temperatures have tended to decline. [image]

gbaikie
Reply to  JimB
July 13, 2015 11:45 am

Perhaps in terms of ocean temperatures.
Or in terms of surface temperatures, it seems the average temperature have been falling for most of our interglacial period [8000 years]. Or the Holocene Maximum had significantly higher average surface temperatures. Wiki:
“The Holocene Climate Optimum warm event consisted of increases of up to 4 °C near the North Pole (in one study, winter warming of 3 to 9 °C and summer of 2 to 6 °C in northern central Siberia).[1] The northwest of Europe experienced warming, while there was cooling in the south.”
https://en.wikipedia.org/wiki/Holocene_climatic_optimum
It should be noted regarding wiki quote, that tropics don’t change much even during a glacial period- when average global temperature is 10 degrees cooler.
And glacial periods are *mostly* about the northern hemisphere [where most of World’s land mass is and where most humans live]

John Peter
July 13, 2015 6:31 am

Despite the disclaimer in the blog post I don’t like the setting aside of the “Mann made adjustments” to the various surface records. I cannot wait for Senator Inhofe to start his senate enquiry into the adjustments made by NOAA and GISS. Tony Heller has been posting on these now for years, but his blog posts are individually short and disjointed. WUWT should arrange for a proper analysis to be carried out in conjunction with The Global Warming Policy Foundation. In my opinion, the “homogenization” of global temperature records by NOAA/GISS/HADCRUT is the key issue in settling the amount of Global Warming occurring – if any.

Reply to  John Peter
July 13, 2015 9:34 am

There was a serious paper based on a Greek Ph.D thesis presented at the 2012 European AGU. It looked at all (163) long running (100 years) fairly complete (no more than 10% missing data) global GHCN stations. No geographic bias; the US was deliberately undersampled to get fairly uniform global coverage. It compared raw to homogenized and proved without any statistical doubt that there is a warming bias in the NOAA GHCN homogenization. Assuming the sample is representative of the GHCN whole, then about 1/3 (~0.25C) of the 1900-to-present warming is artificially induced by homogenization. Essay When Data Isn’t gives some examples from the paper.
Both the 2012 presentation and essay were one of two separate submissions to the GWPF initiative. The other looked at GISS homogenization for all CRN 1 USHCN stations in the surfacestations.org archive. It reached the same qualitative conclusion: 9 of 10 ideally sited suburban/rural stations were warmed by GISS homogenization. A root problem is the regional expectation basis of the PHA algorithm, which does not distinguish between well and poorly sited stations.

July 13, 2015 6:35 am

Let’s see, if I got this right, since 1850, 2014 is the year with the highest probability of being the warmest year but that probability is just 47% so it is more probable (53%) that it is not the warmest year. What a good bet, whichever side you take you win . . . just be careful to not be too specific about what you mean by ‘warmest year’ then defend your side of the bet and take the money.

MikeB
Reply to  John G.
July 13, 2015 7:00 am

If I toss a coin 1000 times the most likely outcome is that it will come down heads 500 times, but it is much more likely to be some other number.

Michael 2
Reply to  MikeB
July 13, 2015 10:22 am

Indeed — you can calculate a probability for each number of times the coin comes up “heads” in 1000 tosses. It closely approximates a Gaussian distribution. Thus it is most likely to be 500 times, slightly less likely to be 499 or 501, extremely unlikely to be 1 or 999. But when you compare the likelihood of it being 500 to all of the other 1000 possible counts, you have a weighted sum of all the other choices that vastly exceeds the likelihood of being exactly 500.

itocalc
Reply to  MikeB
July 13, 2015 11:59 am

For what it is worth, the binomial distribution tends to a Gaussian as the number of trials increases to infinity. The probability of getting exactly 500 heads out of 1000 coin flips is 2.5225%. The probability of getting 500 or fewer heads is 51.2613%.
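Both figures are straightforward to check. A quick sketch, assuming SciPy is available:

```python
from scipy.stats import binom

n, p = 1000, 0.5
print(binom.pmf(500, n, p))      # ~0.025225: exactly 500 heads
print(binom.cdf(500, n, p))      # ~0.512613: 500 or fewer heads
print(1 - binom.pmf(500, n, p))  # ~0.9748: "some other number", per MikeB
```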

mobihci
July 13, 2015 7:11 am

this post is like saying, please ignore the elephant in the room (satellite) and try to work out what the floor would look like under it. anyone that claims giss is an “excellent data set” has been drinking the koolaid, no doubt about it. what is the point of playing into these people’s fantasies?!
was 2014 the warmest? no.

The Ghost Of Big Jim Cooley
Reply to  mobihci
July 13, 2015 7:21 am

Indeed, there isn’t an excellent dataset. If there was, there would only be an argument about whether man is causing it or not 🙂

July 13, 2015 7:30 am

My comment is why are we trying to figure out if 2014 was the warmest year based on manipulated data?
It is a waste of time and effort.

Ed Zuiderwijk
July 13, 2015 7:35 am

If 2014 had been made by Karlsberg it probably would be ……..

Charlie
July 13, 2015 7:37 am

What are the chances of 1998 being the warmest year in the last 100? Is there any chance that the warmest year was in the 1930’s? Or is that blasphemy to suggest at this point? I don’t know what to make of the warmest-year claims in the 21st century considering all the data alterations leading up to this century, and then of course including the ones in this century. I would have to assume all of those to be sound adjustments. Even if I do, I’m not sure what a very slightly warmer year that could be the warmest means in correspondence to CO2, considering climatic observation going back to the little ice age and the warming thereafter. I realize these claims are for the media. That is another reason I’m very skeptical.

tomwys1
July 13, 2015 8:16 am

The question NOT raised is: “Was 2014 a dangerously warm year?”

knr
July 13, 2015 8:43 am

‘We have to estimate it from sparse and occasionally unreliable measurements.’
And on this quicksand they built a castle of ‘settled science’. Now that is amazing.

BruceC
Reply to  knr
July 13, 2015 8:36 pm

LOL … sorry, couldn’t resist.

Matt G
July 13, 2015 8:53 am

Whatever year becomes the warmest with the surface temperature data means very little with such frequent data adjustments during the decades. All they do is change the method of data collecting when a period shows no warming, to try to get what little warming there is from this change in future. The fact is the rate of warming is so little, even with a record warm year in future, that it falsifies the original scare of global warming. The only data sets that real scientists should take notice of are the satellite and balloon data. The surface data are changed so much that the previous decades have not been comparable for ages.
The main difference in world temperature between the 1930’s and now, for example, is not down to the planet being any noticeably warmer, but human adjustments consistently and increasingly make it look like there is a difference. The year 1998 has been frequently changed over the years with surface data sets, slowly and increasingly making it cooler. If the data was no good in 1998 and it needed frequent future adjustments, then the data is also no good now. Even for the almost coldest months ever recorded in recent years for the UK, the Met Office make it look like a touch below average in the climate data.

Scott
July 13, 2015 8:53 am

Is this entire discussion moot?
The satellite record does NOT put 2014 as near the warmest year at all since its inception in 1979.
The UAH and RSS records are arguably far more accurate than the land data for many reasons, beyond the scope of this comment.
Are the terrestrial temperature records so manipulated as to be of any value whatsoever?
If the research which may come out in the next year shows that data to be manipulated in such a way as to intentionally warm the recent past and cool the intermediate past (early to mid 20th century – as many suspect), the entire discussion becomes meaningless – moot.
The real and burning question is – what about that data? (land based).
That’s the real “hot” question.
I for one am certainly looking forward to see the GWPF reports.
Evidence now seems to strongly exist that Australian, South American and North American data have been consistently “massaged” to suit the needs of the alarmists.
We shall see……
