How Bad is HadCRUT4 Data?

Guest Essay by Renee Hannon

Introduction

This post is a coarse screening assessment of HadCRUT4 global temperature anomalies to determine the impact, if any, of data quality and data coverage. There has been much discussion on WUWT about the quality of the Hadley temperature anomaly dataset since McLean’s Audit of the HadCRUT4 Global Temperature publication, which is paywalled. I purchased a copy to see what all the hubbub was about, and it is well worth the $8 in my view. Anthony Watts’ review of McLean’s findings and executive summary can be found here.

A key chart for critical study is McLean’s Figure 4.11 in his report. McLean suggests that HadCRUT4 data prior to 1950 is unreliable due to inadequate global coverage and high month-to-month temperature variability. For this post, I subdivided McLean’s findings into three groups, shown with added shading:
•Good data covers the years post-1950. During this period global data coverage is excellent, at greater than 75%, and month-to-month temperature variation is low.
•Questionable data occurs from 1880 to 1950. During this period global data coverage ranged from 40% to 70%, with higher monthly temperature variations.
•Poor data is pre-1880, when global coverage ranged from 14% to 25% with extreme monthly temperature variations.

An obvious question is: how do data coverage and data quality impact the technical evaluation and interpretation of HadCRUT4 global temperature anomalies?

Detrending Methods

The monthly HadCRUT4 global temperature anomaly dataset, referred to as temperature hereafter, was detrended to compare temperature trends, deviations and the impact of noise. The focus was on interannual temperature anomalies. Therefore, it is important to remove underlying longer-term trends from the data.

A common method of detrending temperature data uses the slope of a linear regression over a period of time. Here, the linear trend from 1850 to 2017 is used to detrend the HadCRUT4 temperature data shown in Figure 1a. A simple linear regression through the entire dataset leaves a secondary trend in the data. As seen in Figure 1a, the underlying longer-term signal is not completely removed and the remaining data is not flattened. Several linear regression slopes would be required to completely remove the secondary trends.
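For readers who want to reproduce this step, here is a minimal Python sketch of single-slope linear detrending. It is illustrative only, not the author’s actual workflow; the array names and the synthetic series are assumptions.

```python
# Minimal sketch of linear detrending (illustrative; not the author's code).
# Assumes `t` is an array of decimal years and `anom` the matching monthly anomalies.
import numpy as np

def detrend_linear(t, anom):
    """Fit one straight line over the full record and return the residuals."""
    slope, intercept = np.polyfit(t, anom, deg=1)   # least-squares fit
    return anom - (slope * t + intercept)           # residuals keep any slower wiggles

# Synthetic stand-in data, just to show usage:
t = np.arange(1850, 2018, 1 / 12)
anom = 0.005 * (t - 1850) + 0.1 * np.sin(2 * np.pi * t / 60) + 0.05 * np.random.randn(t.size)
print(detrend_linear(t, anom).std())
```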

Figure 1: Comparison of commonly used detrending methods on temperature datasets. Gray boxes show statistics associated with each method. a) HadCRUT4 annual monthly temperature anomalies detrended using a single linear regression slope. b) HadCRUT4 annual monthly temperature anomalies detrended using a 21-year running average. Several temperature anomalies greater than 2σ are noted on the graph.

Another method used here is a centered running average of 21 years to detrend the temperature dataset. Since averaging degrades the tail end of the data, the last 10 years used a simple linear regression to extend past the running average. The running average with linear tail is subtracted from the HadCRUT4 temperature anomaly data. This removes the influence of underlying longer-term trends, and the remaining data appears flattened. As shown in Figure 1b, temperature data detrended using this method produces an average temperature close to zero, indicating a flattened trend. It is now easier to compare temperature spikes, the amplitude of key events, or changes in the baseline noise.
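A sketch of how this running-average detrend with a linear tail might be coded is shown below. It assumes a pandas Series of monthly anomalies indexed by decimal year; the 21-year window and 10-year tail follow the text, but the author’s exact implementation is not published, so treat the details as assumptions.

```python
# Sketch only: centered 21-yr running mean with a 10-yr linear tail extension.
import numpy as np
import pandas as pd

def running_mean_detrend(anom: pd.Series, window_years=21, tail_years=10) -> pd.Series:
    smooth = anom.rolling(window_years * 12, center=True).mean()   # NaN at both ends

    # Fill the trailing NaNs with a straight line fitted to the last `tail_years` of raw data
    t = anom.index.to_numpy(dtype=float)
    recent = t >= t[-1] - tail_years
    slope, intercept = np.polyfit(t[recent], anom.to_numpy()[recent], 1)
    missing_tail = t > smooth.last_valid_index()
    smooth[missing_tail] = slope * t[missing_tail] + intercept

    # Note: the leading half-window also has NaNs; how the post handled the
    # start of the record is not specified, so it is left undefined here.
    return anom - smooth
```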

Figure 1 enables comparison of the two detrending methods. Temperature standard deviations and ranges between the two methods are different. The detrended data using the running average has a smaller standard deviation and narrower temperature range of 0.4 degrees C versus 0.7 degrees C for the linear detrended data. This narrower range is more indicative of short-term climate variability compared to the linear detrended data which is a combination of both short-term and underlying longer-term trends.

Temperature Spike Frequency

At irregular intervals in the detrended data, a temperature spike lasting approximately 1-2 years emerges through the background noise, such as in 1878, 1976, 1998, and 2016, to name a few. These warm and cold temperature spikes are attributed to El Niño and La Niña conditions of the El Niño-Southern Oscillation (ENSO), which affect temperature, wind and precipitation (Trenberth et al.). Figure 2 highlights the temperature spikes which exceed two standard deviations from the zero baseline.

Figure 2: HadCRUT4 detrended by running average. Red and blue dots show temperature anomaly spikes greater than 2σ from the zero baseline.
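A minimal sketch of this kind of 2σ exceedance screen is given below, assuming the detrended series from Figure 1b is available as a pandas Series; the exact selection rules behind Figure 2 are not spelled out in the post.

```python
# Sketch of a simple 2-sigma spike screen (an assumption about the method, not the author's code).
import pandas as pd

def flag_spikes(detrended: pd.Series, n_sigma: float = 2.0) -> pd.Series:
    """Return the detrended anomalies that exceed n_sigma standard deviations."""
    sigma = detrended.std()
    spikes = detrended[detrended.abs() > n_sigma * sigma]
    return spikes   # positive values are warm spikes, negative values cold spikes
```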

Post-1950 there are only two cold and two warm spikes that exceed 2σ. The 1998 and 2016 warm spikes are associated with well-documented El Niño conditions (Trenberth). The period from 1950 to 1998 is unusually devoid of warm temperature spikes. During this interval, there were several large volcanic eruptions, at Mount Pinatubo in 1991, El Chichon in 1982 and Mount Agung in 1963, with continued weaker eruptions. Volcanoes produce sulfate aerosols that slow warming by reflecting incoming solar radiation and can cause global surface temperatures to decrease. Studies suggest that volcanic eruptions impact global temperatures about 1 year after the eruption and for approximately 2-3 years, depending on the size of the eruption.

The 1970s were years of record cold temperatures in parts of North America. These years coincide with the ice age scare and the Cold Wave of 1977 when, in the U.S., the Ohio River froze solid for the first time since 1918 and snow fell in Miami, with record low temperatures of less than 28 degrees F. La Niña conditions around 1956 resulted in a European cold wave. The years 1976/77 are commonly recognized as an abrupt climate shift from cooler to warmer global temperatures (Giese et al.). Post-1976 there is a lack of cold temperature spikes.

In contrast, pre-1950 there are double the number of warm and cold temperature spikes compared to post-1950, especially between 1880 and 1920. One hypothesis is that some warm and cold temperature spikes are the result of the increased noise in the data due to sparse data coverage, as described by McLean.

Other interpretations suggest the temperature spikes are related to an increased frequency of El Niño and La Niña events. For example, the period from 1900 to 1940 has been described as being dominated by the Era of El Niños, based on the Nino3.4 index summarized by Donald Rapp. However, during this period there were also three cold temperature spikes exceeding 2σ, as well as several warm temperature spikes immediately preceding the referenced Era of El Niños. Additional study is warranted to determine whether some of the warm and cold temperature spikes are caused by noisy data or are the result of true El Niño and La Niña episodes. It is difficult to distinguish background noise from anomalous spikes during this period.

Temperature Spike Amplitudes

The amplitudes of warm and cold temperature spikes in the detrended data were compared from 1860 to 2017 and are shown in Figure 3. The 1878 warm temperature spike is the most outstanding anomaly of the past 160 years. This warm spike is an order of magnitude greater than any other temperature excursion from a zero baseline and occurs in McLean’s sparse data coverage zone with the highest amount of noisy data. The 1878 warm spike may indeed be related to an El Niño episode; several authors have reported large-scale drought in northern China and famine in southern India during this time. However, the 1878 temperature could be over-estimated because of poor data coverage and quality. This outlier appears to be largely driven by the Northern Hemisphere temperature record.

Figure 3: Comparison of warm and cold global surface temperature spikes from 1860 to 2017. The data are the detrended data shown in Figure 1b using a running average. Spikes were aligned close to their maximum departure. Note the temperature scale on warm spikes is zoomed out due to the 1878 anomaly. The warm average temperature maximum does not include 1878.

Except for 1878, the other warm temperature spikes appear very similar and all peak just slightly above 0.2 degrees C. This casts additional suspicion upon the accuracy of the 1878 warm spike. Further, the cold spikes do not show any unusual trends. The 1976 cold spike is the coldest over the past 160 years, but not significantly so. Also, note both warm and cold spikes show nearly equal temperature departures of +0.23 and -0.22 degrees C from a zero baseline, respectively. Again, 1878 is an exception.

Observations

HadCRUT4 global temperature anomalies were assessed to determine the impact of varying data coverage and noise as described by McLean. It is recognized that sparser data coverage and increased noise have influenced interannual HadCRUT4 temperature anomalies pre-1950.

Interannual temperature spikes, both warm and cold, increase in frequency during the Questionable Data timeframe pre-1950. While several of these spikes may be associated with El Nino and La Nina events, others may be the result of increased noise within the data. This period of questionable data, especially the cluster of warm and cold spikes from 1880 to 1920, warrants further study.

The 1878 warm temperature spike is the most significant interannual temperature anomaly over the past 160 years. Its amplitude is almost double that of all other temperature spikes. This recorded temperature outlier is most likely erroneously high due to sparse global coverage and increased data noise.

Warm and cold spike maxima do not demonstrate any obvious increasing or decreasing trends from 1880 to the present day, except for 1878. The recent 1998 and 2016 warm spikes are within one standard deviation of past warm spikes.

The HadCRUT4 global data pre-1950 may contain useful information but should be used with caution, with the understanding that increases in noise due to poor data sampling may create false or enhanced temperature anomalies.

A future post will evaluate the data quality impact on the underlying HadCRUT4 decadal and multi-decadal trends.

Acknowledgements: Special thanks to Andy May and Donald Ince for reviewing and editing this article.

Comments
October 29, 2018 2:45 pm

The first graph illustrates my ongoing gripe about historic data.

Cabin boys chucking buckets over the sides of ships, and tea boys sent out into the snow to Stevenson screens to record data they don’t understand and don’t care about, is unreliable at best.

Including this crap data in anything involving tenths of a degree temperature shifts, along with paleo techniques and then satellite and Argo buoy data, isn’t just bad science, it’s criminally irresponsible when the lives of human beings are determined by it.

This isn’t science, it’s guesswork.

Percy Jackson
Reply to  HotScot
October 29, 2018 3:54 pm

So what is the alternative? Researchers can only work with the data that exists. And surely some data
is better than none? Or are you just going to give up and declare it is all useless?

Reply to  Percy Jackson
October 29, 2018 4:09 pm

4, 9, 1, 3, 9, 9, 7, 4, 1. Here is some data for you. Now please enlighten me how some random or false data can be better than no data.

Yes, false knowledge can be worse than ignorance, because it allows you to use this: https://en.wikipedia.org/wiki/Principle_of_explosion

Percy Jackson
Reply to  Adrian
October 29, 2018 10:05 pm

Adrian,
No one is suggesting using false or random data, but rather data that comes with increasingly large error bars as you go further back in time. As long as you know what the uncertainty is with the data you are using, and you make that clear, there is no issue with using it. After all, I don’t recall anyone complaining about the use of proxies by Dr. Svalgaard when trying to work out sunspot numbers 9000 years ago. Yet I would be surprised if that number had smaller errors than the best estimates of the global temperature from 100 years ago.

Jaap Titulaer
Reply to  Percy Jackson
October 30, 2018 1:39 am

>> No one is suggesting using false or random data but rather data that comes with increasingly large error bars as you go further back in time.

Yet they also do not do that.
They simply ignore any error bars and any variance.

Instead they ‘correct’ older data for Urban Heat Island effects where no cities (or suburban areas close to cities) existed at the time. Cooling the past, remember?

And they do not merely replicate the meager actual measurements across a large area before taking their averages; they just use them as weak indicators in purely model-based (assumed) temperatures to fill in the map.
Those models are CO2-driven models, which are also used today to fill in empty areas and to ‘correct’ for the many station moves. This explains why you see NO relationship between CO2 and actually measured temperatures, but you do see a correlation after these adjustments. More than 90% of the effect (of the correlation) seems to be added in this way.

One could use an old measurement for place X to determine the area around it by looking at measurements from today at X and the same area around it. But that is NOT what is done. They use a global model instead.

One could adjust original measurements for instrument changes, at the stations that underwent this change and at the exact time that this change occurred, but that is NOT what is done. Instead they use model-based assumptions to detect potential instrument changes and then use models to ‘correct’ them. They do not even check whether data suppliers have already corrected their data for instrument changes.

I’m OK with using data sets that are sometimes sparse in space or time, but you should not use that as an excuse to fill in the blanks with data that is derived from the very same model that is supposed to be ‘proven’ by the resulting adjusted data set…

Phoenix44
Reply to  Percy Jackson
October 30, 2018 2:06 am

The error bars are so large as to change the sign. So useless data.

chemamn
Reply to  Percy Jackson
October 31, 2018 11:44 am

How do you even know the uncertainty of the data?

Mack
Reply to  Percy Jackson
October 29, 2018 4:43 pm

“Surely some data is better than none?” Isn’t that the Michael Mann defence? Unfortunately, it’s difficult to determine what part of the old data is ‘good’ and what is ‘bad’ and, therefore, no definitive conclusions can be drawn from it, certainly nothing that would lead any sane individual to want to kneecap modern civilisation on the back of it.

BillP
Reply to  Mack
October 30, 2018 12:30 am

Michael Mann’s approach is worse than that: he selects the data that fits his theories and rejects the rest. This is most obvious for tree ring data.

Reply to  BillP
October 30, 2018 6:13 am

Of course. The data that fits his theories is good data, and the data which doesn’t is bad data. Why do you think he would use bad data? Is that what you would do? Of course you would reject bad data that might refute your theory.

sycomputing
Reply to  Percy Jackson
October 29, 2018 4:47 pm

Data about a system means nothing if you haven’t a proper understanding of the system:

https://www.history.com/news/a-brief-history-of-bloodletting

Hence, no, it is not the case that some data is better than none in such cases. In fact, applying your theory can have adverse consequences:

http://blog.yalebooks.com/2015/02/28/bloodletting-and-the-death-of-george-washington-relevance-to-cancer-patients-today/

MarkW
Reply to  Percy Jackson
October 29, 2018 4:55 pm

The alternative is to wait until you have good data.

Bad data invalidates any conclusions drawn from it. No matter how much these guys want to believe, the data does not support their beliefs.

Reply to  Percy Jackson
October 29, 2018 5:38 pm

It’s an iterative process. McLean and others have pointed out large periods of questionable data. So, now there are specific examples of outlier data points, such as 1878. Perhaps, data quality experts can begin to properly filter the data and/or correct past data collection oddities.

Reply to  Renee
October 29, 2018 5:49 pm

Spot on.

The data “are what they are”… However, the resolution and margin of error have to be respected… It’s highly analogous to processing seismic reflection data… And the Climatariat have routinely violated just about every principle of signal processing.

Komrade Kuma
Reply to  Percy Jackson
October 29, 2018 7:44 pm

The alternative to the CAGW alarmism we see today is to properly weight the older historical data for its clearly large error band and propensity to understate temperatures, hence manufacturing an uptrend going forward in time. Add to the simple uncertainty heat island effects going forward and deliberate biases (e.g. rounding up or down to suit the local circumstances) and all you have is data that might be useful to establish a global mean temperature +/- a statistical margin, but hardly suited to estimating the rate of change over time, because the latter value is well within the data uncertainty. To then double/treble/quadruple down by extrapolating to a theory of catastrophic warming is sheer speculative bubble stuff. It is irresponsible and unscientific.

Another aspect of this is that if there are long-period natural cycles reflected in the data, then it matters if you start and finish on a trough/crest or vice versa, as that alone will confect a trend. You can achieve that with data that conforms to a pure sinusoid. The logic of using the HADCRUT and other similar records as references for future projections is so full of holes it is completely and utterly nuts.

Gary Pearse
Reply to  Percy Jackson
October 29, 2018 8:10 pm

Percy, what can you be thinking? Is it that you don’t want to waste it, even if it hasn’t any meaning? You do understand that the problem is that in the distant past, only Europeans took temperatures, and so we have 90% of the thermometers in northern Europe, vast tracts on several entire continents unmeasured, and then sprinklings of added locations over the period of a century, and each year we are taking their average! Now what do you say we should do?

The only thing sensible is to use the few longest-standing records to get an idea of how temperatures have behaved over time for those localities. If we are all going to fry, even one thermometer, say in the UK, is probably going to eventually start to make heat records a bit more frequently going forward. That we don’t have a global temperature record is what it is! We should have started deploying modern self-recording stations when we had the 1930s hot period that caused so much worry; then we would have properly metered the 50s to 70s ice-age-cometh scare that had scientists in a tizzy. Then we could have a good picture of the warming up to 1998, followed by the 2-decade Pause, and exactly how all these variations compared to each other, instead of filling it all with uncertain fudgings that continue to this day. Do you know we are making forecasts for 2100 with confidence when we still don’t know what the 1950 temperature WILL BE (thanks to Mark Steyn’s remark at the Senate climate hearing a year ago).

LdB
Reply to  Percy Jackson
October 29, 2018 11:26 pm

Percy, the more important part is that the countries of the world aren’t even remotely in the running to reach what is required for emissions control, even if you buy that that is the problem. You would be better off spending money and time on more data and more proper hard research than expending political and real capital on a dead cause.

More monitoring and more involvement of actual physicists and engineers would be my recommendation. You have enough of the general biological and geology sciences; what you need is specific help to answer very fundamental questions, and they need to be tasked to those other fields.

Phoenix44
Reply to  Percy Jackson
October 30, 2018 2:05 am

No, wrong data is worse than no data.

I give you a scan. It says you have cancer. I treat you and you suffer greatly and the treatment shortens your lifespan. Oops, data was wrong. You then die from something I could have treated but had stopped looking for.

Wrong data is wrong. It sends you in the wrong direction.

angech
Reply to  HotScot
October 29, 2018 4:27 pm

The guesswork comes in taking such data as we have and then deliberately adjusting it down, Zeke and Mosher.

Old England
Reply to  HotScot
October 29, 2018 5:11 pm

They weren’t recorded in tenths of a degree, but modern claims of ‘unprecedented’ warming are based on temperature ‘changes’ that are too small to be recorded by instruments.

The artifice of ‘Climate change / AGW’ is simply that, an artifice created for the sole purpose of achieving a political outcome. The tragedy is that the outcome which is sought is an anti-democratic marxist-socialist unelected and unaccountable global government.

Paramenter
Reply to  Old England
October 30, 2018 2:41 pm

They weren’t recorded in tenths of a degree, but modern claims of ‘unprecedented’ warming are based on temperature ‘changes’ that are too small to be recorded by instruments.

That is called the ‘beauty of averaging’. How a set of inaccurate measurements with large uncertainties can still detect trends smaller than the resolution of the instruments is the recurring question here. The simple answer is: averages and anomalies, babe. Averaging and constructing series of ‘anomalies’ supposedly can detect small trends over inaccurate measurements.

Paul Penrose
Reply to  Paramenter
October 30, 2018 3:08 pm

Hopefully you are being sarcastic, because the best that averaging can do is improve the precision by removing normally distributed noise. It can’t improve the accuracy of the data. Hopefully that was your point.
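[A quick numerical illustration of Paul’s point, added here as a hedged sketch and not part of the original exchange: averaging many readings shrinks random, roughly normal noise by about 1/√N, but a bias shared by the readings passes straight through.]

```python
# Illustrative only: precision improves with averaging, accuracy does not.
import numpy as np

rng = np.random.default_rng(0)
true_temp, bias, noise_sd, n = 15.0, 0.3, 0.5, 1000   # bias = e.g. a shared siting offset

readings = true_temp + bias + rng.normal(0.0, noise_sd, size=n)
print(f"scatter of one reading : {readings.std():.3f}")                # ~0.5
print(f"scatter of the mean    : {noise_sd / np.sqrt(n):.3f}")         # ~0.016, much tighter
print(f"error of the mean      : {readings.mean() - true_temp:.3f}")   # ~0.3, the bias remains
```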

Paramenter
Reply to  Paul Penrose
October 31, 2018 6:19 am

Hey Paul,

I’m curious rather than sarcastic. And indeed, my point was to question how averaging techniques can improve the quality of underlying data with large uncertainties. It looks like in the presented temperature anomalies the uncertainties simply vanish by the action of a magic wand. And they vanish supposedly because measurements can have large uncertainties, yet we are still able to detect tiny variations in trends nevertheless. In recent months there were a few decent posts here about that topic, with good discussions too.

Reply to  Paul Penrose
October 31, 2018 9:33 am

Paul and Parameter,
In this post, standard deviation is noted on the figures and is a representation of uncertainty.

October 29, 2018 2:48 pm

Tent’s of a degree??????

Tenth’s of a degree……..sorry.

Patrick MJD
Reply to  HotScot
October 29, 2018 3:37 pm

It would probably pass the clisci test.

Reply to  HotScot
October 29, 2018 6:24 pm

This post presents basic statistics for short-term temp fluctuations, defined as less than 21 years. Interannual spike-to-spike temperature variation is about 0.5 degrees C, shown in Figure 3. The temperature range, or 2 SD, which includes background noise, is 0.4 degrees C, shown in Figure 2. Therefore, we can expect natural ENSO events to fluctuate 0.4-0.5 degrees C.

Gary Pearse
Reply to  Renee
October 29, 2018 8:34 pm

Renee, you did a very nice job of analyzing the data for quality based on statistical variation and you have well and truly shown that the early stuff is largely unfit for surmising what the climate may have done.

One dimension not analyzed is the state of flux of the data, which is essentially continuously adjusted. Hansen admitted that 1998, with the big El Nino, did not set a new record at the time. The 30s-40s decade was still the hottest. Since then they have pushed that hot period down half a degree, thereby erasing the very cold period of the 60s-70s and moving nearly all the warming that took place by 1940 to the present. It didn’t look good to them to have a 0.6C rise between 1880 and 1940 when CO2 wasn’t a factor.

Tom Abbott
Reply to  Gary Pearse
October 30, 2018 5:43 am

“The 30s-40s decade was still the hottest. Since then they have pushed that hot period down half a degree, thereby erasing the very cold period of the 60s-70s and moving nearly all the warming that took place by 1940 to the present. It didn’t look good to them to have a 0.6C rise between 1880 and 1940 when CO2 wasn’t a factor.”

Figure 2 above appears to show the correct temperature profile, with a highpoint in the 1930s-40s, a highpoint at 1998 and a highpoint at 2016. There are also lowpoints around 1910 and 1976, which were of similar magnitude. This temperature profile is consistent with both the Hansen 1999 US temperature chart profile and with the UAH satellite record.

Figure 2 does NOT resemble any of the bogus Hockey Stick charts like HadCRUT4, which have reduced the above-mentioned years to a gently rising curve. The Hockey Sticks have disappeared the 1930s highpoint, and now they have disappeared the highpoint of 1998, in their efforts to promote the “hotter and hotter” narrative. But the true trend is revealed in Figure 2.

Hansen 1999: [chart image]

The UAH satellite chart: [chart image]

The Earth’s climate warms for a few decades and then it cools for a few decades, and as you can see it has warmed up to modern levels many times in the past without the need for increased CO2 levels. If all that warming and cooling in the past was natural variation caused by Mother Nature, and it’s not any hotter now than in the past (in fact, it is cooler now than in the past, and getting cooler), then why should we assume that only CO2 could cause our current climate?

They are assuming CO2 is causing our current climate just because it is there. They assume it MUST be affecting the climate. To date, there is no evidence this is the case. The CO2 is there, but the extra warmth, not so much. At least not enough to measure.

Everything in this CAGW fraud is based on assumptions. And on lies.

Reply to  HotScot
October 30, 2018 10:57 am

… and if I may add, within a 0.5 degree total range?

All this for a 0.5 degree range? That’s like analyzing flea hairs [I guess fleas have hairs] in a discussion about how to groom your dog’s hair.

Reply to  Robert Kernodle
October 30, 2018 1:29 pm

Considering the IPCC is recommending temperatures don’t increase more than 1.5 degrees C, then 0.5 degree C of natural climate variation is 1/3 of that target.

Reply to  HotScot
October 30, 2018 4:56 pm

Tents of a degree are measured by yutes wit termometers.

October 29, 2018 2:55 pm

If the HC4 data needs a correction before 1950, then I suggest you can use the strength of the Earth’s magnetic dipole as a reference; it has been accurately measured since the 1880s.
http://www.vukcevic.co.uk/CT4-GMF.htm
Data is available from NOAA.

Tom Halla
October 29, 2018 3:02 pm

It is rather uninformative, if not deliberately deceptive, to present historical temperature records and trends as a single line. Perhaps a band, two or three standard deviations wide, would be more accurate. It would not fit the narrative some are trying to advance, but . . .

Jim Ross
Reply to  Tom Halla
October 30, 2018 2:13 pm

Fair point, though the data with uncertainty bands are indeed published by the Met Office. HadCRUT4 is a combination of CRUTEM4 (land) and HadSST3 (sea surface), and such plots are available at https://www.metoffice.gov.uk/hadobs/crutem4/data/diagnostics/global/nh+sh/index.html and https://www.metoffice.gov.uk/hadobs/hadsst3/diagnostics/index.html, respectively. I think that it is more important to look behind the global annual data, i.e. to look at land/sea data, monthly data and hemispheric data separately. If the data look odd, you need to check individual months/cells – all available on the MO website. For example, take a look at the contrast between the recent monthly HadSST3-NH time series since 2003 (which appears to contain significant annual/seasonal cycle effects) and the HadSST3-SH time series (which does not, at least not to as significant a degree):
http://www.woodfortrees.org/plot/hadsst3nh/from:1990/plot/hadsst3sh/from:1990
You can easily envisage (and easily check using WFT) what the resultant combined global data (e.g. from 2015 to 2017) will look like.

Reply to  Scott
October 29, 2018 3:20 pm

Scott,
It may be a record drought. I’m just pointing out that it is a temperature outlier that also occurs in a poor data zone.

Scott
Reply to  Renee
October 29, 2018 5:10 pm

yep no stress here Renee, I would do the same type of outlier assessment.

Just trying to help with some info that may explain the outlier. I am definitely not having a go at your analysis.

Tom Abbott
Reply to  Scott
October 30, 2018 6:29 am

Thanks for that link, Scott.

I started to suggest that maybe a search of weather-related newspaper articles for 1878 might be a way to establish how hot it was back then.

October 29, 2018 3:17 pm

While this is all very interesting, it is not the answer.

If all of this comes from the wonder molecule CO2, what about clearly explaining just what CO2 can and cannot do? Then the whole Green nonsense should collapse.

MJE

Reply to  Michael
October 29, 2018 5:43 pm

I did find it interesting that interannual temperature anomalies do not show obvious increasing trends in warm spikes due to increasing CO2 trends. You would think that would be an initial indicator or sign.

sycomputing
October 29, 2018 3:17 pm

…font is tiny and I am an elder[ling]…

Warren in New Zealand
Reply to  sycomputing
October 29, 2018 4:47 pm

Ctrl + is your friend

October 29, 2018 3:19 pm

“While several of these spikes may be associated with El Nino and La Nina events, others may be the result of increased noise within the data.”
That is the problem with this style of analysis. All you can say is that it looked different then. You can’t attribute it to any specific defect in the data; it might be real. The only way to decide is to do your own averaging in which you can see how variability of data has an effect.

I have found that there is indeed a divide around 1957. The reason is Antarctica, where regular observations pretty much started that year. Before that, the gap does contribute to instability in the global average.

And of course, things do get worse before 1880. That is why GISS and NOAA stop there. HADCRUT go back further, but publish their error estimates.

Scott Bennett
Reply to  Nick Stokes
October 29, 2018 10:15 pm

Data shmata!

Let’s be honest about HadCRAP! Whatever else it is, it isn’t real data!
It is an ensemble dataset of a hundred-thousand fiddles! That’s 100 areal lashes of tortured realisation for each erroneous grid box! Then – with a straight face – the average of the averages* – that is not the average – is taken Simpson’s paradoxically!

*The average for each grid box is “realised” one-hundred times and then these averages are averaged across the globe one-hundred times!

Geoff Sherrington
October 29, 2018 3:24 pm

It would be very helpful to continue this analysis, to estimate some effects of station changes over time. Put simply, stations are closed or dropped and new ones are added to the list. If stations from cold areas are dropped and replaced by stations from warm areas over time, an artificial warming trend will be created. Some insight could come from calculations using an unchanged set of stations, but this fails because long records at any station are rare. Apologists seem to use the anomaly method of reporting temperature to disguise this effect, which should be studied with absolute, not relative, temperatures. This mechanism of station selection is rarely mentioned by analysts and it cries out for proper study. I’d reckon that up to half of the alleged warming in the last century could be implicated. Geoff.

Steven Mosher
Reply to  Geoff Sherrington
October 29, 2018 4:13 pm

100% wrong, Geoff. You don’t need long stations… at all.

And we have done it without using anomalies.

HOW? we used the kriging YOU SUGGESTED

Number of stations

Here experiment

https://tools.ceit.uq.edu.au/temperature/#

Reply to  Steven Mosher
October 29, 2018 5:01 pm

Steven,
In 1878, the outlier appears to be associated with a NH land based station reading as far as I can tell.

Gary Pearse
Reply to  Renee
October 29, 2018 8:46 pm

I guess going forward we can see if the weather ever does jump around from time to time and get some idea of what can be expected in terms of variation. BEST saw a step up as invalid and slid the before and after to eliminate it. In some but not all cases there was not a station location change as I understand it. This means to me that BEST would eliminate any future Younger Dryas with their method.

Geoff Sherrington
Reply to  Steven Mosher
October 29, 2018 11:49 pm

100% correct, Steven. My suggestion is to look at station dispositions in more detail. It is this mechanism that causes the change from the early Hansen work with the 1940-80 cooling, compared with the no-cooling picture in modern data. There are unexplained questions to be answered.
BTW, I have never encouraged use of the anomaly method. The impression that it reduces mathematical uncertainty is junk science.
I have long encouraged use of geostatistics including kriging, but it is questionable to use origins on anomaly data. I would not use it without a bundle of caveats, or at all.
Steven, one or both of us is suffering from memory loss and I am not so sure it is you, so keep writing. Geoff

Gary Pearse
Reply to  Geoff Sherrington
October 30, 2018 9:17 am

Kriging is used successfully in calculating reserves of ore in hardrock or large alluvial gold, diamonds, tin… but they start with accurate assay data from a sampling pattern. It is not going to give you values of much use for warming of fractions of a degree over a century when the data points are, in the best of cases, +/-0.5C. Since fewer than half of the stations are demonstrably anywhere near that good and the distribution is poor, forget it. Load the ocean with more buoys, set up arctic stations – at least you get enhancement to work for you.

LdB
Reply to  Steven Mosher
October 29, 2018 11:59 pm

Steven, neither side likes what you have done and there are obvious problems with it which your group has refused to discuss or argue effectively. At a basic level your reconstruction connects a whole pile of unicorns, and because most of the data is going up your construction goes up. What it fails to do is clarify the situation at all; you have just blended a whole pile of existing, increasingly problematic data.

To the pro side your flawed blending just produces a slightly lower prediction. To the con side you don’t address the fundamental issues of the data and models itself. To pragmatists like myself who don’t believe emission control will work, your result basically says nothing is going to get bad enough for us to care about.

Science is not just research; at its core it is about research that is useful, and to all 3 groups what you are doing fails at being useful.

Phoenix44
Reply to  Steven Mosher
October 30, 2018 2:13 am

It always makes me laugh when statistics claims to overturn logic.

Imagine a world with one temperature station. Now move it somewhere that may or may not have different readings.

Solomon Green
Reply to  Steven Mosher
October 30, 2018 12:34 pm

Mr. Mosher,

Thanks for the link. I ran a 1 in 10 sample for the period 1930 to 1980 and found the trend was -001 per decade. I was quite excited at what I thought was an interesting trend. Idiot that I am, I ran another 1 in 10 to confirm the – 001 per decade, then a third and then a fourth and then … Each time I got the same trend. Too good to be true? Idiot that I am it took me half a dozen repeats to realise that I was getting the same random selection each time. RANDOM ???

D. J. Hawkins
October 29, 2018 3:37 pm

If the coverage is randomly decimated in the latter two periods to reflect the global coverage during 1850-1880, do similar extremes emerge?
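[One way to run the test proposed above is sketched below, purely as an illustration: decimate a well-covered gridded modern period many times and see how much the global mean spreads. The array names, the 20% figure, and the random-cell approach are assumptions; random cells are not the same as the actual 1850-1880 geographic pattern, as noted further down the thread.]

```python
# Sketch of the decimation experiment. Assumes a hypothetical gridded anomaly
# array `grid_anom` (n_months, n_lat, n_lon) with no gaps for a well-covered
# modern period, plus cell-area weights `weights` (n_lat, n_lon).
import numpy as np

def decimated_global_means(grid_anom, weights, keep_fraction=0.20, n_trials=100, seed=1):
    """Repeatedly keep only `keep_fraction` of grid cells (mimicking sparse early
    coverage) and recompute the area-weighted global-mean series each time."""
    rng = np.random.default_rng(seed)
    n_months = grid_anom.shape[0]
    trials = np.empty((n_trials, n_months))
    for k in range(n_trials):
        mask = rng.random(weights.shape) < keep_fraction     # random subset of cells
        w = np.where(mask, weights, 0.0)
        trials[k] = (grid_anom * w).sum(axis=(1, 2)) / w.sum()
    # A wider spread across trials (vs. the full-coverage mean) indicates how
    # much sparse sampling alone inflates apparent month-to-month variability.
    return trials
```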

Steven Mosher
Reply to  D. J. Hawkins
October 29, 2018 4:10 pm

don’t expect that kind of analysis here

D. J. Hawkins
Reply to  Steven Mosher
October 30, 2018 5:11 pm

Interesting, but that appears to be an attempt at sampling the whole set to see if the smaller sample retains the global characteristics, not a set that would reflect the coverage during 1850-1880, which we know is not geographically random.

October 29, 2018 3:37 pm

Blimey! all that detrending just to look at one warm spike. It isn’t much bigger than the 2015-16 El Nino spike when the 2014-15 El Nino is also included:
http://www.woodfortrees.org/graph/hadcrut4gl/from:1877/to:1879/plot/hadcrut4gl/from:2014/to:2017

Reply to  Ulric Lyons
October 29, 2018 4:00 pm

Ulric,
The detrending was done to evaluate all the warm and cold spikes from a zero baseline or datum, not just the 1878 spike. It popped out as an outlier.

Reply to  Renee
October 30, 2018 8:38 am

So there are two Super El Nino episodes in the series, which differ marginally.

October 29, 2018 3:41 pm

How can post-1950 be acceptable? There are fewer weather stations now than in 1960 covering less surface area.
https://data.giss.nasa.gov/gistemp/history/

The data base is inadequate at any time to determine anything and especially to construct computer models.

Latitude
October 29, 2018 3:57 pm

Personally I think it’s hysterical….according to the Met they can’t even accurately measure temp today
…they will adjust it tomorrow

and then turn around and say they can measure the temp 100 years ago

October 29, 2018 4:01 pm

In line with the IPCC, I want to know how HADCRUT 4 global interannual temperature anomalies compare between:
•The year 2003
And…
•The year 2033.
That is what is relevant now. If HADCRUT 4 can’t tell us that then it clearly doesn’t relate to climate, by the IPCC definition.
And thus the IPCC would find it to be worthless.

Philo
October 29, 2018 4:05 pm

Much of the data is further confounded by urbanization and industrialization. Many weather stations have been dropped and/or moved, the results have been smoothed, and other stations infilled, equipment changed, not operated properly.

Overall, to me trying to measure temperatures all over the globe in a scientific manner hasn’t even started yet. The closest thing available is the RSS and UAH measures, and they don’t necessarily agree with each other, though both come primarily from the same instruments.

LdB
Reply to  Philo
October 30, 2018 6:37 am

The Earth’s distance from the sun varies, as does its speed. The Earth is closest to the sun and moving fastest in January (Southern Hemisphere summer), and it is slowest and at maximum distance in July (Northern Hemisphere summer). The result of this is that the lengths of the seasons differ by over 10 full days.

Now think about having a uniform warming and look at what RSS and UAH are measuring. They won’t agree even in a perfect world; for one, you need to include the impulse waveform shape :-).

Walt D.
October 29, 2018 4:15 pm

Remember the old adage:
You can’t make a silk purse from a pig’s ear.

John Tillman
October 29, 2018 4:51 pm

As in the USSR, only the past is fungible. The future is certain!

And the present is whatever the gatekeepers want it to be at that particular moment.

John Tillman
October 29, 2018 5:07 pm

As bad as it wanna be!

Or need to be to further the scam and keep up the skeer.

With apologies to CSA GEN of Cavalry NBF.

Bob boder
Reply to  John Tillman
October 29, 2018 5:25 pm

You mean KKK NBF? Use another reference please.

John Tillman
Reply to  Bob boder
October 29, 2018 6:04 pm

In today’s climate, you may have a point.

But IMO, NBF has gotten a bad rap.

OK, so he was a slave dealer. So troops under his command might have murdered surrendering black Union troops at Ft. Pillow.

But his personal guard company included black Southerners loyal to the CSA and to him personally. And, while he was a mover, shaker and arguably founder of the KKK, he soon separated from the organization when its actions became, in his estimation, “unsound”.

This will earn me no PC points, but NBF was not the vicious white supremacist he has been portrayed as. I could well be wrong of course, but my take on the man is that he recognized the common humanity and indeed sterling qualities of black Southerners, but rebelled against the carpetbagging Northern and Scalawag domination of the South after the Civil War.

I could be wrong of course, but IMO he night-rode against Republicans, whether white Northern carpetbaggers or black and white Southern scalawags. There were vicious white supremacists in the immediate Antebellum South, but IMO NBF wasn’t among them.

It’s too easy, in our more enlightened times, to tar all white Southerners with the same brush. Longstreet and a host of other former CSA officers became Republicans or felt, as he did, that the South should first have freed black Southerners, then seceded.

Nothing was more surely writ than that black Americans would be free. It’s possible to argue that, due to Jim Crow after the end of Reconstruction, the war delayed full citizenship for black Southerners, at the cost of 750,000 lives among soldiers and who knows how many black and white civilians.

John Tillman
Reply to  Bob boder
October 29, 2018 7:41 pm

Bob,

Short version is that NBF’s KKK was a Democrat Party guerrilla force aimed against Republican domination of the South, whether the GOP candidates and voters were black or white.

When the KKK turned violent, rather than merely intimidatory, NBF gave up his Grand Kleagleship, or whatever rank he had. Unlike subsequent Democrat politicians well into the late 20th century.

Not condoning any night riding against occupying GOP forces, but IMO NBF differed from later KKK fellow travelers like President Wilson and Senator Byrd in that he opposed lethal violence against Republicans, whether black or white, while nevertheless remaining a loyal Democrat, both at Ft. Pillow and after the war.

Editor
October 29, 2018 5:52 pm

If Renee Hannon doesn’t run for president of AAPG… It’s time to start a write-in campaign!

Renee
Reply to  David Middleton
October 29, 2018 7:17 pm

Thanks David, for reminding me to renew my AAPG subscription.

Frank
October 29, 2018 7:35 pm

Renee: If you look at the rate at which forcing allegedly has increased, it doesn’t make much sense to linearly detrend across the whole period. Assuming warming is being driven by the growth in all forcing, two periods would be more appropriate: 1860-1960 and 1960 to present. FWIW, attribution studies suggest the warming before 1950 was not mostly driven by rising GHGs.

http://www.ipcc.ch/report/graphics/images/Assessment%20Reports/AR5%20-%20WG1/Chapter%2008/Fig8-18.jpg
AR5 WG1 Figure 8-18

Reply to  Frank
October 30, 2018 7:25 am

Frank,
Yes I agree that linearly detrending across the whole period does not remove underlying long-term trends. One would need to use multiple linear trends across several periods as you suggested. I simply posted it for comparison to the running average detrending which does a much better job at flattening the data.

October 29, 2018 7:48 pm

There’s a couple of things I’d like to ask, being one who dabbles in data analysis myself.

Does the running-average detrending affect the spike shape? I am a bit dubious because a running average also works as a (lousy) low-pass filter, which may also remove significant high-frequency spikes and distort the remaining ones.
Why use a 21-year window? (I’m not implying it is wrong, I would like to know the reason why, though.)
The segmented regression technique seems more appropriate to detrend such a time-series, and it has no filter effect.

What method do you use for spike detection? I know from experience that those are very sensitive to parameters like window width.

Reply to  Flavio Capelli
October 30, 2018 5:30 am

Centered running averages of 16- to 41-year periods were evaluated. The 16-year running average was still noisy and not very smooth, suggesting short-term variations are still retained in the average. The 31- and 41-year averages began to trend above the midpoint of temperature troughs and below the midpoint of temperature peaks. This will tend to underestimate temperatures in lows and overestimate temperatures in peaks during detrending. Additionally, the elimination of data on the tail ends for the 31- and 41-year averages is excessive. Therefore, 21 years was chosen.
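[For anyone wanting to repeat that comparison, a minimal sketch is given below; it assumes a pandas Series `anom` of monthly anomalies and is not Renee’s actual code.]

```python
# Compare candidate smoothing windows for the detrend step (illustrative only).
import pandas as pd

def compare_windows(anom: pd.Series, years=(16, 21, 31, 41)) -> pd.DataFrame:
    """Centered running means for each window, for judging smoothness and how
    many months are lost to NaNs at the ends of the record."""
    return pd.DataFrame(
        {f"{y}-yr": anom.rolling(y * 12, center=True).mean() for y in years}
    )
```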

Warren
October 29, 2018 9:12 pm

hadCRUDbe4

Alan Tomalty
October 29, 2018 9:16 pm

https://www.sciencedaily.com/releases/2010/03/100308203308.htm

There is so much we don’t know. Just reading the abstract can give you a glimpse of what could really be driving climate change. We know that the Earth’s rotation has not been constant over its 4.6 billion year life. The article points to theoretically being able to calculate many parameters of the Earth. Could fluctuations of ice ages based on temperature be connected to the rotation speed? The world’s distraction with CO2 is holding back a lot of good science exploration.

October 29, 2018 11:09 pm

detrended – Wiktionary (https://en.wiktionary.org/wiki/detrended): adjective (not comparable); (statistics, said of data) having long-term trends removed in order to emphasise short-term changes.

I have never understood what this word means. I do understand that historic temperature records get worse the farther back you look. I also understand that reporting changes for the worse over time is loved by the alarmists.

I am not sure this word means anything at all, other than, “We have altered the data.” My engineering professors would have failed me and drummed me out of school if I had ever tried this…

Robertfromoz
Reply to  Michael Moon
October 30, 2018 1:33 am

If the data is not reliable and obviously wrong, there is a problem, full stop. Using this data and trying to homogenise it further leaves something that looks like data, smells like data, but is really a pile of horseshit.
Scrap the lot and start again, which will give the IPCC a chance to add 10 years to the deadline for when we all burn in hell. (Sarc).

Once you adjust observed actual temps or adjust for stuff-ups, your data has stopped being fit for purpose.

Flavio
Reply to  Robertfromoz
October 30, 2018 2:22 am

Detrending is not homogenization, neither adjusting temperatures.

It is a legit operation performed on time-series to remove some features (specifically, long-term trends) before proceeding with the analysis of other features – like periodic phenomena using a Fourier analysis.

Detrending is not altering data, when done for the proper purposes.

Greg
Reply to  Flavio
October 30, 2018 4:07 am

Detrending IS altering data, whatever the motivation.

It is often better to take the first difference of a data series before FA, especially if there is strong autocorrelation, as there is in physical data like temperature.

Reply to  Greg
October 30, 2018 4:25 am

Let me rephrase then: “Detrending is not doctoring the data to produce a false warming signal.” Because that’s what I think the other commenters above implied.

“It is often better to take the first difference of a data series before FA.”
Could you point to some sources regarding this? I am quite interested in FA of time series.

Greg
Reply to  Greg
October 30, 2018 5:35 am

” I am quite interested in FA of time series.”

I don’t have anything I can link, though it should not be hard to find by searching. It’s pretty standard.

I suggest you first look into the preconditions required for FA to be applied. One thing is no long-term shift of the mean. That is often what is being addressed by linear “detrending”.

Since FA essentially repeats the TS end to end to produce an infinite series, any trend will produce a ramped sawtooth which will also show up in the freq domain. Differentiation turns the linear trend into a constant which does not affect the FA. It also acts as a high-pass filter and attenuates longer periods, so this has to be borne in mind in looking at the results.

You can look up AR1 auto-correlation and using first diff (point to point subtraction) to remove it. Any periodic signals will be preserved under differentiation.
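[A tiny sketch of that idea, illustrative only and assuming an evenly spaced 1-D numpy array `x`:]

```python
# First-difference the series before a Fourier analysis (sketch only).
import numpy as np

def diff_spectrum(x, dt=1.0):
    """Differencing removes a linear trend and damps AR(1)-style autocorrelation,
    but it also acts as a high-pass filter, so long periods are attenuated."""
    dx = np.diff(x)                                   # x[i+1] - x[i]
    freqs = np.fft.rfftfreq(dx.size, d=dt)
    power = np.abs(np.fft.rfft(dx - dx.mean())) ** 2  # periodic signals survive differencing
    return freqs, power
```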

You may like to look into the frequency response of a few filters (like gaussian vs runny mean). Every other lobe on RM is actually negative, meaning that it inverts that part of the data.
[filter frequency response comparison image]

I hope that points you in the right direction.

Peter Sable
October 30, 2018 12:26 am

Why moving average filters are bad:

https://www.researchgate.net/post/What_are_the_disadvantages_of_moving_average_filter_when_using_it_with_time_series_data

You can have up to 25% error on peak analysis that you are doing because the first bounce in the stop band is about 25% of the original signal AND its phase is inverted.
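[For the curious, the size of that first stop-band bounce is easy to check numerically; the sketch below, illustrative only, evaluates the zero-phase frequency response of an N-point running mean and prints its largest negative lobe, on the order of 20%, i.e. of the same order as the ~25% figure quoted above.]

```python
# Frequency response of an N-point running mean (boxcar): the Dirichlet kernel.
import numpy as np

N = 21                                               # e.g. a 21-point running mean
f = np.linspace(1e-6, 0.5, 5000)                     # frequency, cycles per sample
H = np.sin(np.pi * f * N) / (N * np.sin(np.pi * f))  # zero-phase response
print(f"largest negative lobe: {H.min():.3f}")       # about -0.22: that band comes through inverted
```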

Reply to  Peter Sable
October 30, 2018 2:47 am

It’s better to use a segmented linear regression for detrending.

Greg
Reply to  Flavio Capelli
October 30, 2018 4:01 am

It is better to use a proper low-pass filter.
Some are suggested here:
https://climategrog.wordpress.com/2013/05/19/triple-running-mean-filters/
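[One of the options from that link, cascading three running means, can be sketched as follows; this is illustrative only, and the ~1.34 window ratio matches the 252/188/140-month example quoted later in this thread.]

```python
# Cascaded ("triple") running mean as an approximately gaussian low-pass (sketch).
import pandas as pd

def triple_running_mean(x: pd.Series, w1: int = 252) -> pd.Series:
    """Three passes of a centered running mean, each window ~1.34x shorter than
    the last, which suppresses the boxcar's negative side lobes."""
    w2 = round(w1 / 1.3371)
    w3 = round(w2 / 1.3371)
    y = x.rolling(w1, center=True).mean()
    y = y.rolling(w2, center=True).mean()
    return y.rolling(w3, center=True).mean()
```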

Reply to  Greg
October 30, 2018 4:32 am

Then subtract the filtered series (low-frequency components) from the original one so that only the high-frequency components are left, correct?

Still, I wouldn’t expect much difference between different valid methods.

On the other hand, the frequency response of running means is bad.

Greg
Reply to  Flavio Capelli
October 30, 2018 5:46 am

Yes, HP = 1 - LP. Standard filter basics.

The key word is “valid”, and RM is not a “valid” filter. Others have pros and cons you can debate about and choose. If you think RM is not too good but probably OK in practice, try some data. Here is one of the comparisons from my article.

[filter comparison image from the linked article]

Just look at what happens around the largest troughs and peaks: RM often inverts a peak. That is why I am so scathing about this article, which aims to look at peaks after such a distorting filter has been used.

Look at the SSN example there too. The peak of the last cycle gets clobbered and the 13mo RM ends up with a later peak. Despite my bringing this to the attention of the maintainers, and their having admitted it was a problem, this is still the preferred filter they use.

RM is so ingrained no one seems to be able to let it go. The fact that you may actually have to do some work and learn seems to be an insurmountable barrier for most people, even PhD scientists.

Greg Goodman
Reply to  Peter Sable
October 30, 2018 4:00 am

Thank you Peter. Same point I have just made.

Looking at fig 1b: the height of the 1878 spike is about 10% bigger after filtering. How does that work?

The 1878 warm temperature spike is the most significant interannual temperature anomaly over the past 160 years. Its amplitude is almost double that of all other temperature spikes. This recorded temperature outlier is most likely erroneously high due to sparse global coverage and increased data noise.

What is the basis of that conclusion? Where does the expectation come from that all spikes should be the same size, and that if they are not they must be erroneous? That’s without noting that the spike was artificially increased by the crappy data processing.

This article is severely flawed from the outset and is not a basis for criticizing HadCRUFT4.

It would make more sense to ask why you have a dataset which mixes incompatible land and sea data.

Hermit.Oldguy
October 30, 2018 1:06 am

“During this period global data coverage is excellent at greater than 75% …”

Is this science, or comedy?

Greg Goodman
October 30, 2018 3:16 am

Another method used here is a centered running average of 21 years to detrend the temperature dataset. Since averaging degrades the tail end of the data, the last 10 years used a simple linear regression to extend past the running average.

Oh for pity’s sake , here we go again.

RUNNING MEANS ARE CRAP. Find a decent filter.
Please read and understand : Data corruption by running mean “smoothers”
https://climategrog.wordpress.com/2013/05/19/triple-running-mean-filters/

The idea of using a high-pass filter for “detrending” is fine and you can subtract the result of a low-pass filter to achieve that end.

Averaging does not “degrade the tail end”, it removes it, since you can’t find the average. If you switch to a different method you are just introducing mess into the data. If there’s no data, live with it.

The rest of the article is not worth reading, since the author simply does not have the first idea what he is doing to the data. Many of the “spikes” he is looking at will be either a result of, or at least corrupted by, the data distortions of the running mean he starts with.

Greg
Reply to  Greg Goodman
October 30, 2018 5:50 am

Look at the height of the 1878 peak above the previous trough. In fig 1b it is about 10% higher than in fig 1a.

What’s up with that?

The author then goes on to conclude it is anomalously high. Some of that was his fault.

Reply to  Greg
October 30, 2018 6:45 am

I agree: using the previous trough, the 1878 peak is slightly higher in fig 1b than 1a (0.58 versus 0.55 deg C), not quite 10%. If you use the following small trough, then the 1878 peak is about the same in fig 1a and fig 1b (0.40 versus 0.39 deg C, respectively).

However, if you use the following large trough, then the 1878 peak is larger in fig 1a than 1b (0.61 versus 0.55 deg C, respectively). So which trough do you use, the previous or the following? That’s why it’s important to flatten the data to remove underlying trends, which is what fig 1b attempts to do.

Greg
Reply to  Renee
October 30, 2018 8:00 am

From your first para, it is clear that the 21y RM was sticking a bump under the 1878 peak. This is because the filter turned the peak into a trough, which gets inverted when subtracting the two. This is exactly the kind of thing I was pointing to.

You will see how big it is if you plot the gaussian and RM together.

It may not be big enough to change the identification of this as the largest peak, but ‘phew, I got away with it’ is not a good basis for any analysis. How big the effect is depends on the nature of the data in relation to the window length of the filter. It is totally unnecessary distortion which can be avoided by using a proper filter. If you’ve equipped yourself with some better options, I’m glad you picked up on my comments.

Greg

Reply to  Greg
October 30, 2018 9:14 am

Greg,
I specifically used 21y because it did not stick a bump under any peaks and invert them. Now if you use an RM less than 21 years, like 16 years for example, then you start seeing bumps.

https://imgur.com/51SVjTr

Greg
Reply to  Greg
October 30, 2018 11:48 am

If you do a triple RM (close to a gaussian) you will find there is some deviation around that period. You can also see the amount of short-term variation that gets past even a very long period RM.

http://woodfortrees.org/plot/hadcrut4gl/mean:252/mean:188/mean:140/plot/hadcrut4gl/mean:252

You will also see that the RM runs persistently higher and lower over extended periods. All of this is very dependent on the data, so you cannot know whether there is a data inversion without comparing to a clean filter.

You can usually get a better “smoothing” with a comparably shorter 3RM, e.g.
http://woodfortrees.org/plot/hadcrut4gl/mean:144/mean:107/mean:80/plot/hadcrut4gl/mean:252

I actually don’t see what the point of this article was, since there is no way to say whether there was more climate variability at the end of the 19th c. or not, certainly not just by looking at this data. That could very well be a true difference at that time.

BEST land data goes back further and shows much greater swings in the early part of the record.

Climate does change, we have to live with that.

Reply to  Greg
October 30, 2018 1:51 pm

Greg,
I actually did look at a triple RM; it is smoother, but you lose even more data on the tail ends, as shown in your graphs.

Reply to  Greg Goodman
October 30, 2018 7:05 am

Greg,
Of course there are more sophisticated filters than a running average, such as gaussian or loess filtering. As I mention in the first sentence of the post, this was a coarse screening analysis.

Switching methods only affected the last 10 years of data out of 160 years and impacts one peak, 2016.

Greg
Reply to  Renee
October 30, 2018 7:50 am

Neither gaussian nor RM will provide you a value for the last 10y on a 21y window, so I don’t quite get how that change can affect 2016.
Applying an ad hoc, home-made pastiche of techniques is not good (typical climatology games).
Maybe you used loess, which applies different filtering in order to fudge a result for the end sections, so I won’t use it.

A gaussian would be a fairly typical choice for this kind of job. If you have sussed out how to use some well-behaved filters, that will be a bonus for future work. Good man.

Reply to  Greg Goodman
November 2, 2018 11:01 am

Here’s the HadCRUT global dataset isolated with a loess filter. Looks very similar to the running average. 1878 still looks interesting.

https://imgur.com/yc1rCHb

October 30, 2018 6:03 am

Speaking of bad data, the weather station at the Cornell Cooperative Farm south of Canton NY used to be a mile away at a higher altitude in the middle of a field on a flat bit of land. Now it is at the farm itself, 15′ from a parking lot , 20′ from a house, on the north slope of a hill.

None of that shit is in the official record of the site.

Global warming is nonsense.

tty
October 30, 2018 6:22 am

As for the 1878 spike, there is little doubt that the 1877-78 El Nino was an exceptionally strong one, quite possibly the strongest for several centuries. It caused very large famines in Asia and extreme weather events in South America.

http://repositorio.uchile.cl/bitstream/handle/2250/125772/Aceituno_Patricio.pdf?sequence=1&isAllowed=y

LdB
Reply to  tty
October 30, 2018 6:57 am

The interesting part of that answer is how confident you are about the readings in 1878. You may want to look at the history of temperature measurement. The Stevenson screen doesn’t even take its standard form in publication until 1884, and then there are some implementation lags. Your great famines may be nothing more than a moderate weather event exacerbated by other population or local events. The trouble with historical events with little actual reliable data is that it is easy to make a story, and that cuts both ways, to both sides of the argument.

tty
Reply to  LdB
October 30, 2018 2:02 pm

No matter how unreliable temperature readings were in 1878 they were probably not much worse than in 1877 or 1879, and meteorologists back then were well aware of the need for screening thermometers even though they used somewhat different screens.

“Your great famines may be nothing more that a moderate weather event exacerbated by other population or local events.”

They killed about 3 % of the world population, so probably were rather more than local events.

Jim Ross
Reply to  tty
October 30, 2018 2:47 pm

tty,

I completely agree with you and Scott (who commented upthread). There is very strong evidence that the temperature spike in 1878 (which may or may not be as big as shown) was largely a consequence of a major El Niño. This would be entirely consistent with the more recent temperature spikes, which also correspond with El Niños (as well as spikes in CO2 growth rate). Exceptions appear to reflect “interference” effects of volcanic eruptions or other issues, which may well include data busts or coverage bias. I think that Renee’s analysis is a good start and I do not wish to undermine it in any way. At the same time (as I commented above) we need to be very careful trying to interpret data that reflect multiple sources and/or the whole globe, when regional effects may be a significant complicating factor. For example, the spike in 1878 would seem to show up best in the tropics SST data, as would be expected if the primary cause is El Niño.

knr
October 30, 2018 8:04 am

Science 101: if you cannot accurately measure it, you can only guess it, and calling it anything but a guess, such as a theory or ‘projection’, and using models, does not change the reality that you need to take these approaches because you are unable to measure in the first place.

Willard
October 30, 2018 8:15 am

Just a couple of nit-picky things:

“which effect temperature, wind and precipitation” looks like it should have “affect” instead of “effect.” While some things do create temperature, wind, and precipitation, I think you meant that they change them.

“This warm spike is an order of magnitude greater than any other temperature excursion from a zero baseline” made me think the spike would be 2° C larger since the others seem to be 0.2° C above the baseline. However, it was only 0.4° C above the baseline. I may be misusing the term, order of magnitude, but I thought it meant that we had something ten times smaller or larger than the items to which we’re comparing them since we use a base-10 system. My head cold might be affecting my thoughts on this, though.