Guest Post by Willis Eschenbach
I have long suspected a theoretical error in the way that some climate scientists estimate the uncertainty in anomaly data. I think that I’ve found clear evidence of the error in the Berkeley Earth Surface Temperature data. I say “I think” because, as always, there may well be something I’ve overlooked.
Figure 1 shows their graph of the Berkeley Earth data in question. The underlying data, including error estimates, can be downloaded from here.
Figure 1. Monthly temperature anomaly data graph from Berkeley Earth. It shows their results (black) and other datasets. ORIGINAL CAPTION: Land temperature with 1- and 10-year running averages. The shaded regions are the one- and two-standard deviation uncertainties calculated including both statistical and spatial sampling errors. Prior land results from the other groups are also plotted. The NASA GISS record had a land mask applied; the HadCRU curve is the simple land average, not the hemispheric-weighted one. SOURCE
So let me see if I can explain the error I suspected. I think that the error involved in taking the anomalies is not included in their reported total errors. Here’s how the process of calculating an anomaly works.
First, you take the actual readings, month by month. Then you take the average for each month. Here’s an example, using the temperatures in Anchorage, Alaska from 1950 to 1980.
Figure 2. Anchorage temperatures, along with monthly averages.
To calculate the anomalies, from each monthly data point you subtract that month’s average. These monthly averages, called the “climatology”, are shown in the top row of Figure 2. After the month’s averages are subtracted from the actual data, whatever is left over is the “anomaly”, the difference between the actual data and the monthly average. For example, in January 1951 (top left in Figure 2) the Anchorage temperature is minus 14.9 degrees. The average for the month of January is minus 10.2 degrees. Thus the anomaly for January 1951 is -4.7 degrees—that month is 4.7 degrees colder than the average January.
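The two-step recipe above (monthly averages, then subtraction) can be sketched in a few lines. The numbers here are invented for illustration, not the actual Anchorage data:

```python
import numpy as np

# Hypothetical monthly mean temperatures (deg C): rows = years, cols = Jan..Dec
temps = np.array([
    [-14.9, -12.0, -6.5, 1.2, 8.0, 12.5, 14.1, 13.0, 8.5, 1.0, -6.0, -11.5],
    [ -9.8, -11.1, -5.0, 2.0, 8.8, 13.0, 14.5, 12.4, 7.9, 0.2, -7.2, -10.0],
    [-10.2,  -9.5, -4.8, 1.8, 7.5, 12.1, 13.8, 12.9, 8.2, 0.8, -6.6, -10.9],
])

# Step 1: the "climatology" -- the average of each calendar month (each column)
climatology = temps.mean(axis=0)

# Step 2: the anomaly -- each reading minus its month's average
anomalies = temps - climatology

# By construction, each month's anomalies average to zero over the base period
print(np.round(anomalies[0, 0], 2))  # first January relative to the January mean
```

Note that the anomalies for any one calendar month necessarily average to zero over the period used to build the climatology.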
What I have suspected for a while is that the error in the climatology itself is erroneously left out when calculating the total error for a given month’s anomaly. Each of the numbers in the top row of Figure 2, the monthly averages that make up the climatology, has an associated error. That error has to be carried forward when you subtract the monthly averages from the observational data. The final result, the anomaly of minus 4.7 degrees, contains two distinct sources of error.
One is the error associated with that individual January 1951 reading, -14.9°C. For example, the person taking the measurements may have consistently misread the thermometer, or the electronics might have drifted during that month.
The other source of error is the error in the monthly averages (the “climatology”) which are being subtracted from each value. Assuming the errors are independent, which of course may not be the case but is usually assumed, these two errors add “in quadrature”. This means that the final error is the square root of the sum of the squares of the errors.
One important corollary of this is that the final error estimate for a given month’s anomaly cannot be smaller than the error in the climatology for that month.
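The quadrature rule, and the corollary that follows from it, can be checked with a couple of lines. The sigma values here are invented placeholders, not Berkeley Earth’s numbers:

```python
import math

def anomaly_error(sigma_reading, sigma_climatology):
    """Error of (reading - climatology), assuming the two errors are independent."""
    return math.sqrt(sigma_reading**2 + sigma_climatology**2)

sigma_clim = 0.30    # illustrative error in the January climatology (deg C)
sigma_month = 0.25   # illustrative error in one January's reading (deg C)

total = anomaly_error(sigma_month, sigma_clim)
print(round(total, 3))  # 0.391 -- necessarily >= sigma_clim
```

Because a square root of a sum of squares can never be smaller than either term, the total can never drop below the climatology error, which is the corollary stated above.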
Now let me show you the Berkeley Earth results. To their credit, they have been very transparent and have reported various details. Among the details in the data cited above is their estimate of the total, all-inclusive error for each month. And fortunately, their reported results also include the following information for each month:
Figure 3. Berkeley Earth estimated monthly land temperatures, along with their associated errors.
Since they are subtracting those values from each of the monthly temperatures to get the anomalies, the total Berkeley Earth monthly errors can never be smaller than those error values.
Here’s the problem. Figure 4 compares those monthly error values shown in Figure 3 to the actual reported total monthly errors for the 2012 monthly anomaly data from the dataset cited above:
Figure 4. Error associated with the monthly average (light and dark blue) compared to the 2012 reported total error. All data from the Berkeley Earth dataset linked above.
The light blue months are months where the reported error associated with the monthly average is larger than the reported 2012 monthly error … I don’t see how that’s possible.
Where I first suspected the error (but have never been able to show it) is in the ocean data. The reported accuracy is far too great given the number of available observations, as I showed here. I suspect that the reason is that they have not carried forward the error in the climatology, although that’s just a guess to try to explain the unbelievable reported errors in the ocean data.
Statistics gurus, what am I missing here? Has the Berkeley Earth analysis method somehow gotten around this roadblock? Am I misunderstanding their numbers? I’m self-taught in all this stuff and I’ve been wrong before, am I off the rails here? Always more to learn.
My best to all,
w.
richardscourtney says:
August 19, 2013 at 8:00 am
NO!
If incoming and outgoing energy are in balance, the earth’s EFFECTIVE RADIATIVE TEMPERATURE remains constant.
Woot, you got that bit … we can do a full closure if you simply now measure the sun’s output, because that varies.
So now we drill each of the earth energies apart.
Full balance …. so you are monitoring energy in temperature change
Full balance ….. so you would be monitoring Stephens PE
Full balance ….. so you would be monitoring chemical uptakes ocean/plant etc
Full balance ….. so you monitor any thermal energy coming out of earth
I am not a climate scientist; there are probably a lot more, like winds etc.
Different groups may argue over which things count as climate change. Some may want to include Stephens PE; some won’t, arguing they are only interested in temperature.
It matters not you can easily work the views between those two reference frames.
So we have Group A, who says temperature is the only climate change. Group B says no, climate change is temperature and PE gain.
Group B publishes a report saying PE is increasing: we have terrible climate change. Group A says no, look, the temperature is constant; there is no climate change. See, you can reconcile the two views, and eventually the groups will start talking about energy, which is the reality of what you are trying to do.
The basic problem is that the warmistas use averaging as a panacea for errors. Averaging can only remove RANDOM error, not systematic error. Most of the error is systematic. Bad siting. Bad adjustment methods. Etc. You can sink months into trying to get them to understand that and get nowhere. I learned it in high school chemistry, but they didn’t, it would seem.
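The distinction drawn above can be demonstrated with a toy simulation (all numbers invented): the random component shrinks as 1/sqrt(n), but a fixed bias passes straight through the mean.

```python
import random

random.seed(0)

true_value = 15.0   # the quantity we wish we were measuring (illustrative)
bias = 0.8          # systematic error, e.g. bad siting (illustrative)
n = 10_000

# Each reading = truth + fixed bias + fresh random noise
readings = [true_value + bias + random.gauss(0, 0.5) for _ in range(n)]
mean = sum(readings) / n

# The random noise averages toward zero; the bias survives intact
print(round(mean - true_value, 2))  # ~ 0.8
```

No amount of further averaging will move that residual 0.8 toward zero; only finding and removing the bias, or calibrating against a known standard, can.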
There is at least one degree F error bar being ignored.
The reality is that we can’t get 1/2 C warming measured. Period.
The other polite lie is that climate is a 30 year average of weather. It isn’t. There are 60 year weather cycles, so a 30 year average as “climatology” is just a very wide systematic error.
LdB:
I am replying to your post at August 19, 2013 at 8:25 am
http://wattsupwiththat.com/2013/08/17/monthly-averages-anomalies-and-uncertainties/#comment-1394540
Firstly, I remind that this thread is about assessment of measurement errors in determinations of global temperature. It is NOT a forum intended for you to demonstrate your ignorance.
Your post makes no attempt to discuss my point which demonstrated you don’t understand the difference between the Earth’s global temperature and the Earth’s effective radiative temperature. Instead, it digs your hole deeper by another kilometer.
However, your post does have a slight relevance to this thread – or to be precise – refutation of it has a relevance.
You are asserting that the Earth’s thermal balance determines the Earth’s global temperature. That would only be true if the Earth had achieved thermal equilibrium, and the Earth NEVER achieves thermal equilibrium.
The pertinence of your claim to this thread is as follows.
1. There has been a stasis in the Earth’s global temperature for more than 16 years.
2. Climastrologists had predicted the Earth’s global temperature would rise over the period.
3. Some climastrologists claim this failure of their prediction is because heat has gone into the deep ocean.
4. The claim is improbable because there is no indication of the ‘missing heat’ in the oceans and no indication of how it got to the deep ocean, but the claim is not impossible because there are possible ways the heat may have got there.
5. The possibility of this claim alone demonstrates that your assertion is wrong over the time scales being assessed by determinations of global temperature: i.e. the thermal balance of the Earth does not determine global temperature over time scales of less than centuries.
Hence, your assertion is (just) relevant to this thread because refutation of it (here listed as points 1 to 5) shows the importance of a clearly defined global temperature metric with sufficient accuracy to assess the change in global temperature over the time of the present century.
Richard
If we had regional 3-month moving averages of the kind we get with the various ENSO sections of the Pacific, we might have a much better understanding of “cycles”. That said, I hate the word “cycles”. I prefer weather pattern variations. It removes the connotation that weather cycles – or oscillations – can be mathematically cancelled out. We know from Bob’s work that La Nina does not cancel out El Nino effects. Or vice versa. Weather pattern variations do not cancel each other out either.
What we need is -again- a set of 3-month running averages on a regional basis (and I would use the broad regions in the US as defined under the ENSO temperature and precipitation boundaries arrived at through statistical means) to better understand these patterns. Basically, whatever is used should be the same type of averaged data sets used to report oceanic and atmospheric data.
E.M.Smith says: August 19, 2013 at 9:30 am
Thanks EM that is my take too.
Mosher mentions ‘We used kriging as has been suggested’ and that ‘nugget’ is used to quantify error instead of the more traditional methods.
You might be interested in my comment and what I dug up, and especially what J. W. Merks, Vice President, Quality Control Services, with the SGS Organization (a worldwide network of inspection companies that acts as referee between international trading partners), has to say about that technique.
Gail Combs:
Thank you very much for your comment at August 19, 2013 at 9:50 am which draws attention to your earlier comment at August 18, 2013 at 2:23 am
http://wattsupwiththat.com/2013/08/17/monthly-averages-anomalies-and-uncertainties/#comment-1393740
I, too, write to draw attention to your earlier comment which I have linked in this post.
It is directly pertinent to my point repeatedly made in this thread; viz.
There can be NO meaningful determination of the error in a datum, and there cannot be a calibration standard, for a metric which does not have an agreed definition.
My point is true for every metric including global temperature which has no agreed definition.
Richard
Richard,
I was trying to make the point that the word ‘ERROR’ has been redefined to mean something entirely different than what it means to a Quality Engineer. Determining and minimizing systematic ‘Error’ is a major job for QC Engineers so using ‘Nugget’ in a computer model to redefine ‘Error’ goes against the grain.
Here is the NEW definition again.
No need to get off your duff and do the hard and frustrating work of chasing down systematic errors; just use a computer model for infilling data never measured, and the ‘nugget’ for ‘estimating error’.
Jeff, your critical position was that I had mistaken weather noise for error and that I had represented state uncertainty as an error. You publicly abandoned the first position as untenable after finally looking at the Figures and Tables in my paper, so I fail to see how your “critical position hasn’t changed.”
right on, Gail. There’s nothing like actually struggling with an instrument, to give one a handle on the intractability of systematic error. The only way to deal with it is to find and eliminate the sources and/or to calibrate its impact against known standards.
Gail Combs:
Thank you for your post at August 19, 2013 at 10:34 am which says to me
Yes, I understood that and the definition used by “a Quality Engineer” was the definition used by scientists. As I said, I think your point and explanation are very important which is why I wrote to draw attention to it.
Indeed, although I am writing to acknowledge your post to me, I think your point to be so important that I will use this post as an excuse to again post a link to your point for the benefit of any who have not yet read it.
http://wattsupwiththat.com/2013/08/17/monthly-averages-anomalies-and-uncertainties/#comment-1393740
Again, thank you.
Richard
So what I’m getting from all this long long discussion is that the predictions of yearly temperature increases fall well within the first sigma of the ERROR of the maths being used.
A farmer once told me math couldn’t be used to make labor easier: “a squared-up pile of bullshit may look neater, but you still have to shovel it someplace.”
LdB.
The conversion of energy to and fro between KE and PE is a critical issue because in each case that conversion is a negative system response to the radiative characteristics of GHGs or any other forcing element other than more mass, more gravity or more insolation.
If a molecule absorbs more energy than those around it it will rise until the excess KE is cancelled by the conversion to PE.
If a molecule absorbs less energy than those around it it will fall until the KE deficit is restored by a conversion from PE.
The process of up and down convection automatically uses density differentials to sort molecules of differing thermal characteristics so that they alter position within the gravitational field which cancels the thermal effect by juggling between KE and PE.
So the atmosphere freely expands and contracts according to compositional variations, but it all happens within the vertical column with no effect on surface temperature except perhaps a regional or local redistribution of surface energy by a change in the global air circulation.
We see such redistributions of surface energy in the form of shifting climate zones and jet stream tracks but all the evidence is that sun and oceans cause similar effects that are magnitudes greater than the effect from changing GHG quantities.
@Gail:
Oh dear. A whole ‘nother layer of jiggery pokery…
@Stephen Fisher Wilde:
At the point where he was equating temperature (KE by definition) with total energy, I wrote him off. Wasting your breath….
Richard and Pat,
I am not a statistician and I am ‘Computer Challenged,’ though I have been on teams that used computers to do designed experiments, and I ran computer-assisted lab equipment.
What I am seeing is two sets of people. One set is engineers/applied scientists and the other set is theoretical and computer types.
To my mind the big question is: Is a temperature data point a single sample or can you, with statistical validity, group it with the temperature in the next city or hour or day?
The second question is IF you do such groupings what does that do to the error?
Here are three cities close together in NC (within 100 miles) with today’s temperature at 3:00 PM:
Mebane NC (76.1 °F) with Lat: 36.1° N
Durham NC (73.6 °F) with Lat: 36.0° N
Raleigh NC (72.2 °F) with Lat: 35.8° N
We know anomalies are used to deal with such differences, so let’s look at yesterday’s high, low, and mean.
This is what Weather Underground gives me for historical data for those cities (Not exactly what I actually wanted)
Burlington, NC (75.0 °F) (66.9 °F) (70 °F)
Raleigh-Durham Airport NC (80.1 °F) (68.0 °F) (74 °F)
Raleigh-Durham Airport NC (80.1 °F) (68.0 °F) (74 °F)
However note that today’s Mebane/Burlington is ~ 3°F warmer while yesterday it was ~ 5°F cooler. The computer and theoretical types will try to tell us they can perform magic and make the numbers the same. The engineers/applied scientists will start looking for WHY the numbers are different.
So back to the questions.
Is grouping these data points statistically valid and if you do group them does not the ERROR increase instead of decreasing?
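For what it’s worth, the spread among the three 3:00 PM readings above can be put in numbers. This doesn’t settle the validity question, but it shows that the scatter of a grouped figure is dominated by real station-to-station variation, not thermometer precision (the ~0.1 °F resolution is an assumption for illustration):

```python
import statistics

temps_f = {"Mebane": 76.1, "Durham": 73.6, "Raleigh": 72.2}

group_mean = statistics.mean(temps_f.values())
group_spread = statistics.stdev(temps_f.values())

print(round(group_mean, 2))    # 73.97
print(round(group_spread, 2))  # 1.98 -- far larger than instrument resolution
```

Whether that ~2 °F spread counts as “error” or as real spatial signal is exactly the question being asked.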
Gail Combs:
Your post addressed to me and Pat at August 19, 2013 at 12:26 pm
http://wattsupwiththat.com/2013/08/17/monthly-averages-anomalies-and-uncertainties/#comment-1394718
asks: “Is grouping these data points statistically valid and if you do group them does not the ERROR increase instead of decreasing?”
The grouping(s) may or may not be valid and the grouping(s) may increase or decrease the error depending on why you grouped them and how you grouped them.
For example, each datum you have presented provides temperature with known accuracy and precision for one place and time. If that information is what you wanted then it would be wrong to ‘group’ the datum with any of the others: that would provide a metric with less accuracy, less precision and greater error.
But you may want some average temperature for the region over the two days. The mean, median and mode are each a valid average. Do you want the mean, median or mode of the grouped data? Or do you want some other average (e.g. a weighted mean)? There are an infinite number of possible averages.
Which – if any – of these averages is an appropriate indicator of whatever you wanted an average to indicate? Choose the wrong one and you get an increase in error because the wrong definition of ‘average’ was applied. Choose the right one and you get an ‘average’ which is a better indicator of what you wanted than any one of the measurements: it has more accuracy, more precision, and less error.
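The point is easy to see with one toy set of readings (numbers invented): three different “valid averages”, three different answers.

```python
import statistics

data = [70.0, 70.0, 71.0, 73.0, 75.0, 79.0]   # illustrative temperature readings

print(statistics.mean(data))      # 73.0 -- arithmetic mean
print(statistics.median(data))    # 72.0 -- middle value
print(statistics.mode(data))      # 70.0 -- most common value
```

Which of the three you should report depends entirely on what the “average” is supposed to indicate, which is why the metric needs an agreed definition first.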
Hence, as I keep saying:
There can be NO meaningful determination of the error in a datum, and there cannot be a calibration standard, for a metric which does not have an agreed definition.
This point is true for every metric including global temperature which has no agreed definition.
Richard
Gail,
Your question goes to the project I’m working on, which is a look at the ARGO data.
The temperature of an object is supposed to be a measure of the average — there’s the “a” word — kinetic energy of the particles in that object.
For a homogeneous object (such as a well-mixed beaker) this is trivial: stick a thermometer in. Repeat if needed. Average.
For a heterogeneous object (such as the atmosphere) this is not trivial. The thermometer reading at each point gives a local measure of the average kinetic energy of the particles in the vicinity of that thermometer. So averaging temperatures in the numeric sense is really an incorrect procedure. Temperatures do not add to give a physically meaningful quantity. Instead, what we really want to do is to add up the total heat represented by each of the measurements, and present that as “total heat content.” I believe this is the approach taken by BEST (“Berkeley Earth Temperature Averaging Process”, p. 2)
The total heat content divided by an “average heat capacity” would then give an average temperature (actually, BEST divides by area, which is different from what I would do. But they have a lot more letters behind their names, so I don’t say this to criticize).
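The distinction above between averaging temperatures and averaging heat can be shown with two invented air parcels of unequal heat capacity:

```python
# Two parcels: same idea as the text above, numbers purely illustrative
parcels = [
    {"temp_c": 30.0, "capacity_j_per_c": 1.0e5},
    {"temp_c": 10.0, "capacity_j_per_c": 3.0e5},
]

# Naive route: average the two temperatures directly
naive = sum(p["temp_c"] for p in parcels) / len(parcels)

# Heat-content route: sum the heat, then divide by total capacity
total_heat = sum(p["temp_c"] * p["capacity_j_per_c"] for p in parcels)
total_cap = sum(p["capacity_j_per_c"] for p in parcels)
heat_weighted = total_heat / total_cap

print(naive)          # 20.0
print(heat_weighted)  # 15.0 -- the physically meaningful figure differs
```

The two answers disagree because the cooler parcel holds three times the heat per degree, so it should count three times as much.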
But now: how to deal with error in measurements? One approach — the one my project takes — is to divide up the atmosphere or ocean into grids, assume as a simplification that the grid elements are each homogeneous, and then use the measurements within the grids as *samples* of the temperature of each grid. Compute heat content for each sample, then average. The smaller the grid elements, the more accurate this procedure will be.
This gives a kind of — *kind of* — Monte Carlo estimate of the total heat content.
The real issue, and the one my project focuses on, is computing variances for this quantity.
This is a long-winded way to answer your question. If you group temps over a larger area, you get more samples — your error of the mean goes down. But if you group temps over a larger area, you get more error in the total heat estimate for that cell. It becomes less true that each sample represents the average kinetic energy of a homogeneous grid cell.
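A toy version of the grid procedure described above (grid layout, samples, and the uniform heat capacity are all invented for illustration):

```python
import statistics

# Temperature samples (deg C) that happened to fall inside each grid cell
grid_samples = {
    "cell_A": [14.8, 15.1, 15.0],
    "cell_B": [9.9, 10.4],
    "cell_C": [21.0],
}
heat_capacity = 4.0e6   # J per deg C per cell, assumed uniform (illustrative)

# Treat each cell as homogeneous: estimate its temperature by the sample mean
cell_means = {c: statistics.mean(v) for c, v in grid_samples.items()}

# Sum heat over the cells, then convert back to an average temperature
total_heat = sum(heat_capacity * t for t in cell_means.values())
avg_temp = total_heat / (heat_capacity * len(grid_samples))

print(round(avg_temp, 2))  # 15.37
```

Note how unevenly sampled the cells are (three samples, two, one): the single-sample cell contributes with no internal check at all, which is where the homogeneity assumption does the most work.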
FTA: “One important corollary of this is that the final error estimate for a given month’s anomaly cannot be smaller than the error in the climatology for that month.”
Yep. Statistical sleight of hand has been SOP during the whole fiasco. You cannot reduce error below the long term error using short term data. Most basic statistical tests assume independence of measurements at some level. When the data are not genuinely independent, those tests produce garbage.
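The independence point can be made concrete with an AR(1) toy series (parameters invented): the naive standard error of the mean divides by sqrt(n), but with autocorrelated data the effective sample size is much smaller.

```python
import random
import statistics

random.seed(1)

def ar1_series(n, rho, sigma):
    """Generate an autocorrelated AR(1) series: x_t = rho * x_{t-1} + noise."""
    x, out = 0.0, []
    for _ in range(n):
        x = rho * x + random.gauss(0, sigma)
        out.append(x)
    return out

n, rho = 5000, 0.9
series = ar1_series(n, rho, 1.0)

naive_sem = statistics.stdev(series) / n**0.5
n_eff = n * (1 - rho) / (1 + rho)      # effective sample size for AR(1)
honest_sem = statistics.stdev(series) / n_eff**0.5

print(naive_sem < honest_sem)  # True -- the naive figure is too optimistic
```

With rho = 0.9, the 5000 correlated points carry roughly the information of 260 independent ones, so the naive error bar is understated by a factor of about four.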
I just followed LdB’s link to Wikipedia’s fable about temperature where they say this:
This is utterly impossible. Countervailing heat flows between two bodies with the same temperature. Neither is aware of the temperature of the other. No intelligence is expressed in radiation as to the energy state of objects around radiating objects. They don’t swap tales of heat exchange. They just radiate, and in turn, receive radiation. The rate of exchange is entirely dependent on the relative energy levels of the radiating objects.
Jeff Cagle,
I get what you mean about kinetic energy. (I assume you have to take into account the humidity)
However, if I am looking for the true value of the temperature in a mixed batch:
#1. There IS a true value.
#2. I need to make sure the instruments I use to take the test data are calibrated
#3. The more data points I take the better chance I have to ‘approach’ the true value.
#4. The more instruments I use and the more observers I use, the larger the error.
However, if I recall from my very ancient stat courses, if I have a series of numbers — 71, 72, 71, 73, 70, 71 — then I can estimate the true value as 71.3, but I am not allowed to call it 71.3333.
If I am collecting those numbers over a long time period, I also have to take into account systematic error: the stirring motor having a bearing wearing out and causing the value 73, an observer with a parallax problem, instrument drift, and such.
However, once you make it a dynamic process, with chemicals being added and finished product being removed, the statistics start to get a lot more interesting (I worked for a chemical company doing continuous processing).
You can get a better estimate of the true value using statistics; however, it cannot make up numbers out of thin air, it can’t get rid of systematic error like UHI, and, more importantly, you are not allowed to go changing the historic data.
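The arithmetic above checks out; here are the six readings run through the same logic, with the rounding reflecting the whole-degree resolution of the readings:

```python
import statistics

readings = [71, 72, 71, 73, 70, 71]

mean = statistics.mean(readings)                       # 71.333...
sem = statistics.stdev(readings) / len(readings)**0.5  # standard error of the mean

# Quoting 71.3333 would claim precision the whole-degree data cannot support
print(round(mean, 1))  # 71.3
print(round(sem, 2))   # 0.42
```

And, as noted, this only tightens the random scatter; a worn bearing or parallax bias would shift all six readings together and leave no trace in the spread.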
I don’t know why poor S Mosher gets such a bad rap around here. He comes here, into the Lion’s Den if you will, and never patronises nor insults others.
I could be wrong here, but when he talks about removing seasonal trend I think he means accounting for cyclical changes in the global average through the year (wavelength = 6 months).
I’m not sure why Willis is spending time on this. IMHO it’s rather a moot point and I think the post is a distraction.
Steven Mosher
Why, when using Kriging, do you not integrate the spatial uncertainty into your estimates? Reading the methodology, it appears that you use an indirect method of solving the Kriging system, which may not give you the Kriging variance. This may be more efficient, but you lose one of the main benefits of the technique. The argument that clustering is taken care of in the estimates is correct, and there are other methods that can be used to do this, but one of the main reasons for using Kriging is that it gives you an estimate of uncertainty that is derived from the same statistics used to produce your estimate.
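The point that the kriging variance falls out of the same system as the estimate can be sketched with a minimal 1-D ordinary-kriging example. The variogram model, its parameters, and the data are all invented for illustration; this is not Berkeley Earth’s implementation.

```python
import numpy as np

def variogram(h, nugget=0.1, sill=1.0, rng=3.0):
    """Spherical variogram; parameters are invented for illustration."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h >= rng, sill, np.where(h == 0.0, 0.0, g))

# Known 1-D sample locations and values (invented)
x = np.array([0.0, 1.0, 2.5])
z = np.array([10.0, 12.0, 11.0])
x0 = 1.5                           # location to estimate

n = len(x)
# Ordinary kriging system: variogram matrix bordered by the unbiasedness row
A = np.ones((n + 1, n + 1))
A[:n, :n] = variogram(np.abs(x[:, None] - x[None, :]))
A[n, n] = 0.0
b = np.append(variogram(np.abs(x - x0)), 1.0)

sol = np.linalg.solve(A, b)
weights, mu = sol[:n], sol[n]

estimate = weights @ z
# The kriging variance comes from the SAME solved system -- the point above
kriging_variance = weights @ b[:n] + mu

print(round(estimate, 2), round(kriging_variance, 3))
```

An indirect solver that returns only the weights discards the Lagrange multiplier and the variogram vector needed for that last line, which is the loss being described.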
Correction to my post of August 19, 2013 at 4:13 pm: 1/2 wavelength = 6 months.
Gail Combs says:
August 19, 2013 at 12:26 pm
I think when you start looking at the actual measured data, the answer this question raises puts the surface record in a poor light.
For instance Merrill Field in Anchorage was mentioned above. NCDC summary of days lists 2 different stations for Merrill Field. Here’s a swim lane chart of the data (sorry for the format):
STATION_NUMBER 702735 702735
WBAN 99999 26409
LAT 61200 61217
LON -149833 -149855
ELEV +00420 +00418
NAME MERRILL FLD MERRILL FLD
CTRY US US
YR_1940 0 0
YR_1941 0 0
YR_1942 0 0
YR_1943 0 0
YR_1944 0 0
YR_1945 0 364
YR_1946 0 365
YR_1947 0 365
YR_1948 0 366
YR_1949 0 365
YR_1950 0 365
YR_1951 0 365
YR_1952 0 366
YR_1953 0 305
YR_1954 0 0
YR_1955 0 0
YR_1956 0 0
YR_1957 0 0
YR_1958 0 0
YR_1959 0 0
YR_1960 0 0
YR_1961 0 0
YR_1962 0 0
YR_1963 0 0
YR_1964 0 0
YR_1965 0 0
YR_1966 0 0
YR_1967 0 0
YR_1968 0 0
YR_1969 0 0
YR_1970 0 0
YR_1971 0 0
YR_1972 0 0
YR_1973 0 0
YR_1974 0 0
YR_1975 0 13
YR_1976 0 366
YR_1977 0 365
YR_1978 0 365
YR_1979 0 365
YR_1980 0 366
YR_1981 0 362
YR_1982 0 365
YR_1983 0 365
YR_1984 0 366
YR_1985 0 362
YR_1986 0 365
YR_1987 0 365
YR_1988 0 364
YR_1989 0 365
YR_1990 0 365
YR_1991 0 365
YR_1992 0 364
YR_1993 0 365
YR_1994 0 362
YR_1995 0 365
YR_1996 0 366
YR_1997 0 365
YR_1998 0 365
YR_1999 0 360
YR_2000 365 0
YR_2001 365 0
YR_2002 365 0
YR_2003 365 0
YR_2004 361 0
YR_2005 0 364
YR_2006 0 365
YR_2007 0 365
YR_2008 0 366
YR_2009 0 365
YR_2010 0 365
YR_2011 0 365
YR_2012 0 366
The count is the number of daily samples by year for each of the 2 stations.
Here are the stations in the Anchorage area and the sample counts for them.
STATION_NUMBER 702730 999999 702725 702725 702735 702735 997381 702720 999999 702720 702736 702700 702746
WBAN 26451 26451 26491 99999 99999 26409 99999 99999 26452 26401 99999 99999 26497
LAT 61175 61175 61179 61183 61200 61217 61233 61250 61250 61253 61267 61267 61416
LON -149993 -149993 -149961 -149967 -149833 -149855 -149883 -149800 -149800 -149794 -149650 -149650 -149507
ELEV +00402 +00402 +00402 +00220 +00420 +00418 +00030 +00590 +00631 +00649 +01150 +01150 +00293
YR_1940 0 0 0 0 0 0 0 0 0 0 0 0 0
YR_1941 0 0 0 0 0 0 0 294 0 0 0 0 0
YR_1942 0 0 0 0 0 0 0 365 0 0 0 0 0
YR_1943 0 0 0 0 0 0 0 365 0 0 0 0 0
YR_1944 0 0 0 0 0 0 0 366 0 0 0 0 0
YR_1945 0 0 0 0 0 364 0 365 0 0 0 0 0
YR_1946 0 0 0 0 0 365 0 365 0 0 0 0 0
YR_1947 0 0 0 0 0 365 0 365 0 0 0 0 0
YR_1948 0 0 0 0 0 366 0 366 0 0 0 0 0
YR_1949 0 0 0 0 0 365 0 365 0 0 0 0 0
YR_1950 0 0 0 0 0 365 0 365 0 0 0 0 0
YR_1951 0 0 0 0 0 365 0 365 0 0 0 0 0
YR_1952 0 0 0 0 0 366 0 366 0 0 0 0 0
YR_1953 0 60 0 0 0 305 0 365 70 0 0 0 0
YR_1954 0 365 0 0 0 0 0 365 365 0 0 0 0
YR_1955 0 365 0 0 0 0 0 365 365 0 0 0 0
YR_1956 0 366 0 0 0 0 0 366 104 0 0 0 0
YR_1957 0 365 0 0 0 0 0 365 0 0 0 0 0
YR_1958 0 365 0 0 0 0 0 365 0 0 0 0 0
YR_1959 0 365 0 0 0 0 0 365 0 0 0 0 0
YR_1960 0 366 0 0 0 0 0 366 0 0 0 0 0
YR_1961 0 365 0 0 0 0 0 365 0 0 0 0 0
YR_1962 0 365 0 0 0 0 0 365 0 0 0 0 0
YR_1963 0 365 0 0 0 0 0 365 0 0 0 0 0
YR_1964 0 364 0 0 0 0 0 366 0 0 0 0 0
YR_1965 0 365 0 0 0 0 0 365 0 0 0 0 0
YR_1966 0 365 0 0 0 0 0 365 0 0 0 0 0
YR_1967 0 365 0 0 0 0 0 365 0 0 0 0 0
YR_1968 0 366 0 0 0 0 0 366 0 0 0 0 0
YR_1969 0 365 0 0 0 0 0 365 0 0 0 0 0
YR_1970 0 365 0 0 0 0 0 365 0 0 0 0 0
YR_1971 0 365 0 0 0 0 0 0 0 0 0 0 0
YR_1972 0 366 0 0 0 0 0 0 0 0 0 0 0
YR_1973 364 0 0 0 0 0 0 364 0 0 0 30 0
YR_1974 365 0 0 0 0 0 0 365 0 0 0 355 0
YR_1975 365 0 0 0 0 13 0 365 0 0 0 355 0
YR_1976 366 0 0 0 0 366 0 366 0 0 19 198 0
YR_1977 365 0 0 0 0 365 0 365 0 0 232 257 0
YR_1978 365 0 0 0 0 365 0 365 0 0 277 281 0
YR_1979 365 0 0 0 0 365 0 365 0 0 325 325 0
YR_1980 366 0 0 0 0 366 0 366 0 0 358 358 0
YR_1981 365 0 0 0 0 362 0 365 0 0 336 336 0
YR_1982 365 0 0 0 0 365 0 365 0 0 359 359 0
YR_1983 365 0 0 0 0 365 0 365 0 0 194 357 0
YR_1984 366 0 0 0 0 366 0 366 0 0 0 366 0
YR_1985 365 0 0 0 0 362 0 365 0 0 0 365 0
YR_1986 365 0 0 0 0 365 0 365 0 0 0 365 0
YR_1987 365 0 0 0 0 365 0 365 0 0 0 365 0
YR_1988 366 0 0 0 0 364 0 366 0 0 0 366 0
YR_1989 365 0 0 0 0 365 0 365 0 0 0 365 0
YR_1990 365 0 0 0 0 365 0 365 0 0 0 365 0
YR_1991 365 0 0 0 0 365 0 365 0 0 0 365 0
YR_1992 366 0 0 0 0 364 0 366 0 0 0 366 0
YR_1993 365 0 0 8 0 365 0 363 0 0 0 355 0
YR_1994 365 0 0 332 0 362 0 365 0 0 0 269 0
YR_1995 365 0 0 313 0 365 0 365 0 0 0 162 0
YR_1996 366 0 0 323 0 366 0 359 0 0 0 0 0
YR_1997 365 0 0 316 0 365 0 365 0 0 0 0 0
YR_1998 365 0 0 339 0 365 0 365 0 0 0 0 0
YR_1999 365 0 0 363 0 360 0 363 0 0 0 0 0
YR_2000 366 0 0 366 365 0 0 366 0 0 0 0 0
YR_2001 365 0 0 365 365 0 0 365 0 0 0 0 0
YR_2002 365 0 362 0 365 0 0 362 0 0 0 0 0
YR_2003 365 0 365 0 365 0 0 365 0 0 0 0 0
YR_2004 366 0 358 0 361 0 0 366 0 0 0 0 0
YR_2005 365 0 365 0 0 364 184 365 0 0 0 0 0
YR_2006 365 0 365 0 0 365 356 0 0 364 0 0 365
YR_2007 365 0 365 0 0 365 365 0 0 365 0 0 362
YR_2008 366 0 366 0 0 366 366 0 0 366 0 0 362
YR_2009 365 0 365 0 0 365 347 0 0 365 0 0 352
YR_2010 365 0 365 0 0 365 355 0 0 365 0 0 359
YR_2011 365 0 365 0 0 365 365 0 0 365 0 0 360
YR_2012 366 0 366 0 0 366 364 0 0 366 0 0 336
Long-term systematic bias due to UHI and land-use changes, rather than sampling variability of “anomalies,” is what afflicts BEST’s results the most. “Kriging” spreads that bias spatially, resulting in the highest trends, and “scalpeling” produces the lowest low-frequency spectral content in their manufactured time series.