
By Walter Dnes – Edited by Just The Facts
Investopedia defines “Leading Indicator” thusly…
A measurable economic factor that changes before the economy starts to follow a particular pattern or trend. Leading indicators are used to predict changes in the economy, but are not always accurate.
Economics is not the only area where a leading indicator is nice to have. A leading indicator that could predict in February whether this calendar year’s temperature anomaly will be warmer or colder than the previous calendar year’s anomaly would also be nice to have. I believe that I’ve stumbled across exactly that. Using data from 1979 onwards, the rule goes like so…
- If this year’s January anomaly is warmer than last year’s January anomaly, then this year’s annual anomaly will likely be warmer than last year’s annual anomaly.
- If this year’s January anomaly is colder than last year’s January anomaly, then this year’s annual anomaly will likely be colder than last year’s annual anomaly.
This is a “qualitative” forecast. It doesn’t forecast a number, but rather a boundary, i.e. greater than or less than a specific number. I don’t have an explanation for why it works. Think of it as the climatological equivalent of “technical analysis”; i.e. event X is usually followed by event Y, leaving it to others to figure out the underlying “fundamentals”, i.e. the physical theory. I’ve named it the “January Leading Indicator”, abbreviated as “JLI” (which some people will probably pronounce as “July”). The JLI has been tested on the following six data sets: GISS, HadCRUT3, HadCRUT4, UAH5.6, RSS, and NOAA.
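The two-branch rule can be sketched as a small Python function (a hypothetical helper, not the author’s actual code; any consistent anomaly units work):

```python
def jli_forecast(jan_current, jan_previous):
    """January Leading Indicator: qualitative call on the annual anomaly.

    Returns "warmer" if this year's annual anomaly is expected to beat
    last year's, "colder" if it is expected to come in below it.
    """
    if jan_current > jan_previous:
        return "warmer"
    if jan_current < jan_previous:
        return "colder"
    return "no call"  # exact tie: the rule makes no prediction
```

For example, with the GISS integer anomalies used below, January 1998 (60) versus January 1997 (31) yields a “warmer” call for 1998.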
In this post I will reference this zipped GISS monthly anomaly text file and this spreadsheet. Note that one of the tabs in the spreadsheet is labelled “documentation”. Please read that tab first if you download the spreadsheet and have any questions about it.
The claim of the JLI would arouse skepticism anywhere, and doubly so in a forum full of skeptics. So let’s first look at one data set, and count the hits and misses manually, to verify the algorithm. The GISS text file has to be reformatted before importing into a spreadsheet, but it is optimal for direct viewing by humans. The data contained within the GISS text file is abstracted below.
Note: GISS numbers are the temperature anomaly, multiplied by 100, and shown as integers. Divide by 100 to get the actual anomaly. E.g. “43” represents an anomaly of 43/100=0.43 Celsius degrees. “7” represents an anomaly of 7/100=0.07 Celsius degrees.
- The first 2 columns on the left of the GISS text file are year and January anomaly * 100.
- The column after “Dec” (labelled “J-D”) is the January-December anomaly * 100.
The verification process is as follows:
- Count all the years where the current year’s January anomaly is warmer than the previous year’s January anomaly, adding a 1 in the Counter column for each such year.
- Of those years, count all where the annual anomaly is also warmer than the previous year’s annual anomaly, adding a 1 in the Hit column for each.
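These two counting steps can be sketched in Python (a hypothetical scorer over year-indexed anomaly dictionaries, not the author’s spreadsheet logic):

```python
def score_warmer(jan, ann):
    """Score the JLI in the 'warmer' direction.

    jan, ann: dicts mapping year -> January / annual anomaly.
    A year is a candidate (Counter = 1) if its January beat the previous
    January; it is a hit (Hit = 1) if its annual anomaly also beat the
    previous year's annual anomaly.
    """
    counter = hits = 0
    for year in sorted(jan):
        if year - 1 not in jan or year - 1 not in ann:
            continue
        if jan[year] > jan[year - 1]:
            counter += 1
            if ann[year] > ann[year - 1]:
                hits += 1
    return counter, hits
```

Run over the full GISS columns this should reproduce the 20 candidates and 17 hits tallied below; the “colder” direction simply flips both comparisons.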
| Jan(current) > Jan(previous) | J-D(current) > J-D(previous) | ||||
| Year | Counter | Compare | Hit | Compare | Comment |
| 1980 | 1 | 25 > 10 | 1 | 23 > 12 | |
| 1981 | 1 | 52 > 25 | 1 | 28 > 23 | |
| 1983 | 1 | 49 > 4 | 1 | 27 > 9 | |
| 1986 | 1 | 25 > 19 | 1 | 15 > 8 | |
| 1987 | 1 | 30 > 25 | 1 | 29 > 15 | |
| 1988 | 1 | 53 > 30 | 1 | 35 > 29 | |
| 1990 | 1 | 35 > 11 | 1 | 39 > 24 | |
| 1991 | 1 | 38 > 35 | 0 | 38 < 39 | Fail |
| 1992 | 1 | 42 > 38 | 0 | 19 < 38 | Fail |
| 1995 | 1 | 49 > 27 | 1 | 43 > 29 | |
| 1997 | 1 | 31 > 25 | 1 | 46 > 33 | |
| 1998 | 1 | 60 > 31 | 1 | 62 > 46 | |
| 2001 | 1 | 42 > 23 | 1 | 53 > 41 | |
| 2002 | 1 | 72 > 42 | 1 | 62 > 53 | |
| 2003 | 1 | 73 > 72 | 0 | 61 < 62 | Fail |
| 2005 | 1 | 69 > 57 | 1 | 66 > 52 | |
| 2007 | 1 | 94 > 53 | 1 | 63 > 60 | |
| 2009 | 1 | 57 > 23 | 1 | 60 > 49 | |
| 2010 | 1 | 66 > 57 | 1 | 67 > 60 | |
| 2013 | 1 | 63 > 39 | 1 | 61 > 58 | |
| Predicted 20 > previous year | Actual 17 > previous year | ||||
Of 20 candidates flagged (Jan(current) > Jan(previous)), 17 are correct (i.e. J-D(current) > J-D(previous)). That’s 85% accuracy for the qualitative annual anomaly forecast on the GISS data set where the current January is warmer than the previous January.
And now for the years where January is colder than the previous January. The procedure is virtually identical, except that we count those years where the annual anomaly is colder than the previous year’s annual anomaly and add a 1 in the Hit column for each.
| Jan(current) < Jan(previous) | J-D(current) < J-D(previous) | ||||
| Year | Counter | Compare | Hit | Compare | Comment |
| 1982 | 1 | 4 < 52 | 1 | 9 < 28 | |
| 1984 | 1 | 26 < 49 | 1 | 12 < 27 | |
| 1985 | 1 | 19 < 26 | 1 | 8 < 12 | |
| 1989 | 1 | 11 < 53 | 1 | 24 < 35 | |
| 1993 | 1 | 34 < 42 | 0 | 21 > 19 | Fail |
| 1994 | 1 | 27 < 34 | 0 | 29 > 21 | Fail |
| 1996 | 1 | 25 < 49 | 1 | 33 < 43 | |
| 1999 | 1 | 48 < 60 | 1 | 41 < 62 | |
| 2000 | 1 | 23 < 48 | 1 | 41 < 41 | 0.406 < 0.407 |
| 2004 | 1 | 57 < 73 | 1 | 52 < 61 | |
| 2006 | 1 | 53 < 69 | 1 | 60 < 66 | |
| 2008 | 1 | 23 < 94 | 1 | 49 < 63 | |
| 2011 | 1 | 46 < 66 | 1 | 55 < 67 | |
| 2012 | 1 | 39 < 46 | 0 | 58 > 55 | Fail |
| Predicted 14 < previous year | Actual 11 < previous year | ||||
Of 14 candidates flagged (Jan(current) < Jan(previous)), 11 are correct (i.e. J-D(current) < J-D(previous)). That’s 79% accuracy for the qualitative annual anomaly forecast on the GISS data set where the current January is colder than the previous January. Note that the 1999 annual anomaly is 0.407, and the 2000 annual anomaly is 0.406, when calculated to 3 decimal places. The GISS text file only shows 2 (implied) decimal places.
The scatter graph at the head of this article compares the January and annual GISS anomalies for visual reference.
Now for a verification comparison amongst the various data sets, from the spreadsheet referenced above. First, all years during the satellite era which were forecast to be warmer than the previous year:
| Data set | Had3 | Had4 | GISS | UAH5.6 | RSS | NOAA |
| Ann > previous | 16 | 15 | 17 | 18 | 18 | 15 |
| Jan > previous | 19 | 18 | 20 | 21 | 20 | 18 |
| Accuracy | 0.84 | 0.83 | 0.85 | 0.86 | 0.90 | 0.83 |
Next, all years during the satellite era which were forecast to be colder than the previous year:
| Data set | Had3 | Had4 | GISS | UAH5.6 | RSS | NOAA |
| Ann < previous | 11 | 11 | 11 | 11 | 11 | 11 |
| Jan < previous | 15 | 16 | 14 | 13 | 14 | 16 |
| Accuracy | 0.73 | 0.69 | 0.79 | 0.85 | 0.79 | 0.69 |
The following are scatter graphs comparing the January and annual anomalies for the other five data sets:
HadCRUT3

HadCRUT4

UAH 5.6

RSS

NOAA

The forecast methodology had problems during the Pinatubo years, 1991 and 1992. And 1993 also had problems, because the algorithm compares with the previous year, in this case Pinatubo-influenced 1992. The breakdowns were…
- For 1991, all 6 data sets were forecast to be above their 1990 values. The 2 satellite data sets (UAH and RSS) were above their 1990 values, but the 4 surface-based data sets were below their 1990 values.
- For 1992, the 4 surface-based data sets (HadCRUT3, HadCRUT4, GISS, and NCDC/NOAA) were forecast to be above their 1991 values, but were below.
- The 1993 forecast was a total bust. All 6 data sets were forecast to be below their 1992 values, but all finished the year above.
In summary, during the 3 years 1991/1992/1993, there were 6*3=18 over/under forecasts, of which 14 were wrong. In plain English, if a Pinatubo-like volcano dumps a lot of sulfur dioxide (SO2) into the stratosphere, the JLI will not be usable for the next 2 or 3 years, i.e.:
“The most significant climate impacts from volcanic injections into the stratosphere come from the conversion of sulfur dioxide to sulfuric acid, which condenses rapidly in the stratosphere to form fine sulfate aerosols. The aerosols increase the reflection of radiation from the Sun back into space, cooling the Earth’s lower atmosphere or troposphere. Several eruptions during the past century have caused a decline in the average temperature at the Earth’s surface of up to half a degree (Fahrenheit scale) for periods of one to three years. The climactic eruption of Mount Pinatubo on June 15, 1991, was one of the largest eruptions of the twentieth century and injected a 20-million ton (metric scale) sulfur dioxide cloud into the stratosphere at an altitude of more than 20 miles. The Pinatubo cloud was the largest sulfur dioxide cloud ever observed in the stratosphere since the beginning of such observations by satellites in 1978. It caused what is believed to be the largest aerosol disturbance of the stratosphere in the twentieth century, though probably smaller than the disturbances from eruptions of Krakatau in 1883 and Tambora in 1815. Consequently, it was a standout in its climate impact and cooled the Earth’s surface for three years following the eruption, by as much as 1.3 degrees at the height of the impact.” USGS
For comparison, here are the scores with the Pinatubo-affected years (1991/1992/1993) removed. First, where the years were forecast to be warmer than the previous year:
| Data set | Had3 | Had4 | GISS | UAH5.6 | RSS | NOAA |
| Ann > previous | 16 | 15 | 17 | 17 | 17 | 15 |
| Jan > previous | 17 | 16 | 18 | 20 | 19 | 16 |
| Accuracy | 0.94 | 0.94 | 0.94 | 0.85 | 0.89 | 0.94 |
And for years where the anomaly was forecast to be below the previous year’s:
| Data set | Had3 | Had4 | GISS | UAH5.6 | RSS | NOAA |
| Ann < previous | 11 | 11 | 11 | 10 | 10 | 11 |
| Jan < previous | 14 | 15 | 13 | 11 | 12 | 15 |
| Accuracy | 0.79 | 0.73 | 0.85 | 0.91 | 0.83 | 0.73 |
Given the existence of January and annual data values, it’s possible to do linear regressions and even quantitative forecasts for the current calendar year’s annual anomaly. With the slope and y-intercept available, one merely has to wait for the January data to arrive in February and run the basic “y = mx + b” equation. The correlation is approximately 0.79 for the surface data sets, and 0.87 for the satellite data sets, after excluding the Pinatubo-affected years (1991 and 1992).
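The quantitative version amounts to an ordinary least-squares fit of annual anomaly against January anomaly; a minimal sketch (variable names are hypothetical):

```python
def fit_line(x, y):
    """Ordinary least-squares fit of y = m*x + b."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    m = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
    b = mean_y - m * mean_x
    return m, b

# Once January's anomaly arrives in February:
# m, b = fit_line(january_history, annual_history)  # Pinatubo years excluded
# forecast = m * new_january + b
```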
There will probably be a follow-up article a month from now, when all the January data is in, and forecasts can be made using the JLI. Note that data downloaded in February will be used. NOAA and GISS use a missing-data algorithm which results in minor changes for most monthly anomalies, every month, all the way back to day 1, i.e. January 1880. The monthly changes are generally small, but in borderline cases, the changes may affect rankings and over/under comparisons.
The discovery of the JLI was a fluke based on a hunch. One can only wonder what other connections could be discovered with serious “data-mining” efforts.
In the contiguous US, January temperatures have been declining for 15 years at -1.49F/decade, as have winter temperatures at -1.57F/decade and annual temperatures at -0.16F/decade. The reason the annual trend is also declining is that 7 out of 12 months are declining. Fall temperatures are declining, and spring as well, but only April and May. Only summer temperatures are still increasing. So with North America cooling, it is probable that if January is cold, the rest of the year is cold as well, since most months are cooling due to Northern Hemisphere oceans, which were temperature-flat for 10 years and have been cooling year-round since about 2005.
Werner Brozek says:
February 1, 2014 at 5:32 pm
We have all heard of numerous adjustments, but I think that this is one time that the adjustments are not relevant. After all, we are not interested in the rate of warming over the last several decades but how the January anomalies predict annual anomalies. And any adjustments that are made would affect the January anomaly and the annual anomaly more or less equally.
_____________________________________________________________________
Only if the adjustment system isn’t being mucked with on a whim to fit the curves to a predetermined end state.
In this case, verify before using, let alone trusting.
Verity Jones says:
February 2, 2014 at 5:12 am
I suspect this JLI works because January is the month in which we see how much the Northern Hemisphere has cooled from the previous summer warmth and overall the NH has tended to warm more than the Southern Hemisphere, having therefore a greater effect on the global average anomaly.
Verity check out this data posted above by Nick Stokes:
http://www.moyhu.org.s3.amazonaws.com/misc/janlead.txt
January is near the worst, not the best, correlated with the year average.
But it has the advantage of beginning of the year.
@ur momisuglyphlogiston
Now that I’ve looked at Nick Stokes’ data, and thought about it, the correlations are not that surprising. The correlations are highest in June-Oct. The seasonal year ends in Nov. December’s data will usually be carried into the following year (winter = DJF), so December is the first month, with the expected lowest correlation.
I favor “chicken droppings”, myself. The whiter they get in January, the cooler the following year will be (except in years when you’ve overdone it with the grit).
phlogiston says:
> February 2, 2014 at 7:49 am
> Verity check out this data posted above by Nick Stokes:
> http://www.moyhu.org.s3.amazonaws.com/misc/janlead.txt
I believe that the difference between these numbers and Willis Eschenbach’s numbers is that…
* Nick’s numbers are Month N, versus the same year’s January through December
* Willis’ numbers are Month N, versus the 12 month series, Month N this year through Month (N – 1) next year.
Re: Nick’s data…
> January is near the worst, not the best, correlated
> with the year average. But it has the advantage of
> beginning of the year.
Using December as a leading indicator for the current calendar year is pointless. Once the December data is in, you already have the current calendar year’s data. Similarly, using July data means that you’re only “predicting” 5 months forward versus January, which “predicts” 11 months forward. The fact that shorter range predictions have higher accuracy is not a shock.
Is it proper to use a single straight-line best fit rather than a segmented or curved one?
Taking into account the valid points raised here about statistical bias etc., it seems to me that if this simple observation holds good then we do have a short/medium-term leading indicator. Which is more than most “models” can manage. It ought to be interesting to see how this pans out.
walterdnes says:
February 2, 2014 at 12:11 am
You seem to not understand what I am saying, likely my fault. I am saying that the January results are EXPECTED when the data has the shape and form of the temperature data. You say that you “don’t have a physical explanation” … why would you need a “physical explanation” for a random occurrence?
That’s like putting up a post on WUWT saying “WOW, I just grabbed a coin, flipped three heads in a row, and I have no physical explanation for why that is.” …
It has a “physical explanation”. It is an EXPECTED RANDOM OCCURRENCE. Not only is the mean value expected, but the range of the values is also expected. Therefore, the high January “versus the lower numbers for the other months” is also an EXPECTED RANDOM OCCURRENCE. My Monte Carlo analysis finds a variation in the monthly results which almost exactly matches the range of values we find in the GISS LOTI data, including the increased range when you move from the full dataset to the shorter satellite era data.
Next, you say “it could be entirely due to chance. Maybe it is” … but you are still going to use it for forecasts anyways.
Hey, nobody can stop you. But nobody’s going to be impressed if your “forecasts” come true. You see, we expect such a forecast to come true, it’s inherent in the data …
That’s fine … but then you left out the full analysis of the individual datasets.
Walter, you seem like one of the good guys, but you are chasing a total chimera here. The result that seems to impress and surprise you so much occurs in “red noise” pseudodata … you’re looking at expected results, my friend, there’s nothing there.
Truly, my friend, you need to take up the study of the Monte Carlo analysis, along with ARIMA datasets. Highly autocorrelated datasets like the temperature data have funny properties, and you’ve stumbled across one of them.
As a result, your observation about January being a “leading indicator” is no more surprising than finding a large number of warm years in the most recent decade of the temperature record of a planet which has been gradually warming for a few hundred years … which is to say, not surprising in the slightest.
w.
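Willis’s red-noise argument can be illustrated with a quick AR(1) Monte Carlo (a sketch with assumed parameters, not his actual analysis): generate autocorrelated monthly pseudodata and score the January rule on it.

```python
import random

def ar1_series(n, phi=0.9, sigma=0.1):
    """AR(1) 'red noise' pseudodata: x[t] = phi * x[t-1] + Gaussian noise."""
    x = [0.0]
    for _ in range(n - 1):
        x.append(phi * x[-1] + random.gauss(0.0, sigma))
    return x

def jli_hit_rate(monthly):
    """Fraction of year-pairs where the January-vs-January sign matches
    the annual-vs-annual sign, on a flat list of monthly values."""
    years = [monthly[i:i + 12] for i in range(0, len(monthly) - 11, 12)]
    candidates = hits = 0
    for prev, cur in zip(years, years[1:]):
        if cur[0] == prev[0]:
            continue  # tie: no call
        candidates += 1
        if (cur[0] > prev[0]) == (sum(cur) / 12 > sum(prev) / 12):
            hits += 1
    return hits / candidates if candidates else 0.0

random.seed(1)
rates = [jli_hit_rate(ar1_series(35 * 12)) for _ in range(200)]
mean_rate = sum(rates) / len(rates)
# With strong month-to-month autocorrelation, pure noise scores well
# above the 50% a fair coin would give.
```

The point of the exercise is that persistence alone inflates the hit rate, which is the null hypothesis Willis says the JLI has to beat.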
Greg Goodman says:
February 2, 2014 at 12:10 am
Greg, while in other cases you might be right, in this case he’s arbitrarily removed about three-quarters of the GISS LOTI data.
Are you seriously arguing that there is a “good, accepted, physical reason” why three quarters of the data should have a different behavior??? Or the same for the data around the time of Pinatubo?
I ask in part because Walter himself has said that he does not have “a physical explanation” for the claimed correlation. Given that there is no “good, accepted physical reason” for the putative phenomenon itself … then how can we possibly know what might be a “good, accepted physical reason” that some data should be excluded?
Best regards,
w.
As usual Mr Eschenbach uses maths to show something, in this case that he can reproduce the same results as the OP.
When his results are nowhere near the OP’s, unless of course he can’t see the difference between
[You will find it better to use “pre” within angle brackets to format tables, but don’t bother to copy-and-paste all 10 digits of a std dvt when the data is only 2 digits. 8<0 Mod]
Greg Goodman says:
February 2, 2014 at 12:35 am
Greg, the null hypothesis is that the behavior pointed out by Walter is expected. That is to say, it is an inherent feature of the nature of the dataset itself. I have shown that indeed, this is the case. We find the same thing in “red noise”. So Walter has not falsified the null hypothesis …
Does that make it “useless”? Well, that depends on how you define “useful”. For example, if you want to predict tomorrow’s weather, your best guess is today’s weather … is that a useful prediction?
Generally, in climate science, this is not seen as “useful”. Instead, it just forms the baseline that any practical forecasting system has to beat. If you can’t do better than saying tomorrow will be like today, your forecast sucks … but that doesn’t make “tomorrow will be like today” into a useful forecast.
If it were “useful”, as you claim, then there would be weather forecasters out there every day forecasting that tomorrow will be like today … funny, I don’t find them. Nor do I find people forecasting the year based on January, and for the same reason.
That’s not a forecast. That’s the predictability that is inherent in the data—so that’s not a prediction of any kind.
Instead, that’s merely the baseline that a real forecast has to beat in order to be of any value.
w.
A C Osborn says:
February 2, 2014 at 10:07 am
I have no idea why you are comparing those results, A.C. The numbers in the right column are from my analysis of the satellite era GISS LOTI data … and the numbers on the left are from my analysis of the full GISS LOTI data. Neither of them is “the OP’s” results.
It appears you think you are comparing my results to Walter’s, and getting all snippy about the fact that they don’t match … bad news. You’re comparing two things that we DON’T EXPECT TO BE THE SAME. You are comparing MY analysis of the full data set with MY analysis of a quarter of it, and you haven’t compared either one to the OP’s results.
Sorry, A.C., but your comment is a colossal fail. You don’t even seem to notice what you are comparing … but of course, in your inimitable way, you attempt to use your totally bogus results to get all nasty about me and claim that I don’t know what I’m doing.
Nice try …
w.
Willis Eschenbach says:
February 2, 2014 at 9:48 am
“As a result, your observation about January being a “leading indicator” is no more surprising than finding a large number of warm years in the most recent decade of the temperature record of a planet which has been gradually warming for a few hundred years … which is to say, not surprising in the slightest.”
W, you missed the point. Not surprised you got lost in the weeds of your own analysis. Walterdnes does have something interesting. It isn’t earth shattering, it might not hold true, but it seems to have some merit.
See Hoser says:
February 2, 2014 at 12:53 am
Walterdnes, strap on a pair, and don’t let W walk all over you with too many paragraphs, and too much so-called analysis.
As usual Mr Eschenbach you are having Reading Difficulties, just like the last time I communicated with you.
Let me correct that for you: the right-hand column comes from here
walterdnes says:
February 2, 2014 at 12:11 am
Willis Eschenbach says:
> February 1, 2014 at 10:49 pm
> If, on the other hand, we use only the satellite era data we get
>
> Jan, 83%
> Feb, 59%
> Mar, 62%
> Apr, 59%
> May, 50%
> Jun, 62%
> Jul, 44%
> Aug, 50%
> Sep, 44%
> Oct, 47%
> Nov, 71%
> Dec, 68%
> AVERAGE, 58%
> 95% CI 35% to 81%
As we say in the UK, “He should have gone to Specsavers.”
This time I do apologise, you are correct, they are your numbers.
But they do show exactly what the OP is saying: that for that period January stands out like a sore thumb.
Your 0.83 compares very well with his overall results, but with volcanic activity taken out he gets:
For comparison, here are the scores with the Pinatubo-affected years (1991/1992/1993) removed. First, where the years were forecast to be warmer than the previous year
And for years where the anomaly was forecast to be below the previous year
Which are even more impressive when compared to the UK Met Office forecasts, which are never correct and have been shown to be worse than a coin toss or a dart throw.
So how do you explain those results compared to your Monte Carlo analysis?
I’m not sure if this has been mentioned, as I didn’t read all of the comments yet, but there is likely a much easier way to find this trend. That is, solely looking at El Nino/La Nina.
[IMG]http://i57.tinypic.com/w1to9f.jpg[/IMG]
The first column is your January “Warmer/Colder” prediction, with your misses highlighted in yellow. The yearly data that follows is color-coded to your prediction to make for easy comparison with mine. The data I used is the Oceanic Niño Index, available here:
http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/ensoyears.shtml
The white calculation column at the end is the Oct/Nov/Dec change between the previous 2 years (eg, 1981 OND – 1980 OND = my 1982 Prediction). My prediction is based solely off that year over year change, and can be seen in the color-coded year column directly before it.
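The year-over-year OND change described above can be sketched as follows (a hypothetical helper over year-indexed OND-average ONI values; the example numbers are made up, not actual ONI data):

```python
def ond_prediction(ond, year):
    """Predict a year's direction from the change between the two prior
    years' Oct/Nov/Dec ONI averages, e.g. 1981 OND - 1980 OND -> 1982.

    ond: dict mapping year -> average ONI over Oct/Nov/Dec of that year.
    """
    change = ond[year - 1] - ond[year - 2]
    if change > 0:
        return "warmer"
    if change < 0:
        return "colder"
    return "no call"  # a perfect 0 trend, as with the 1985 case below
```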
1985 was a toss-up for my prediction method, as there was a perfect 0 trend between the prior two years. Otherwise, 1994 and 2012 are the only years where my method varies from yours – and my method predicted those years correctly, yours did not.
I continue your trend of missing the Pinatubo years between 1991-1993 (I came to the same outcome you did) and otherwise only miss on 2003, a year you similarly missed. (wondering if it is an extremely high volcanic activity year)
So ignoring the Pinatubo years of 1991-1993 and removing my non-prediction for 1985, I was able to correctly predict 27 of 28 years. I can also already make a prediction for 2014 – Colder
Now, my 96% looks fantastic at first glance, but (like yours) maybe someone would like to check these: what would have been my 1952-1982 predictions
[IMG]http://i59.tinypic.com/2vwcxe1.jpg[/IMG]
Unfortunately, I can already see I missed on 1981 (and ironically, you were correct)
And all of that said, maybe I failed to use the most predictive of trend patterns from the data I referenced. It is also possible my findings are a complete fluke. Oh, but if anyone uses my quick initial thoughts for an actual predictive model, please feel free to give me a nod 😉
One would expect that, in a variable with twelve equal components, knowledge of the direction of change of the first component would be correlated with the change of the whole, because Jan is not statistically independent of the entire year. And if that’s the case, knowledge of two of the components should increase accuracy, and so on.
It is my foggy understanding that generally this sort of thing is considered an error in statistical reasoning. Independent variables are used to build predictive models because variables where the predictive variable and the outcome are not independent tend to confound the analysis.
But I defer to the statisticians in the audience.
I did not read your entire post or the comments, but I suspect the “homogenization” adjustments are the source of this correlation. If bad actors wanted to show an erroneous up trend, then they would push the oldest January temperatures down, along with the entire oldest year. They would push more recent temperatures up, along with the entire year. You get the picture. Perhaps your analysis is more evidence that the fix is in on our temperature record.
I haven’t done it … and probably won’t do it … but I suspect that if you line up the Januarys with ENSO data, you are going to see a close relationship, given that El Ninos and La Ninas usually take shape right around January.
ALSO … I think you should omit 1992-1995, as those were the years when a volcano was influencing climate … thus, it is no surprise that your hypothesis fails in those years.
Latitude says:
February 1, 2014 at 4:49 pm
Note: GISS numbers are…….fake
That was my first thought too, looking at the GISS data. I wonder if the same trend is in the unadjusted data?
However, the same can be seen for the satellites. As Willis mentioned, the data has a warming trend – which should explain why the case was more valid for January warmer = Year warmer (85%) than for January colder = Year colder (79%). The opposite should appear in a cooling trend.
As a general forecast it still looks like better odds than random, so there might be some other effect; maybe Verity is right:
Verity Jones says:
February 2, 2014 at 5:12 am
I suspect this JLI works because January is the month in which we see how much the Northern Hemisphere has cooled from the previous summer warmth and overall the NH has tended to warm more than the Southern Hemisphere, having therefore a greater effect on the global average anomaly.
walterdnes says:
February 1, 2014 at 6:55 pm
HadCRUT4 0.488
This is the number obtained from WFT, or by adding all the anomalies and dividing by 12. However, the HadCRUT4 site itself gives 0.486, presumably by taking into account such things as February having fewer days. It should make no difference most of the time, but in the case of a very close call it may matter, just as certain ranks for GISS had to be determined by going to the third decimal place. Then there may also be future adjustments, especially for GISS.
Of course we do not know things to 3 decimal places, so you may wish to consider certain ranges to be virtual ties, but that is another matter.
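The day-weighting difference described above can be illustrated with a toy example (hypothetical monthly values, non-leap year):

```python
# Simple vs day-weighted annual mean of monthly anomalies.
DAYS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]  # non-leap year

def simple_mean(monthly):
    """Plain average of the 12 monthly anomalies (the WFT-style number)."""
    return sum(monthly) / 12

def weighted_mean(monthly):
    """Average weighted by the number of days in each month."""
    return sum(d * m for d, m in zip(DAYS, monthly)) / sum(DAYS)

# A warm February pulls the weighted mean slightly below the simple mean,
# because February contributes only 28 of 365 days.
```

The two agree when all months are equal; they diverge by a few thousandths of a degree otherwise, which is exactly the borderline-ranking situation described above.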
DS says:
> February 2, 2014 at 11:51 am
> I’m not sure if this has been mentioned as I didn’t read
> all of the comments yet, but there is likely a much
> easier way to find this trend. That is, solely looking at
> El Nino/La Nina
>
> [IMG]http://i57.tinypic.com/w1to9f.jpg[/IMG]
The more, the merrier. Seriously, if multiple predictive tools are available, let’s use them.
William T Reeves says:
> February 2, 2014 at 12:21 pm
>
> It is my foggy understanding that generally this sort of
> thing is considered an error in statistical reasoning.
> Independent variables are used to build predictive
> models because variables where the predictive variable
> and the outcome are not independent tend to confound
> the analysis.
I’m perfectly happy to accept that the January anomaly is a co-dependent variable with the annual anomaly. I’m not arguing about dependent/independent variables or against auto-correlation. All I’m claiming is that the January anomaly is a good leading indicator of the annual anomaly.
Now I have seen everything there is to know about the climate. A graph with absolutely no time scale at all.
What a great idea, to plot fictional annual anomalies against fictional monthly anomalies, without regard to any time correspondence between them.
Sort of like plotting the number of plant species in the pre-Cambrian, against the number of animal species in the Plasticine age; well of course I meant to say to plot the plant species anomalies in the pre-Cambrian, against the animal species anomalies in the Plasticine. Really informative graph, giving a scatter plot around a straight line axis (inclined). They should be well correlated out to about one million years.
Wunnerful !!