January 2017 Projected Temperature Anomalies from NCEP/NCAR Data

Guest Post By Walter Dnes

Continuing my temperature anomaly projections, here are my January projections, along with last month’s projections for December so we can see how well they fared.

Data Set   Month     Projected   Actual     Delta
HadCRUT4   2016/12   +0.648      +0.592     -0.056
HadCRUT4   2017/01   +0.736
GISS       2016/12   +0.83       +0.81      -0.02
GISS       2017/01   +0.95
UAHv6      2016/12   +0.360      +0.243     -0.117
UAHv6      2017/01   +0.462
RSS        2016/12   +0.398      +0.229     -0.169
RSS        2017/01   +0.530
NCEI       2016/12   +0.8028     +0.7895    -0.0133
NCEI       2017/01   +0.9011

The Data Sources

The latest data can be obtained from the following sources

Miscellaneous Notes

At the time of posting, all 5 monthly data sets were available through December 2016. The NCEP/NCAR reanalysis data runs 2 days behind real time. Therefore, real daily data through January 29th is used, and the 30th and 31st are assumed to have the same anomaly as the 29th.

The projections are derived from the previous 12 months of NCEP/NCAR anomalies compared to the same months’ anomalies for each of the 5 data sets. For each of the 5 data sets, the slope() value (“m”) and the intercept() value (“b”) are calculated. Using the current month’s NCEP/NCAR anomaly as “x”, the numbers are plugged into the high-school linear equation “y = mx + b” and “y” is the answer for the specific data set. The entire globe’s data is used for HadCRUT, GISS, and NCEI. For RSS and UAH, subsets of global data are used, to match the latitude coverage provided by the satellites.
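For readers who want to reproduce the arithmetic, here is a rough Python sketch of that calculation. The anomaly values below are made-up placeholders, not the actual series, and the author’s own workflow uses spreadsheet slope()/intercept() functions rather than this code.

    import numpy as np

    # Paired anomalies for the previous 12 months (placeholder values only):
    # x = NCEP/NCAR monthly anomalies, y = the same months' anomalies from
    # the target data set (e.g. HadCRUT4).
    ncep_ncar = np.array([0.45, 0.52, 0.61, 0.48, 0.39, 0.35,
                          0.33, 0.38, 0.42, 0.47, 0.40, 0.44])
    target    = np.array([0.68, 0.73, 0.81, 0.70, 0.62, 0.58,
                          0.55, 0.60, 0.65, 0.69, 0.61, 0.65])

    # Least-squares slope "m" and intercept "b", the equivalent of the
    # spreadsheet slope() and intercept() functions.
    m, b = np.polyfit(ncep_ncar, target, 1)

    # Plug the current month's NCEP/NCAR anomaly in as "x" and evaluate
    # y = mx + b to get the projection for that data set.
    x = 0.58                     # placeholder for January's NCEP/NCAR anomaly
    projection = m * x + b
    print(f"projected anomaly: {projection:+.3f}")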

A sharp spike in daily anomalies in the last 10 days of January has pushed the projected values to their highest level since April 2016.

The graph immediately below is a plot of recent NCEP/NCAR daily anomalies versus the 1994-2013 base, similar to Nick Stokes’ web page. The second graph is a monthly version, going back to 1997. The trendlines are as follows…

  • Black – The longest line with a negative slope in the daily graph goes back to early July 2015, as noted in the graph legend. On the monthly graph, it’s August 2015. This is near the start of the El Nino, and nothing to write home about. Reaching back to 2005 or earlier would be a good start. (A sketch of this slope test appears after this list.)
  • Green – This is the trendline from a local minimum in the slope around late 2004, early 2005. To even BEGIN to work on a “pause back to 2005”, the anomaly has to drop below the green line.
  • Pink – This is the trendline from a local minimum in the slope from mid-2001. Again, the anomaly needs to drop below this line to start working back to a pause to that date.
  • Red – The trendline back to a local minimum in the slope from late 1997. Again, the anomaly needs to drop below this line to start working back to a pause to that date.
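Purely as an illustration, the slope test behind these trendlines can be sketched in Python as follows. The monthly anomaly values are placeholders, not the real NCEP/NCAR series.

    import numpy as np

    def trend_slope(anomalies):
        """Least-squares trend (per month) of a monthly anomaly series."""
        months = np.arange(len(anomalies))
        slope, _intercept = np.polyfit(months, anomalies, 1)
        return slope

    # Placeholder anomalies from a candidate start month through the present.
    series = [0.61, 0.66, 0.72, 0.80, 0.95, 1.10, 0.98, 0.85,
              0.72, 0.60, 0.55, 0.50, 0.48, 0.52, 0.58, 0.62]

    # A "pause back to" a given start month only exists while the fitted
    # slope from that month to the present is not positive.  The black line
    # above is found by scanning start months for the earliest one that
    # still gives a negative slope.
    for start in range(len(series) - 2):
        if trend_slope(series[start:]) < 0:
            print("earliest start month index with a negative trend:", start)
            break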

NCEP/NCAR Daily Anomalies:

[Graph: NCEP/NCAR daily anomalies with trendlines]

NCEP/NCAR Monthly Anomalies:

[Graph: NCEP/NCAR monthly anomalies with trendlines]

Comments

Eric Simpson
January 31, 2017 4:20 pm

It’s going to be getting colder… [comment image]

JCH
January 31, 2017 4:52 pm

Very useful… thanks Walter.

Shelly Marshall
January 31, 2017 4:58 pm

I just don’t understand the point of this–well, more specifically I don’t understand what it is at all. Whose projections? What does it accomplish? What is delta?

Javier
Reply to  Shelly Marshall
January 31, 2017 5:24 pm

Delta is a Greek letter. It is often used in mathematics and physics with the meaning of “increment” (positive or negative) or “variation”. Apparently here it means “difference”, which is a less common use.
These are the projections of the author. They accomplish nothing. It is some sort of hobby, trying to guess what the thermometers are going to say in one or two days.

Dave
Reply to  Javier
January 31, 2017 10:31 pm

In this case, the author uses a simple predictive model based on a temperature data set (a time series) called NCEP/NCAR. The aim of the model is to use recent fluctuations in the NCEP/NCAR data set to predict the monthly temperature anomaly for the latest month (January 2017 in this case) in several other widely used data sets (RSS, UAH, etc.). The reason for doing so is that NCEP/NCAR is available up to 2 days behind real time, so it already provides a good indicator of the temperature trend right up to (nearly) the present day, while the other data sets lag by varying periods (from a few days for UAH and RSS up to almost a month for HadCRUT4). Because the author publishes this prediction once per month, the observed values for the previous month (in this case December 2016) are now available. Hence, in addition to predicting the January 2017 anomalies for the other data sets, he provides the previous month’s predictions, the observed data for the previous month, and the delta values, i.e. the residual differences between the predicted and observed December 2016 data. The term “Delta” is used because in many fields of science the Greek symbol delta (or the word written longhand) is very widely used to describe the *residuals* of a model fit to data, i.e. the differences between observed and modelled data points.

MarkW
Reply to  Javier
February 1, 2017 10:34 am

In engineering, whenever someone asks for the delta, they are asking for the difference.

Shelly Marshall
Reply to  Javier
February 1, 2017 4:51 pm

@ Peter, Javier, MarkW, Dave, and Adrian Roman. Thank you. “Hobby” and “Numerology”–now I get it! Walter–notwithstanding the fact that I could not understand the point–I do appreciate each and every one of you that attempts to address the issues of climate change. Heaven knows I am at a loss and all your contributions greatly increase my understanding (most of the time). So thank you all–especially when I get lost in the alphas and deltas, and conundrums of a warming planet.

Peter Morgenroth
Reply to  Shelly Marshall
January 31, 2017 6:06 pm

I also fail to understand the point. To the best of my knowledge it is not possible to measure temperature to 1/1000th of a degree outside of a laboratory. Measurement to 1/100th of a degree is at best an educated guess (platinum resistance thermometers measure accurately to 1/10th of a degree; 1/100th of a degree is uncertain). Why bother reporting accuracy that is entirely unsupportable? And if climate change (= global warming) amounts to 1000ths of a degree, what is all the fuss about? All done with smoke and mirrors?

Adrian Roman
Reply to  Peter Morgenroth
January 31, 2017 11:08 pm

It’s not a measurement, in the physical sense. It’s numerology.
It’s also not a temperature, despite the ‘units of measure’ attached to it.
Temperature is intensive and is defined in thermodynamics only for systems in thermodynamic equilibrium. It can be extended to dynamical equilibrium, but one must be very aware of the circumstances (you can get, for example, negative temperatures); once you get to a non-equilibrium system such as the Earth and talk about a ‘temperature’ for the whole system, you are being pseudo-scientific. It is not a parameter of the system and not a physical measurement; it is a pseudo-scientific value that cannot be measured and is obtained by magical numerology. It is easily provable that physical calculations done with such a value come out wrong. For some systems, very wrong: so wrong that you get cooling for physical warming and warming for physical cooling.

DWR54
Reply to  Peter Morgenroth
February 1, 2017 11:12 am

It’s not a measurement. In the case of the surface data, each monthly value is the average of thousands of thermometer measurements expressed as an anomaly (difference from the long term average).
Generate 30 random whole numbers between 1 and 10 on a spreadsheet. Take their average and deduct it from each whole number: this gives you an anomaly value for each number. Now average a subset of those anomaly values (say the first 15). You will find that you have a number that extends to many decimal places, even though every input was a whole number.
The temperature producers are doing roughly the same thing with thousands of global surface temperature records. That’s why the final value is expressed with such high precision.
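In Python, that spreadsheet exercise looks roughly like this (random integers stand in for thermometer readings; purely illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    # 30 "readings": random whole numbers from 1 to 10, standing in for
    # thermometer measurements reported to whole degrees.
    readings = rng.integers(1, 11, size=30)

    # Anomalies: each reading minus the average of all 30 readings.
    baseline = readings.mean()
    anomalies = readings - baseline

    # Average the two halves separately: both averages (and their difference)
    # carry many decimal places even though every input was a whole number.
    first_half = anomalies[:15].mean()
    second_half = anomalies[15:].mean()
    print(baseline, first_half, second_half, first_half - second_half)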

Nicholas Schroeder
January 31, 2017 6:06 pm

Anomaly scale: 0.001 degree. Who are they kidding?

Walter Dnes (Editor)
Reply to  Nicholas Schroeder
January 31, 2017 6:19 pm

Nicholas and Peter: I’m merely trying to match the precision of the 5 data sets. And yes, I remember the high-school lecture about the difference between “precision” and “accuracy”.

SocietalNorm
January 31, 2017 6:22 pm

Two thoughts:
1. The NCEP/NCAR plot tracks very well with Dr. Spencer’s Lower Atmosphere Temperature plot for the same time period – except the NCEP/NCAR does not have a large spike corresponding to the 1998 El Nino. (RSS’s spike is lower, but still significant). Does anyone know of any reason for this?
2. The Torino Cobra was a really cool car.
(Eric Simpson’s post)

Dave
Reply to  SocietalNorm
January 31, 2017 10:49 pm

The NCEP/NCAR does have a large spike corresponding to the 1998 El Nino: In the above graph it is visible rising well above the longer term (red) trend line. If the data and trend line were extended back to earlier periods, the spike would be even more obvious.

Auto
Reply to  SocietalNorm
February 1, 2017 1:30 pm

Societal/Norm
The AC Cobra was wa-a-a-ay cooler.
It used our M1, in the 1960s, as a public road testing ground.
A guy I know owns one – displays it for charity.
Auto – never really a motor-head . . . . .

TonyL
January 31, 2017 6:40 pm

@ Walter Dnes
I have some interest in how you calculate your projected values.
I do not understand your explanation. You say:

Using the current month’s NCEP/NCAR anomaly as “x”, the numbers are plugged into the high-school linear equation “y = mx + b” and “y” is the answer for the specific data set.

But “x” is the month, not an anomaly value. As stated it does not seem to make sense.
Could you explain in more detail, how these values are computed?
Thanks.

Nick Stokes
Reply to  TonyL
January 31, 2017 8:32 pm

Walter has a more detailed explanation in the first post in his series here.

Walter Dnes (Editor)
Reply to  TonyL
January 31, 2017 9:14 pm

“x” is the NCEP/NCAR monthly temperature anomaly for the month as shown on Nick Stokes’ web page at https://moyhu.blogspot.ca/p/latest-ice-and-temperature-data.html#NCAR
He goes into detail in his post https://moyhu.blogspot.ca/2014/11/a-new-surface-temperature-index.html about the data set and his calculations…

Then I tried sig995. That’s a reference to the pressure level (I think), but it’s also labelled surface. It goes back to 1948, and seems to be generally more recent. So that is the one I’m describing here.
Both sets are on a 2.5° grid (144×73) and offer daily averages. Of course, for the whole globe at daily resolution, it’s not that easy to define which day you mean. There will be a cut somewhere. Anyway, I’m just following their definition. sig995 has switched to NETCDF4; I use the R package ncdf4 to unpack. I integrate with cosine weighting. It’s not simple cosine; the nodes are not the centers of the grid cells. In effect, I use cos latitude with trapezoidal integration.

The data is in annual files at ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/surface/ and the data files are named like
air.sig995.nnnn.nc, where “nnnn” is the calendar year.
I use the Gentoo linux packages “sci-libs/netcdf” and “sci-misc/nco” to convert from the binary packed NETCDF format to plaintext flat files. From there I use bash scripts (YES!) to do the number crunching. The first computer language I learned was FORTRAN, and I go at the problem in that style. When I first wrote the scripts, I used a spreadsheet to calculate the cosine() of the latitude bands to several decimal places, and I import those numbers as constants.
There are 73 latitude bands going from 90 North to 90 South. I calculate the average of the (up to) 144 gridpoints at each latitude. That value is multiplied by a cosine weighting for that latitude band. The 73 weighted bands are summed to calculate the global temperature (in degrees K) for the day. Rinse/lather/repeat for all days in the current year’s data file. From there, the daily temperatures can be averaged by month.
Nick Stokes uses 20-year means from 1994 to 2013 to calculate daily and monthly anomalies against, because that stretch has very complete data.
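Purely as an illustration, that weighting step might look roughly like this in Python. Random numbers stand in for the real sig995 grid, and plain cos-latitude weights are used instead of the exact trapezoidal node weighting described above.

    import numpy as np

    rng = np.random.default_rng(0)

    # 2.5-degree grid: 73 latitude bands (90N to 90S) x 144 longitudes.
    lats = np.linspace(90.0, -90.0, 73)
    field = 240.0 + 60.0 * rng.random((73, 144))   # stand-in daily temps (K)

    # Average the (up to) 144 gridpoints in each latitude band.
    band_means = field.mean(axis=1)

    # Weight each band by cos(latitude) and normalise.  This is simple
    # cos-latitude weighting; the trapezoidal scheme described above differs
    # only in how the node weights are computed.
    weights = np.cos(np.radians(lats))
    global_mean = np.sum(band_means * weights) / np.sum(weights)
    print(f"global mean for the day: {global_mean:.3f} K")

    # For an apples-to-apples satellite comparison (UAH/RSS), the polar bands
    # the satellites do not cover would be masked out before weighting,
    # e.g. keeping only |lat| <= 82.5 (the exact bound depends on the data set).
    mask = np.abs(lats) <= 82.5
    sat_mean = np.sum(band_means[mask] * weights[mask]) / np.sum(weights[mask])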

Walter Dnes (Editor)
Reply to  TonyL
January 31, 2017 9:21 pm

One more detail: for the satellite data sets (UAH and RSS), I skip the corresponding latitude bands at the poles, which the satellites do not cover. This is required for an “apples-to-apples” comparison of NCEP/NCAR and the satellite data. The global (i.e. entire planet) NCEP/NCAR data is used for comparison against the HadCRUT, GISS, and NCEI data.

TonyL
Reply to  Walter Dnes
February 1, 2017 4:42 am

@ Nick Stokes
@ Walter Dnes
Thank you both for your replies.
Much appreciated.

Michael Carter
January 31, 2017 9:21 pm

[comment image]

Michael Carter
Reply to  Michael Carter
January 31, 2017 9:22 pm

Mods – please delete this

Michael Carter
January 31, 2017 10:59 pm

I have just received the updated (to 2016 incl.) New Zealand temperature record from NIWA along with a spreadsheet. These will be published on their web site shortly. I was then able to send the following letter to the Government funded Radio New Zealand
Dear RNZ
Would you please report climate news in a more balanced and quantitative fashion. Science is about numbers, not about qualitative statements that can be misleading.
Temperature data for 2016 is about to come out from NIWA. 2016 shows as the highest year recorded, above average by almost 1 degree C. However, if we look deeper, there are some very important aspects to this. Firstly, the ‘average’ relates to the period 1981-2010.
2016 is recorded as being 0.83 C above the average, whilst 1998 was 0.80 C above the same average. These in fact have to be ranked the same (the difference being “statistically insignificant”), as the margin of error in the calculations is 0.26 C.
If we look at the trend between 1998 and 2016 it is virtually flat, i.e. there has been essentially no warming for nearly 20 years.
It is also important to consider that both 1998 and 2016 were influenced by an El Nino event. These both show as an upward spike in temperature. These are usually short-lived and temporary.
If you don’t have a reporter well trained in science please get a professional statistician who can explain this in a balanced, rational fashion.
Climate change policy is about to cost this country a lot of money. The public deserve to be given all the facts. They are not stupid. They will understand the numbers.
Best regards
Michael Carter

Simon
Reply to  Michael Carter
February 1, 2017 9:49 am

Michael
Do a decadal average. You will see the warming is clear.

Richard Barraclough
February 1, 2017 8:00 am

The actual anomaly for one of these datasets (UAH V.6) is hot on your heels, and is 0.3 degrees, against your own estimate of 0.462, so I guess your modelling still needs a bit of refinement.

February 2, 2017 2:03 am

Walter, thanks for your very informative article. Could you please contact me at cosserat@gmail.com for further discussion? Regards, David Cosserat