July 2016 Projected Temperature Anomalies from NCEP/NCAR Data

Guest Post By Walter Dnes

Continuing my temperature anomaly projections, here are my July projections, along with last month's projections for June, to see how well they fared.

Data Set  Month    Projected                           Actual   Delta
HadCRUT4  2016/06  +0.777 (based on incomplete data)   +0.737   -0.040
HadCRUT4  2016/07  +0.793
GISS      2016/06  +0.86                               +0.79    -0.07
GISS      2016/07  +0.86
UAHv6     2016/06  +0.302 *                            +0.339   +0.037
UAHv6     2016/07  +0.327
RSS       2016/06  +0.485                              +0.467   -0.018
RSS       2016/07  +0.407
NCEI      2016/06  +0.9390                             +0.8987  -0.0403
NCEI      2016/07  +0.9575

The Data Sources

The latest data can be obtained from the following sources:

Miscellaneous

At time of posting, all 5 monthly data sets were available through June 2016. The NCEP/NCAR re-analysis data runs 2 days behind real-time. Therefore, real data through July 29th is used, and the 30th and 31st are assumed to have the same anomaly as the 29th.

The July projections for HadCRUT4, GISS, NCEI, and UAH are higher than the June projections were, because the global NCEP/NCAR anomaly rose from +0.369 in June to +0.409 in July. The UAH-specific NCEP/NCAR anomaly (83.75°N to 83.75°S) went from +0.369 in June to +0.412 in July. The RSS-specific anomaly (81.25°N to 68.75°S) was down slightly, from +0.436 in June to +0.407 in July. July will be the 12th consecutive month to set a record high monthly value for the global anomaly (90°N to 90°S) for that calendar month. NCEP/NCAR re-analysis data goes back to the beginning of 1948.

* I got lucky with the UAH prediction for June. Due to a wrong entry in my spreadsheet, the projection was +0.302. It should’ve been +0.277, so I caught a break there.

July 31, 2016 10:33 am

From the daily Climate Forecast System Reanalysis preliminary global temperature anomaly estimates reported by the University of Maine Climate Change Institute, I get +0.40 C for July compared to +0.27 C for June, both referenced to 1981-2010. Below are the values for the last several months, in degrees Celsius:
0.72 Feb
0.67 Mar
0.56 Apr
0.42 May
0.27 Jun
0.40 Jul

TonyL
July 31, 2016 10:47 am

My bet for UAH Global TLT
Down 0.17 to an anomaly of 0.17. (Symmetry Counts!)
If I am wrong, you will never hear of it again. If I am right, you will never hear the end of it.

Reply to  TonyL
July 31, 2016 12:02 pm

I’d agree with Walter’s forecasts there. Small rises generally, except for RSS.

DWR54
Reply to  Nick Stokes
July 31, 2016 12:40 pm

Nick (or Walter),
In your opinion, how valid is it to use NCEP/NCAR data to predict lower troposphere temperatures such as those provided by the satellite producers RSS and UAH?
Thanks

Reply to  Nick Stokes
July 31, 2016 1:42 pm

DWR,
Yes, I often emphasise that surface and lower trop are two different places. Actually, NCEP/NCAR and other reanalyses do have results for various altitudes. I think at the moment surface and LT predictions are related through the trajectory of ENSO.

DWR54
Reply to  Nick Stokes
July 31, 2016 3:08 pm

Thanks Nick. I noticed that Walter’s UAH June prediction was off on the low side by about 20%. Assuming he used the same methodology for July, also that the surface/LT relationship you mention continues to hold true, then perhaps a UAH value of closer to 0.4 for July is more likely.

Reply to  Nick Stokes
August 1, 2016 6:02 pm

” perhaps a UAH value of closer to 0.4 for July is more likely”
0.39°C!

DWR54
Reply to  Nick Stokes
August 2, 2016 2:49 am

UAH 0.39!
Wow. All Walter has to do is up his monthly UAH estimate by 20% and he’s there.

Sparky
July 31, 2016 11:19 am

I’ll predict that you’ll never have a lucrative career doing this Walter, making predictions that can be quickly checked.

Editor
July 31, 2016 2:01 pm

Reply to DWR54 July 31, 2016 at 12:40 pm (because WordPress doesn’t allow deeply indented replies)
The coupling obviously isn’t 100%. The land and oceans warm/cool the lower troposphere, but there may be a lag while convection moves heat around. One thing to worry about, especially for RSS, is that the satellite data sets aren’t truly 100% global. Last month, I split my NCEP/NCAR output into 3 versions to try to emulate the satellite coverage…
* the original 100% global version for HadCRUT/GISS/NCEI
* a subset for RSS
* a subset for UAH
Because RSS only covers down to latitude 70 south, it misses warm/cold spots in the interior of Antarctica and at the north pole. UAH does so to a smaller degree.
Using the appropriate subset of the globe gave better agreement with the satellites for June than my first attempt for May. We’ll find out in a few days how well it worked for July.
I try to get as close as I can, but the satellite data sets' coverage borders fall half-way into the 2.5-degree NCEP/NCAR grid cells. So I have to accept either 1.25 degrees of latitude too much coverage, or 1.25 degrees too little.
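The half-cell dilemma described above can be illustrated with a small latitude-band selector. The grid layout and the subset bounds are assumed from the figures quoted in the post (band centres at 2.5-degree spacing, RSS-like subset 81.25°N to 68.75°S, UAH-like subset 83.75°N to 83.75°S); this is a sketch, not Walter's actual code:

```python
import numpy as np

# NCEP/NCAR latitude-band centres on the 2.5-degree grid (assumed layout):
# -88.75, -86.25, ..., +88.75 -- 72 bands in total.
lat_centres = np.arange(-88.75, 90.0, 2.5)

def band_subset(centres, south, north):
    """Keep the bands whose centres lie between south and north inclusive.

    Cell edges sit 1.25 degrees either side of a centre, so including or
    excluding a border band shifts the coverage by 1.25 degrees of latitude
    -- the "too much or too little" trade-off described in the post.
    """
    return [lat for lat in centres if south <= lat <= north]

rss_bands = band_subset(lat_centres, -68.75, 81.25)   # RSS-like subset
uah_bands = band_subset(lat_centres, -83.75, 83.75)   # UAH-like subset
```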

DWR54
Reply to  Walter Dnes
July 31, 2016 3:18 pm

Thank you for the reply Walter. I find your analysis interesting; also, it’s good to see someone be prepared to stick their neck out occasionally!
I get what you’re saying about the latitude issue re coverage; it’s more the altitude issue I was thinking about though. It’s noticeable that sometimes, on a monthly level, the satellite and surface data can diverge on occasion; though the long term trends seem more consistent.
Nick mentioned above that NCEP/NCAR produce lower troposphere data too. Do you use these for your UAH/RSS monthly forecasts, or do you use the surface data?

Editor
Reply to  DWR54
July 31, 2016 3:58 pm

I use the surface data. It was easier for me to simply extract/analyse a subset of my existing data, rather than to download and work on another dataset.

July 31, 2016 2:36 pm

I started plotting the GFS/NCEP 2 m temperature on 16th June. I understand this is behind a paywalled site somewhere, but as the data is free I extracted my own version anyway at http://weatherpad.uk under the GFS GLOBAL menu. I wasn't really sure how I was supposed to calculate the mean, so it's just the 52,000-odd grid points summed and divided by their count. Sounds like an average to me! I'm somewhat heartened that the uptick in July shows up anyway, so I must be more right than wrong. The plot shows the see-saw of the 4 daily runs. I need to do a bit of work on the axis labelling; the point on the left is 16th June, but the labels are a bit swamped. A work in progress, and I am archiving the individual grid points, so I can reapply any calculation other than just the average of them all.

Editor
Reply to  Cheshire Pete
July 31, 2016 3:10 pm

The global mean temp should be around 15 C; 8.63 is way too low. You also said that you took a raw average, which is likely the problem. Most data sets use "grid squares" (actually curved rectangles); 52,000 values implies roughly a 1.25 degree by 1.25 degree grid. You have to weight the grids by area, which means each grid square is weighted by the cosine of the latitude of its centre point. This gives higher weighting to warm tropical areas and less weighting to colder temperate/polar regions, and should bring the global temperature to something appropriate.
To speed up my calculations, I average all the grid squares at a given latitude first, and apply the cosine weighting to the entire latitude band as the last step. This cuts down on the number-crunching and on round-off errors.

Reply to  Cheshire Pete
July 31, 2016 3:11 pm

“Wasn’t really sure how I was supposed to calculate the mean so it’s just the 52,000 odd grid points divided by them!”
You do need latitude weighting – your average of 8.63°C is way too low, because polar values have too much influence. You can go a good way by just multiplying each value by cosine(latitude) and then dividing by the total of the cosines (instead of 52000). Fancier methods here.

Reply to  Nick Stokes
July 31, 2016 10:29 pm

Fascinating, I'll take a look at this. It isn't a problem doing it per grid point, as I process the Kelvin value from the data per grid cell and keep a tally of the average as I loop through, so it's just another calculation to add into my loop. It's actually the 1-degree grid I'm using: 181 x 361 = 65,341 points. I didn't see the point of using the 0.25-degree grid, as it's massive amounts of extra data and I wasn't convinced it would add any value to the calculation.

Reply to  Nick Stokes
August 1, 2016 1:57 am

I've applied the method you suggested now and get the higher figure as a result. Thanks for helping me refine that. I'll do some more calculations on my graph to show monthly means and perhaps an average of the 4 daily runs. I guess there might be a data set for the last-30-year average, so I could calculate an anomaly figure. Perhaps it's on the NCEP site somewhere.

Reply to  Nick Stokes
August 1, 2016 3:38 pm

“I guess there might be a data set for the last 30 year average so I could calculate an anomaly figure.”
I keep a zipfile here of my integrations of NCEP/NCAR reanalysis, daily and monthly, back to 1994. Before that, the data is not so good, and there are missing values, which is harder to handle than a complete grid. So I use 1994-2013 as the base period.
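A per-calendar-month baseline over a base period like the 1994-2013 one mentioned above could be computed along these lines (names hypothetical; the usual scheme keeps one baseline per calendar month so seasonality cancels out of the anomaly):

```python
from collections import defaultdict

def monthly_baselines(records, first_year=1994, last_year=2013):
    """Per-calendar-month means over the base period.

    records: iterable of (year, month, value) tuples.
    Returns a dict mapping month number (1-12) to the base-period mean,
    so anomalies can be taken against the matching calendar month.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for year, month, value in records:
        if first_year <= year <= last_year:
            sums[month] += value
            counts[month] += 1
    return {m: sums[m] / counts[m] for m in sums}

def anomaly(value, month, baselines):
    """Anomaly of an absolute monthly mean against its calendar-month baseline."""
    return value - baselines[month]
```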

charles nelson
July 31, 2016 4:16 pm

NCEI 2016/06 +0.9390 +0.8987 -0.0403
Is there really…a ‘scientist’ out there who really believes that a description of GLOBAL temperature can be made down to the third decimal place?

DWR54
Reply to  charles nelson
July 31, 2016 5:13 pm

Apparently so, because most of the major global temperature data producers, including Roy Spencer at UAH, report averages to that level of precision.
That level of precision is of course the result of the averaging process; it’s not meant to be a reflection of the level of accuracy of any particular measuring device or system.
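The point about precision emerging from the averaging process can be demonstrated with a toy simulation (entirely illustrative, not any data producer's actual pipeline): readings quantised to whole degrees still pin down the mean to a few thousandths, because the error of the mean shrinks roughly as 1/sqrt(N).

```python
import random
import statistics

# A hypothetical thermometer that only reads to the nearest whole degree,
# measuring a true value of 15.37 with 0.5-degree Gaussian noise.
random.seed(42)
true_temp = 15.37
readings = [round(true_temp + random.gauss(0, 0.5)) for _ in range(100_000)]

# Despite 1-degree resolution per reading, the mean of 100,000 readings
# lands within a few hundredths of the true value.
mean = statistics.fmean(readings)
```

Note the caveat DWR54 makes still applies: this is precision of the average, not accuracy of any single instrument, and systematic biases do not average away.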

charles nelson
Reply to  DWR54
July 31, 2016 5:55 pm

So it’s a meaningless artefact then?

Michael Jankowski
Reply to  DWR54
July 31, 2016 6:25 pm

It’s pretty silly. I think it was NOAA who was reporting annual global temps down to 0.01 deg F…then Phil Jones made some adjustments with the Hadley set in the 2000s, so NOAA adjusted their methods and/or coverage to follow suit, and suddenly the NOAA average was 5 degrees higher and no longer comparable to the data from previous years.

Michael Jankowski
Reply to  DWR54
July 31, 2016 6:29 pm

(one reason anomalies compared to a reference period make things so much more palatable)

Reply to  DWR54
July 31, 2016 10:46 pm

“That level of precision is of course the result of the averaging process; it’s not meant to be a reflection of the level of accuracy of any particular measuring device or system.”
Bingo!
there is a little more to it than that– but essentially right

Tom Dayton
Reply to  charles nelson
July 31, 2016 5:35 pm

Look up the Law of Large Numbers.

charles nelson
Reply to  Tom Dayton
July 31, 2016 6:00 pm

Care to demonstrate how the Law of Large Numbers can give me an answer in Millimetres when the smallest interval I can reliably measure with my measuring device is say…one metre?

Reply to  charles nelson
July 31, 2016 10:48 pm

“Is there really…a ‘scientist’ out there who really believes that a description of GLOBAL temperature can be made down to the third decimal place?”
Nope.
That is not what it represents.

Gregory Barton
Reply to  charles nelson
August 1, 2016 3:32 am

If there is no accuracy in the precision it should be rounded up or down to a level where accuracy can be claimed for the period in question. But what degree of accuracy can be credibly claimed? As the length of the trend increases and homogeneity ‘corrections’ are made to estimate measurement error, sampling error, bias in sampling methods and coverage, to name a few, claims of accuracy, and so, precision, should be reduced. Are they?
Brohan et al. acknowledge that,
“a definitive assessment of uncertainties is impossible, because it is always possible that
some unknown error has contaminated the data, and no quantitative allowance can be
made for such unknowns.” http://hadobs.metoffice.gov.uk/crutem3/HadCRUT3_accepted.pdf
A definitive assessment of uncertainties is impossible! They nonetheless proceed to do the impossible: Read through the paper, with calculations combining error, blending datasets and ‘adjusting’ grid box variances, and it is apparent that the whole exercise is one of guesswork. One would have to be credulous to assign any accuracy to the decimal point value.

Bindidon
Reply to  charles nelson
August 1, 2016 1:58 pm

Yes charles: there is one who not only believes it, but has experienced what happens when he forgets the necessary precision during calculations.
In theory you are right: it makes little difference whether you measure 17.7 °C or 17.8 °C somewhere on Earth at some time of day.
But in practice, things are different:
– the temperature data used isn't the data measured, since absolute temperatures mostly are not suitable; instead, deltas ("anomalies") with respect to a reference period (e.g. 1951-1980 or 1981-2010) are computed.
– to obtain a mean temperature within a region, you must take the mean of many measurements (over 1,000 GHCN stations for the USA, over 30,000 weather stations worldwide for Berkeley Earth).
– the data measured isn't necessarily used in one single context; several series of temperature anomalies may have to be compared, which requires adjusting them if their reference periods ("baselines") differ.
Each time you take a mean of something you lose precision. Thus…

KLohrn
July 31, 2016 5:18 pm

Looks like there might be a Pause,
though it will only be the 1st Pause, and credit will be given to recent global "green" efforts…

A C Osborn
August 1, 2016 3:58 am

With some Sea Surface Temp dropping due to the Enso switch why would the overall average anomaly be expected to rise?
Some SSTs are showing massive drops compared to the currently calculated anomalies.

toncul
August 1, 2016 3:59 am

If you use the actual value to get the "projected" one (which I guess you do, or you would be stupid; I don't say you are not), then it's not a "projection" but a "forecast".
It's a detail, but it is a manifestation of strong limitations in your understanding.

toncul
Reply to  toncul
August 1, 2016 4:23 am

Hum, I was too fast on that one. Didn’t read the title…

toncul
August 1, 2016 4:02 am

By the way, I guess most of the anomalies will go up….

Editor
August 1, 2016 7:41 am

UAH preliminary number for July is +0.39… http://www.drroyspencer.com/2016/08/uah-global-temperature-update-for-july-2016-0-39-deg-c/
I assume the final number will be somewhere in the range +0.385 to +0.394. By the way, the ENSO 3.4 temperature drop has come to a screeching halt. Weekly values from http://www.cpc.ncep.noaa.gov/data/indices/wksst8110.for for the week centred on Wednesday…
22JUN2016 -0.4
29JUN2016 -0.4
06JUL2016 -0.4
13JUL2016 -0.6
20JUL2016 -0.6
27JUL2016 -0.5
JAMSTEC has backed off considerably on their ENSO forecast. It now calls for neutral ENSO for the next several months. See… http://www.jamstec.go.jp/frsgc/research/d1/iod/sintex_f1_forecast.html.en and select “El Nino index” from the “Parameter:” drop-down menu.

Bindidon
Reply to  Walter Dnes
August 1, 2016 1:23 pm

Walter Dnes on August 1, 2016 at 7:41 am
Yes, but… they wrote
Since the NCEP GODAS shows an anomalously cold subsurface condition almost all the way along the equator even in July, the SINTEX-F model prediction might be biased by the simple SST data assimilation scheme. We need to be careful about the present prediction.
Nevertheless I think your appreciation is correct.
The MEI
http://www.esrl.noaa.gov/psd/enso/mei/
is probably the most elaborate index concerning ENSO events. A quick look at its monthly plot since 1950
might convince you that in most cases a strong El Niño is followed by a rather weak La Niña. And the inverse seems to hold as well.

Editor
August 3, 2016 9:37 pm

RSS just came in for July. It's up very slightly to +0.469, versus +0.467 in June. For comparison, my projection for July was +0.407.

Editor
August 4, 2016 9:23 am

UAH 3-digit data for July is in. It's +0.389, versus my projection of +0.327.
