Guest Post By Walter Dnes
Continuing my temperature anomaly projections, here are my July projections, along with last month's projections for June, so we can see how well they fared.
Data Set | Projected | Actual | Delta |
---|---|---|---|
HadCRUT4 2016/06 | +0.777 (based on incomplete data) | +0.737 | -0.040 |
HadCRUT4 2016/07 | +0.793 | | |
GISS 2016/06 | +0.86 | +0.79 | -0.07 |
GISS 2016/07 | +0.86 | | |
UAHv6 2016/06 | +0.302 * | +0.339 | +0.037 |
UAHv6 2016/07 | +0.327 | | |
RSS 2016/06 | +0.485 | +0.467 | -0.018 |
RSS 2016/07 | +0.407 | | |
NCEI 2016/06 | +0.9390 | +0.8987 | -0.0403 |
NCEI 2016/07 | +0.9575 | | |
The Data Sources
The latest data can be obtained from the following sources:
- HadCRUT4 http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.4.0.0.monthly_ns_avg.txt
- GISS http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt
- UAH http://vortex.nsstc.uah.edu/data/msu/v6.0beta/tlt/tltglhmam_6.0beta5.txt
- RSS ftp://ftp.ssmi.com/msu/monthly_time_series/rss_monthly_msu_amsu_channel_tlt_anomalies_land_and_ocean_v03_3.txt
- NCEI https://www.ncdc.noaa.gov/cag/time-series/global/globe/land_ocean/p12/12/1880-2016.csv
Miscellaneous
At the time of posting, all five monthly data sets were available through June 2016. The NCEP/NCAR re-analysis data runs 2 days behind real time, so real data through July 29th is used, and the 30th and 31st are assumed to have the same anomaly as the 29th.
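As a rough sketch of that fill-forward step (my own illustration; the function and the sample numbers are hypothetical, not Walter's actual spreadsheet logic), the month estimate is simply the mean of the daily anomalies after padding the missing final days with the last available value:

```python
import numpy as np

def month_mean_with_fill(daily_anoms, days_in_month):
    """Estimate a monthly mean anomaly when the last few days are missing.

    daily_anoms:   1-D array of the daily anomalies available so far
    days_in_month: total number of days in the month (e.g. 31 for July)

    Missing trailing days are assumed equal to the last available day,
    mirroring the 'July 30th and 31st = July 29th' assumption above.
    """
    daily_anoms = np.asarray(daily_anoms, dtype=float)
    n_missing = days_in_month - daily_anoms.size
    padded = np.concatenate([daily_anoms, np.full(n_missing, daily_anoms[-1])])
    return padded.mean()

# Example with made-up numbers: 29 days of data for a 31-day month.
july_so_far = np.linspace(0.35, 0.45, 29)   # hypothetical daily anomalies
print(round(month_mean_with_fill(july_so_far, 31), 3))
```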
The July projections for HadCRUT4, GISS, NCEI, and UAH are higher than the June projections were. This is because the global NCEP/NCAR anomaly rose from +0.369 in June to +0.409 in July. The UAH-specific NCEP/NCAR anomaly (83.75°N to 83.75°S) went from +0.369 in June to +0.412 in July. The RSS-specific anomaly (81.25°N to 68.75°S) was down slightly, from +0.436 in June to +0.407 in July. July will be the 12th consecutive month to set a record high global anomaly (90°N to 90°S) for that calendar month. NCEP/NCAR re-analysis data goes back to the beginning of 1948.
* I got lucky with the UAH prediction for June. Due to a wrong entry in my spreadsheet, the projection was +0.302. It should’ve been +0.277, so I caught a break there.
From the daily Climate Forecast System Reanalysis preliminary global temperature anomaly estimates reported by the University of Maine Climate Change Institute, I get +0.40 °C for July compared to +0.27 °C for June, both referenced to 1981-2010. Below are the last several months, in degrees Celsius, referenced to 1981-2010.
0.72 Feb
0.67 Mar
0.56 Apr
0.42 May
0.27 Jun
0.40 Jul
My bet for UAH Global TLT
Down 0.17 to an anomaly of 0.17. (Symmetry Counts!)
If I am wrong, you will never hear of it again. If I am right, you will never hear the end of it.
I’d agree with Walter’s forecasts there. Small rises generally, except for RSS.
Nick (or Walter),
In your opinion, how valid is it to use NCEP/NCAR data to predict lower troposphere temperatures such as those provided by the satellite producers RSS and UAH?
Thanks
DWR,
Yes, I often emphasise that surface and lower trop are two different places. Actually, NCEP/NCAR and other reanalyses do have results for various altitudes. I think at the moment surface and LT predictions are related through the trajectory of ENSO.
Thanks Nick. I noticed that Walter’s UAH June prediction was off on the low side by about 20%. Assuming he used the same methodology for July, also that the surface/LT relationship you mention continues to hold true, then perhaps a UAH value of closer to 0.4 for July is more likely.
” perhaps a UAH value of closer to 0.4 for July is more likely”
0.39°C!
UAH 0.39!
Wow. All Walter has to do is up his monthly UAH estimate by 20% and he’s there.
I’ll predict that you’ll never have a lucrative career doing this Walter, making predictions that can be quickly checked.
Reply to DWR54 July 31, 2016 at 12:40 pm (because WordPress doesn’t allow deeply indented replies)
The coupling obviously isn’t 100%. The land and oceans warm/cool the lower troposphere, but there may be a lag while convection moves heat around. One thing to worry about, especially for RSS, is that the satellite data sets aren’t truly 100% global. Last month, I split my NCEP/NCAR output into 3 versions to try to emulate the satellite coverage…
* the original 100% global version for HadCRUT/GISS/NCEI
* a subset for RSS
* a subset for UAH
Because RSS only covers down to latitude 70 south, it misses warm/cold spots in the interior of Antarctica and at the north pole. UAH does so to a smaller degree.
Using the appropriate subset of the globe gave better agreement with the satellites for June than my first attempt for May. We’ll find out in a few days how well it worked for July.
I try to get as close as I can, but the satellite data sets' coverage borders fall half-way into the 2.5-degree NCEP/NCAR grid cells, so I have to accept either 1.25 degrees of latitude too much coverage or 1.25 degrees too little.
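To make that half-cell problem concrete, here is a minimal sketch (my own illustration; the band-centre layout and the 82.5°N northern limit for an RSS-like window are assumptions, while the 70°S cutoff and the 2.5° spacing come from the discussion above): pick the latitude bands whose centres fall inside the satellite window, and the effective edge necessarily lands 1.25° inside or outside the true boundary.

```python
import numpy as np

# Assumed layout: 2.5-degree latitude bands represented by their centres,
# from 88.75 N down to 88.75 S (the real NCEP/NCAR grid registration may
# differ, but the half-cell problem is the same either way).
lat_centres = np.arange(88.75, -90.0, -2.5)

def coverage_mask(lat_centres, north_limit, south_limit):
    """Boolean mask of the bands whose centres lie inside a satellite's
    coverage window.  Since band edges fall every 2.5 degrees, the chosen
    edge is always 1.25 degrees inside or outside the nominal boundary."""
    return (lat_centres <= north_limit) & (lat_centres >= south_limit)

# RSS-like window: stops at 70 S (the 82.5 N northern limit is assumed).
rss_mask = coverage_mask(lat_centres, north_limit=82.5, south_limit=-70.0)
print(lat_centres[rss_mask][0], lat_centres[rss_mask][-1])
# prints 81.25 and -68.75, i.e. the 81.25 N to 68.75 S band quoted in the post
```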
Thank you for the reply Walter. I find your analysis interesting; also, it’s good to see someone be prepared to stick their neck out occasionally!
I get what you're saying about the latitude issue re coverage; it's more the altitude issue I was thinking about, though. It's noticeable that, at the monthly level, the satellite and surface data can diverge on occasion, though the long-term trends seem more consistent.
Nick mentioned above that NCEP/NCAR produce lower troposphere data too. Do you use these for your UAH/RSS monthly forecasts, or do you use the surface data?
I use the surface data. It was easier for me to simply extract/analyse a subset of my existing data, rather than to download and work on another dataset.
I started plotting the GFS/NCEP 2m temperature on 16th June. I understand this is behind a pay-walled site somewhere, but as the data is free I extracted my own version anyhow @ http://weatherpad.uk under menu GFS GLOBAL. Wasn't really sure how I was supposed to calculate the mean so it's just the 52,000 odd grid points divided by them! Sounds like an average to me! I'm somewhat heartened that the uptick in July shows up anyhow, so I must be more correct than wrong. The plot shows the see-saw of the 4 daily runs. Need to do a bit of work on the axis labelling; the point on the left is 16th June but the labels are a bit swamped out! A work in progress, and I am archiving the individual grid points so I can reapply any calculation other than just the average of them all!
The global mean temp should be around 15 C. 8.63 is way too low. You also said that you took a raw average, which is likely the problem. Most data sets use “grid squares” (actually curved rectangles). 52,000 values implies a 1.25 degree by 1.25 degree grid. You have to weight the grids by area. That means that each grid square has to be weighted by COSINE() of latitude of that grid square’s centre point. This will give higher weighting to warm tropical areas, and less weighting to colder temperate/polar regions, and should bring the global temperature to something appropriate.
To speed things up with my calculations, I do all the grid squares at a given latitude, average them up, and do the cosine weighting for the entire latitude band as the last step. This cuts down on the number-crunching and round-off errors.
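For what it's worth, here is a minimal sketch of that weighting (my own illustration, assuming a regular lat/lon grid held as a 2-D NumPy array; the toy temperature field at the end is made up). It follows the per-latitude-band shortcut described above, and is mathematically equivalent to weighting every individual point by cos(latitude) and dividing by the sum of the cosines.

```python
import numpy as np

def area_weighted_mean(temps, lat_centres):
    """Cosine-of-latitude weighted global mean for a regular lat/lon grid.

    temps:       2-D array of shape (n_lat, n_lon), e.g. 2 m temperatures in K
    lat_centres: 1-D array of latitude-band centres in degrees

    Per the shortcut above: average each latitude band (row) first, then
    weight the band means by cos(latitude), which is proportional to the
    area of that band.
    """
    zonal_means = temps.mean(axis=1)               # mean over longitudes
    weights = np.cos(np.deg2rad(lat_centres))      # band area ~ cos(latitude)
    return float(np.sum(zonal_means * weights) / np.sum(weights))

# Toy example: a made-up field that is warm in the tropics, cold at the poles.
lats = np.arange(88.75, -90.0, -2.5)               # 72 band centres
lons = np.arange(0.0, 360.0, 2.5)                  # 144 longitudes
band_temps = 250.0 + 50.0 * np.cos(np.deg2rad(lats))
temps = np.repeat(band_temps[:, None], lons.size, axis=1)

print(round(temps.mean() - 273.15, 2))                    # ~8.7 C: unweighted mean, dragged down by the poles
print(round(area_weighted_mean(temps, lats) - 273.15, 2)) # ~16.1 C: area weighting gives a much more Earth-like figure
```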
“Wasn’t really sure how I was supposed to calculate the mean so it’s just the 52,000 odd grid points divided by them!”
You do need latitude weighting – your average of 8.63°C is way too low, because polar values have too much influence. You can go a good way by just multiplying each value by cosine(latitude) and then dividing by the total of the cosines (instead of 52000). Fancier methods here.
Fascinating, I'll take a look at this. It isn't a problem doing it per grid point, as I process the Kelvin value from the data for each grid point and keep a running tally of the average as I loop through, so it's just another calculation to add into my loop. It's actually the 1-degree grid I'm using: 181 x 361 = 65,341 points. I didn't see the point of using the 0.25-degree grid, as it's a massive amount of extra data and I wasn't convinced it would add any value to the calculation.
I've applied the method you suggested now and get the higher figure as a result. Thanks for helping me refine that. I'll do some more calculations on my graph to show monthly means and perhaps an average of the 4 daily runs. I guess there might be a data set for the last 30 year average so I could calculate an anomaly figure. Perhaps it's on the NCEP site somewhere.
“I guess there might be a data set for the last 30 year average so I could calculate an anomaly figure.”
I keep a zipfile here of my integrations of NCEP/NCAR reanalysis, daily and monthly, back to 1994. Before that, the data is not so good, and there are missing values, which is harder to handle than a complete grid. So I use 1994-2013 as the base period.
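For anyone wanting to roll their own anomaly series from such a file, here is a minimal sketch (my own illustration; the arrays and function name are hypothetical, not the format of Nick's zipfile): build a per-calendar-month climatology over the base period and subtract it.

```python
import numpy as np

def monthly_anomalies(years, months, values, base_start=1994, base_end=2013):
    """Convert absolute monthly means into anomalies.

    years, months, values: equal-length 1-D arrays describing the series
    base_start..base_end:  the base (reference) period, inclusive

    For each calendar month, subtract that month's mean over the base period.
    """
    years, months, values = map(np.asarray, (years, months, values))
    anoms = np.empty_like(values, dtype=float)
    for m in range(1, 13):
        in_base = (months == m) & (years >= base_start) & (years <= base_end)
        climo = values[in_base].mean()          # base-period mean for month m
        anoms[months == m] = values[months == m] - climo
    return anoms
```

Using 1994-2013 for base_start/base_end matches the base period Nick mentions; switching to, say, 1981-2010 only changes the offset of the series, not its shape.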
NCEI 2016/06 +0.9390 +0.8987 -0.0403
Is there really…a ‘scientist’ out there who really believes that a description of GLOBAL temperature can be made down to the third decimal place?
Apparently so, because most of the major global temperature data producers, including Roy Spencer at UAH, report averages to that level of precision.
That level of precision is of course the result of the averaging process; it’s not meant to be a reflection of the level of accuracy of any particular measuring device or system.
So it’s a meaningless artefact then?
It’s pretty silly. I think it was NOAA who was reporting annual global temps down to 0.01 deg F…then Phil Jones made some adjustments with the Hadley set in the 2000s, so NOAA adjusted their methods and/or coverage to follow suit, and suddenly the NOAA average was 5 degrees higher and no longer comparable to the data from previous years.
(one reason anomalies compared to a reference period make things so much more palatable)
“That level of precision is of course the result of the averaging process; it’s not meant to be a reflection of the level of accuracy of any particular measuring device or system.”
Bingo!
there is a little more to it than that– but essentially right
Look up the Law of Large Numbers.
Care to demonstrate how the Law of Large Numbers can give me an answer in Millimetres when the smallest interval I can reliably measure with my measuring device is say…one metre?
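For what it's worth, here is a minimal simulation of the statistical point being argued over (entirely my own illustration, with made-up numbers): if each reading carries random scatter at least comparable to the instrument's resolution, the average of many readings can pin down the mean far more tightly than any single reading, since the standard error falls roughly as 1/sqrt(N). If the readings had no scatter, or the errors were systematic, averaging would not buy that extra precision.

```python
import numpy as np

rng = np.random.default_rng(0)

true_length = 123.456      # metres; the quantity being measured (made up)
noise_sd = 0.7             # random scatter per reading, in metres (assumed)
resolution = 1.0           # the instrument only reports whole metres

n = 100_000
raw = true_length + rng.normal(0.0, noise_sd, n)
readings = np.round(raw / resolution) * resolution   # each reading quantised to 1 m

print(readings[:5])                 # whole-metre readings clustered around 123
print(round(readings.mean(), 3))    # the mean typically lands within a few mm of 123.456
```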
“Is there really…a ‘scientist’ out there who really believes that a description of GLOBAL temperature can be made down to the third decimal place?”
Nope.
That is not what it represents.
If there is no accuracy in the precision it should be rounded up or down to a level where accuracy can be claimed for the period in question. But what degree of accuracy can be credibly claimed? As the length of the trend increases and homogeneity ‘corrections’ are made to estimate measurement error, sampling error, bias in sampling methods and coverage, to name a few, claims of accuracy, and so, precision, should be reduced. Are they?
Brohan et al. acknowledge that

"a definitive assessment of uncertainties is impossible, because it is always possible that some unknown error has contaminated the data, and no quantitative allowance can be made for such unknowns." http://hadobs.metoffice.gov.uk/crutem3/HadCRUT3_accepted.pdf
A definitive assessment of uncertainties is impossible! They nonetheless proceed to do the impossible: read through the paper, with its calculations combining errors, blending datasets and 'adjusting' grid-box variances, and it is apparent that the whole exercise is one of guesswork. One would have to be credulous to assign any accuracy to the decimal-point value.
Yes, Charles: there is at least one who not only believes it but has experienced what happens when the necessary precision is forgotten during calculations.
In theory you are right: it makes little difference whether you measure 17.7 °C or 17.8 °C somewhere on Earth at some time of day.
But in practice, things are different:
– the temperature data used isn't the data as measured, since absolute temperatures are mostly not suitable; instead, deltas ("anomalies") with respect to a reference period (e.g. 1951-1980 or 1981-2010) are computed;
– to obtain a mean temperature within a region, you must compute the mean of many measurements (over 1,000 GHCN stations for the USA, over 30,000 weather stations worldwide for Berkeley Earth);
– the measured data isn't necessarily used in one single context; several series of temperature anomalies may have to be compared, which requires them to be adjusted if their reference periods ("baselines") differ.
Each time you compute a mean of something, you lose precision. Thus…
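To make the baseline point above concrete, here is a minimal sketch (made-up numbers, not anyone's actual processing): two anomaly series on different reference periods can only be compared after each has been shifted by its own mean over a common base period.

```python
import numpy as np

def rebaseline(years, anoms, base_start, base_end):
    """Shift an anomaly series onto a new reference period by subtracting
    its mean over that period (base_start..base_end, inclusive)."""
    years, anoms = np.asarray(years), np.asarray(anoms, dtype=float)
    in_base = (years >= base_start) & (years <= base_end)
    return anoms - anoms[in_base].mean()

# Two hypothetical series: same underlying trend, different baselines.
years = np.arange(1981, 2016)
series_a = 0.02 * (years - 1981) + 0.30   # made-up numbers, e.g. 1951-1980 baseline
series_b = 0.02 * (years - 1981) - 0.05   # made-up numbers, e.g. 1981-2010 baseline

# Put both on a common 1981-2010 reference so they can be compared directly.
a_common = rebaseline(years, series_a, 1981, 2010)
b_common = rebaseline(years, series_b, 1981, 2010)
print(np.allclose(a_common, b_common))    # True: the baseline offset is removed
```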
Looks like there might be a Pause, though it will only be the 1st Pause, and credit will be given to recent global "green" efforts…
With some sea surface temperatures dropping due to the ENSO switch, why would the overall average anomaly be expected to rise?
Some SSTs are showing massive drops compared to the currently calculated anomalies.
If you use the actual value to get the "projected" one, which I guess you do or you would be stupid (I don't say you are not), then it's not a "projection" but a "forecast".
It's a detail, but it is a manifestation of strong limitations in your understanding.
Hum, I was too fast on that one. Didn’t read the title…
By the way, I guess most of the anomalies will go up….
UAH preliminary number for July is +0.39… http://www.drroyspencer.com/2016/08/uah-global-temperature-update-for-july-2016-0-39-deg-c/
I assume the final number will be somewhere in the range +0.385 to +0.394. By the way, the ENSO 3.4 temperature drop has come to a screeching halt. Weekly values from http://www.cpc.ncep.noaa.gov/data/indices/wksst8110.for for the week centred on Wednesday…
22JUN2016 -0.4
29JUN2016 -0.4
06JUL2016 -0.4
13JUL2016 -0.6
20JUL2016 -0.6
27JUL2016 -0.5
JAMSTEC has backed off considerably on their ENSO forecast. It now calls for neutral ENSO for the next several months. See… http://www.jamstec.go.jp/frsgc/research/d1/iod/sintex_f1_forecast.html.en and select “El Nino index” from the “Parameter:” drop-down menu.
Walter Dnes on August 1, 2016 at 7:41 am
Yes, but… they wrote
Since the NCEP GODAS shows an anomalously cold subsurface condition almost all the way along the equator even in July, the SINTEX-F model prediction might be biased by the simple SST data assimilation scheme. We need to be careful about the present prediction.
Nevertheless, I think your assessment is correct.
The MEI
http://www.esrl.noaa.gov/psd/enso/mei/
is probably the most elaborate index of ENSO events. A quick look at its monthly plot since 1950 might convince you that in most cases a strong El Niño is followed by a rather weak La Niña, and the inverse seems to hold as well.
RSS just came in for July. It's up very slightly to +0.469, versus +0.467 in June. For comparison, my projection for July was +0.407.
UAH 3-digit data for July is in. It's +0.389, versus my projection of +0.327.