July 2017 Projected Temperature Anomalies from NCEP/NCAR Data

Guest Post By Walter Dnes

Continuing my temperature anomaly projections, here are my July projections, along with last month's projections for June so we can see how they fared. Note that I've switched to a different NCEP/NCAR reanalysis dataset as of the July 2017 projections. More details below.

Data Set   Month     Projected   Actual   Delta
HadCRUT4   2017/06   +0.583      +0.641   +0.058
HadCRUT4   2017/07   +0.680
GISS       2017/06   +0.81       +0.69    -0.12
GISS       2017/07   +0.77
UAHv6      2017/06   +0.384      +0.208   -0.176
UAHv6      2017/07   +0.253
RSS v3.3   2017/06   +0.486      +0.344   -0.142
RSS v3.3   2017/07   +0.354
RSS v4.0   2017/06   +0.539      +0.389   -0.150
RSS v4.0   2017/07   +0.446
NCEI       2017/06   +0.76       +0.82    +0.06
NCEI       2017/07   +0.85

The Data Sources

The latest data can be obtained from the following sources

Switching to a different NCEP reanalysis data set

Up until now, I've been using air.sig995.YYYY.nc data files from the ftp directory

ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/surface

where YYYY is the year the data represents. As of this month I'm switching to air.YYYY.nc files from the ftp directory:

ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/pressure/

(Citation: Kalnay et al., The NCEP/NCAR 40-Year Reanalysis Project, Bull. Amer. Meteor. Soc., 77, 437-470, 1996.)

As its name suggests, the sig995.YYYY.nc data is valid at the 995 mb level, which is a good proxy for surface temperatures. Unfortunately, it has not worked well as a proxy for the satellite data sets. The air.YYYY.nc data has 17 pressure levels: 1000, 925, 850, 700, 600, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30, 20, and 10 millibars. A bit of experimentation indicates a very good correlation between the satellite data sets and the 700 mb pressure-level data, when the appropriate global subset corresponding to the satellites' coverage is taken.

The 700 millibar data will be used for the satellite projections until/unless something better comes along. To reduce the number of files to download, the 1000 millibar level data from the same air.YYYY.nc files will be used as a proxy for surface temperatures. Thus, my surface data will no longer be identical to that on Nick Stokes' web page, but it will probably still track it closely. As with the 995 millibar data, GISS has a good correlation (0.836) with the 1000 millibar data, but HadCRUT and NCEI are both below 0.45.
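
For anyone who wants to reproduce this kind of check, the extraction itself is straightforward. Below is a minimal Python/xarray sketch (illustrative only, not the exact script behind these projections) of pulling one pressure level out of an air.YYYY.nc file and reducing it to a monthly, area-weighted mean over the satellite latitude band; the variable and coordinate names ("air", "level", "lat", "lon", "time") are the usual ones in these files, but should be verified against your own download.

# Sketch only: extract the 700 mb level from an NCEP/NCAR daily-average file
# and build an area-weighted monthly mean series restricted (approximately)
# to the latitudes the satellite products cover.
import numpy as np
import xarray as xr

ds = xr.open_dataset("air.2017.nc")            # 17 pressure levels, daily means
air700 = ds["air"].sel(level=700)              # 700 mb temperatures (K)

# NCEP latitudes run from +90 down to -90, so the slice is north-to-south.
# Roughly 82.5N to 82.5S approximates the satellite coverage; adjust as needed.
air700 = air700.sel(lat=slice(82.5, -82.5))

weights = np.cos(np.deg2rad(air700["lat"]))    # area weight by cos(latitude)
daily_mean = air700.weighted(weights).mean(dim=("lat", "lon"))
monthly_mean = daily_mean.resample(time="1MS").mean()

# Anomalies against a 1994-2013 base, and a correlation against the UAH/RSS
# monthly series, would follow once those series are loaded.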

The Latest 12 Months

The latest 12-month running mean (pseudo-year "9999", highlighted in blue in the tables below) ranks anywhere from 2nd to 4th, depending on the data set. The following table ranks the top 10 warmest years for each surface data set, as well as a pseudo "year 9999" consisting of the latest available 12-month running mean of anomaly data, i.e. July 2016 to June 2017.
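
The pseudo-year is nothing exotic. As a small illustration, here is a sketch (assuming a hypothetical CSV of monthly anomalies with a date index and an "anomaly" column) of how it can be built and ranked alongside the calendar years:

import pandas as pd

# Sketch only: rank calendar-year means plus a pseudo "year 9999" made from
# the latest 12 months of anomalies (here July 2016 through June 2017).
anoms = pd.read_csv("hadcrut4_monthly.csv",      # hypothetical input file
                    index_col=0, parse_dates=True)["anomaly"]

annual = anoms.groupby(anoms.index.year).mean()  # calendar-year means
annual.loc[9999] = anoms.loc["2016-07":"2017-06"].mean()  # 12-month running mean

print(annual.sort_values(ascending=False).head(11).round(3))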

HadCRUT4            GISS                NCEI
Year   Anomaly      Year   Anomaly      Year   Anomaly
2016   +0.775       2016   +0.992       2016   +0.948
2015   +0.761       9999   +0.911       2015   +0.908
9999   +0.700       2015   +0.871       9999   +0.868
2014   +0.576       2014   +0.752       2014   +0.747
2010   +0.558       2010   +0.714       2010   +0.703
2005   +0.545       2005   +0.696       2013   +0.673
1998   +0.537       2007   +0.659       2005   +0.667
2013   +0.513       2013   +0.658       2009   +0.641
2003   +0.509       2009   +0.648       1998   +0.638
2009   +0.506       1998   +0.639       2012   +0.628
2006   +0.505       2012   +0.637       2003   +0.619

Similarly, for the satellite data sets…

UAH                 RSS v3.3            RSS v4.0
Year   Anomaly      Year   Anomaly      Year   Anomaly
2016   +0.510       2016   +0.573       2016   +0.778
1998   +0.484       1998   +0.550       9999   +0.616
9999   +0.354       2010   +0.474       1998   +0.611
2010   +0.333       9999   +0.411       2010   +0.555
2015   +0.265       2015   +0.382       2015   +0.513
2002   +0.217       2005   +0.335       2002   +0.422
2005   +0.199       2003   +0.320       2014   +0.411
2003   +0.186       2002   +0.315       2005   +0.400
2014   +0.176       2014   +0.273       2013   +0.394
2007   +0.160       2007   +0.252       2003   +0.385
2013   +0.134       2001   +0.247       2007   +0.333

Each month from January through June 2017 was cooler, in all 6 data sets, than the corresponding month in 2016. Therefore, July through December 2017 would have to be noticeably warmer than the corresponding months in 2016 to beat the 2016 annual values and make 2017 "the warmest year ever". "Never say never", but it's looking more difficult with each passing month.

The Graphs

The graph immediately below is a plot of recent NCEP/NCAR daily anomalies versus a 1994-2013 base. The second graph is a monthly version, going back to 1997. The trendlines are as follows…

  • Black – The longest line with a negative slope in the daily graph goes back to late May, 2015, as noted in the graph legend. On the monthly graph, it's June 2015. This line is slowly growing longer, but nothing notable yet; reaching back to 2005 or earlier would be a good start. (A sketch of how such a start date can be found follows this list.)
  • Green – This is the trendline from a local minimum in the slope around late 2004, early 2005. To even BEGIN to work on a “pause back to 2005”, the anomaly has to drop below the green line.
  • Pink – This is the trendline from a local minimum in the slope from mid-2001. Again, the anomaly needs to drop below this line to start working back to a pause to that date.
  • Red – The trendline back to a local minimum in the slope from late 1997. Again, the anomaly needs to drop below this line to start working back to a pause to that date.
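
For reference, the "longest line with a negative slope" can be found by brute force: scan candidate start months and keep the earliest one whose least-squares trend through the latest point is still flat or negative. A rough sketch only, not necessarily the exact method used to draw the graphs:

import numpy as np

def earliest_nonpositive_trend(anoms):
    """anoms: 1-D array of monthly (or daily) anomalies, oldest first.
    Return the index of the earliest start whose least-squares slope
    through the final point is <= 0, or None if no such start exists."""
    for start in range(len(anoms) - 1):
        y = np.asarray(anoms[start:], dtype=float)
        slope = np.polyfit(np.arange(len(y)), y, 1)[0]
        if slope <= 0:
            return start       # earliest qualifying start
    return None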

NCEP/NCAR Daily Anomalies:

[daily anomalies graph]

NCEP/NCAR Monthly Anomalies:

[monthly anomalies graph]

Miscellaneous Notes

At the time of posting, the 6 monthly data sets were available through June 2017. The NCEP/NCAR reanalysis data runs 2 days behind real-time. Therefore, real daily data from July 1st through July 29th is used, and July 30th and 31st are assumed to have the same anomaly as the 29th. For HadCRUT, GISS, and NCEI, the 1000 millibar data is used as a proxy. For RSS and UAH, subsets of the 700 millibar reanalysis are used, to match the latitude coverage provided by the satellites. In all cases the projection for a specific data set is obtained by the three steps below (a short code sketch follows the list):

* subtracting the previous month's NCEP/NCAR proxy anomaly value from this month's value (1000 mb or 700 mb, as appropriate)

* multiplying that difference by the slope() of the specific data set's anomalies (over the previous 12 months) regressed against the NCEP proxy

* adding the result to the previous month's value of the specific data set
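
Expressed as code, the three steps look roughly like this. This is a sketch only, not the actual spreadsheet: "obs" and "ncep" stand for hypothetical monthly anomaly series (the data set being projected, and its NCEP/NCAR proxy), with "ncep" running one month further than "obs".

import numpy as np
import pandas as pd

def project_next(obs: pd.Series, ncep: pd.Series) -> float:
    """Project the next monthly anomaly for obs from its NCEP proxy.
    ncep must contain one more month than obs (the month being projected)."""
    # Step 1: change in the NCEP proxy from last month to this month
    delta_ncep = ncep.iloc[-1] - ncep.iloc[-2]

    # Step 2: slope of the data set versus the NCEP proxy over the
    # previous 12 months (the same role as a spreadsheet SLOPE() call)
    x = ncep.iloc[-13:-1].to_numpy()
    y = obs.iloc[-12:].to_numpy()
    slope = np.polyfit(x, y, 1)[0]

    # Step 3: add the scaled change to the previous month's reported value
    return obs.iloc[-1] + slope * delta_ncep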

62 Comments
Gloateus
July 31, 2017 2:40 pm

One prediction liable to be borne out is that adjustments to the so-called "surface data" sets, i.e. packs of lies and flights of science fantasy, will keep cooling the past and warming the present.

Gloateus
Reply to  Gloateus
July 31, 2017 2:41 pm

The perpetrators of which need to spend time relaxing at Club Fed in orange pajamas.

ossqss
July 31, 2017 3:15 pm

What happened to Nino 3.4 recently?

Green Sand
Reply to  ossqss
July 31, 2017 3:25 pm

A classic ‘Slosh’ (technical term)
http://www.bom.gov.au/climate/enso/monitoring/nino3_4.png

July 31, 2017 3:22 pm

This is so accurate, good job. If you're interested in rising temperatures, I suggest you visit the IPCC site; they publish summaries where the temperature variations are recorded.

pat
July 31, 2017 3:35 pm

Bureau of Meteorology in Australia is being forced to respond to Jennifer Marohasy's allegations of temp data tampering. Behind paywall; hope some can access it. This article has been online for 9 hours and the story is being reported on at least one commercial radio station, yet no other MSM is carrying the news as yet!
1 Aug: Australian: Bureau of Meteorology opens cold case on temperature data
The Bureau of Meteorology has ordered a full review of temperature recording equipment and procedures after the peak weather agency was caught tampering with cold winter temperature logs in at least two locations. The bureau has admitted that a problem with recording very low temperatures is more widespread than Goulburn and the Snowy Mountains but …
http://www.theaustralian.com.au/national-affairs/climate/bureau-of-meteorology-opens-cold-case-on-temperature-data/news-story/c3bac520af2e81fe05d106290028b783

Steve
Reply to  pat
July 31, 2017 4:11 pm

You know this is 3 years ago, don't you? The story was amped up by this science-denying newspaper owned by Murdoch. The then Prime Minister, Tony Abbott, another science denier, got his business adviser, not a scientist, to go and attack the Bureau of Meteorology. The result? There was no investigation, as there was no evidence to open an enquiry, and the data gathered is real; 2017 is heading towards the hottest year on record again in Australia. Nice try to muddy the waters …. result …. total failure!

Gloateus
Reply to  Steve
July 31, 2017 6:09 pm

Steve,
The only “science d@niers” are the consensus Team.
Mann, Jones, Hansen, Schmidt, Trenberth and their so far unindicted co-conspirators rank right up there with eugenics proponents as the leading enemies of humanity among alleged "scientists", which of course they aren't. To be a scientist, you have to practice the scientific method.

Grant
Reply to  Steve
July 31, 2017 6:22 pm

There's a reason, not the weather, why you're headed for the hottest year eva!

nankerphelge
Reply to  Steve
August 1, 2017 4:02 am

The BOM are in the news again, Steve, and their day will come. I would have bet my left ?? on two bodies that would never distort the truth, i.e. the CSIRO and the BOM.
I am incredibly disheartened to find lots of evidence that suggests they are no better than that East Anglia mob of rogues. Remember them??
The BOM have written out hottest days on record such as Bourke and now they are trying to fiddle Goulburn’s coldest day on record, however they have been sprung again.
Surely you can’t ignore the stench of a rotting carcass. Or do you have a scientifically or statistically based answer for these actions?? I will listen.

Gary
Reply to  Steve
August 1, 2017 11:39 am

Steve: No, Pat is referring to a report on July 2, 2017, not the one three years ago. The tampering 3 years ago is referenced in this article, but the new tampering is with two sites just reported.
http://www.newsmax.com/TheWire/australia-climate-data-tampering/2017/08/01/id/805095/

Nick Stokes
Reply to  pat
August 1, 2017 4:30 am

“yet no other MSM is carrying the news as yet”
Of course not. Goulburn is a minor station, not used by any major indices. There was some confusion about whether the minimum on a record cold morning was -10.0 or -10.4°C. Only in Lloyd/Marohasy world would that be any kind of news.

bit chilly
Reply to  Nick Stokes
August 2, 2017 8:05 am

thankfully comments cannot be removed on this site. that one might come back to bite nick 😉

Bruce
Reply to  Nick Stokes
August 4, 2017 7:04 am

4/10 of a degree is how many years worth of so-called warming? 10? 20?

pat
July 31, 2017 3:37 pm

1 Aug: Australian Editorial: Bureau clouds weather debate
In a time of climate change, it's not surprising there is more interest in, and scrutiny of, the Bureau of Meteorology. A confident, outward-looking agency would seize this as an opportunity. Instead, as we report today, the bureau still struggles when called on to give a transparent account of its work. On July 2 in Goulburn, NSW, observant local Lance Pidgeon noticed the temperature on the bureau website had dropped to minus 10.4C. Next it read minus 10C, then the reading disappeared altogether. The original low reappeared after questions were put to the bureau.
One explanation from the agency is that results below minus 10C are flagged as possible anomalies and checked before they are restored. Yet the same system applies at the alpine Thredbo top station where temperatures as low as minus 14.7C have been registered. Seemingly at odds with its first explanation, the bureau also says machines at several cold weather stations have failed to record below minus 10C and will be replaced. In any event, the bureau insists, these failures will not skew the national weather records because the Goulburn and Thredbo stations do not feed into this official dataset. However, results from Goulburn are used to adjust readings from Canberra, which are included in the national dataset…
That adjustment process, known as homogenisation, has got the bureau in trouble in the past…READ ON
http://www.theaustralian.com.au/opinion/editorials/bureau-clouds-weather-debate/news-story/defe9d457e78517992d7c90b1d2275fc

pat
July 31, 2017 3:41 pm

unfortunately, Newman's piece does not include the breaking BoM news today:
1 Aug: Australian: Maurice Newman: Media's silence of the climate scams
How lucky to have gatekeepers such as the ABC, SBS and Fairfax Media to protect us from the likes of Climate Depot founder Marc Morano, recently here promoting his documentary Climate Hustle?
Thanks to mainstream media censorship, Morano's groundbreaking film, which promised a heretical fact-finding journey through the propaganda-laced world of climate change, was denied publicity…
Australian scientist Jennifer Marohasy recently outed the Bureau of Meteorology for limiting the lowest temperature that an individual weather station can record. If this is accepted practice, no wonder American physicist Charles Anderson declares "it is now perfectly clear that there are no reliable worldwide temperature records"…READ ALL
http://www.theaustralian.com.au/opinion/medias-silence-of-the-climate-scams/news-story/b124752820c94822915f94917e6566b2
however, you have to “admire” BoM for this coming out today!
1 Aug: Townsville Bulletin: Helter swelter, it's heating up
by ANDREW BACKHOUSE
Bureau of Meteorology senior climatologist Catherine Ganter said for the next three months there was a greater than 80 per cent chance of warmer days and nights compared to normal.
"From what we can see there are much warmer than average sea surface temperatures all the way across the east coast and that's partly behind the warmer outlook," she said.
Towns along the coast of Queensland will be most affected by the warmer sea temperatures.
The forecast could mean warmer nights in particular.
"It tends to affect minimum temperatures more," Ms Ganter said…
Ms Ganter said the chances of wetter and drier conditions in August and October were about equal…
The average maximum temperature for July is likely to beat the previous record set in 1975 to be 2C above average.
Official readings began in 1910.
And the unseasonable warm conditions will continue.
"In Townsville it's likely to be a degree warmer than normal," Ms Ganter said.
http://www.townsvillebulletin.com.au/news/helter-swelter-its-heating-up/news-story/9aa228c912323d5578d0cad1e830e44e

July 31, 2017 4:40 pm

"… you have to "admire" BoM …" Not many of us do that. Checking on all this fiddling gets tiresome.
""In Townsville it's likely to be a degree warmer than normal," Ms Ganter said"
Yep. It's what used to be called "spring".

Gary Pearse
July 31, 2017 6:04 pm

Walter, a lot of work. I note that your temperatures turned out in most sets to have been too high. I would have counseled you to trim a bit off your temps by viewing the global temperature maps.
I have for many months posted on the large cold Blobs in the temperate zone which have replaced hot blobs that had persisted for several years. Moreover, over several months, instead of cold water upwelling much in the eastern Pacific equatorial zone, it was slanting down (and ‘up’) from the cold patches north and south of the equator. Also a wide cold band in the equatorial Atlantic and no warm pool in the west Pacific. The cool temperatures are now less a factor of ENSO. This decoupling is querying up your forecasts and the formulae of several other analysts that use ENSO data to calculate.
Personally, I think this unusual development presages worrying colder weather for a spell into the future. In the NH it has been a ‘year without (much) summer’ (personal experience with Canada and wife with Europe and Russia). I’m predicting a very cold NH winter this year, cool tropics (Oz has been so cold that the Oz met office has been caught clipping degrees off bitterly cold areas). There is too much cold water around to forecast a warm season ahead.

Gary Pearse
Reply to  Gary Pearse
July 31, 2017 6:13 pm

Oops link:
http://weather.unisys.com/surface/sst_anom.gif
You may have to click on it to get today’s map.

James at 48
Reply to  Gary Pearse
August 1, 2017 9:27 am

Fairly serious sea ice protrusion into the Indian Ocean this year. I consider north of 60 to be the Indian not Southern, YMMV.

angech
Reply to  Gary Pearse
July 31, 2017 8:36 pm

Not so. Walter averages a bit too low on the satellites and a bit too high on the stations in general. Perhaps he builds their biases in unconsciously.

angech
Reply to  angech
July 31, 2017 8:45 pm

Amazingly, Australia is claimed to have had its hottest July in a hundred years.
Might mean there was a hotter temp 101 years ago.
This BOM adjustment stuff is big news.
The senior head of department has written a letter which makes a wild claim that a number of recording devices have mysteriously stopped working at exactly minus 10 degrees.
This needs a post, if someone can bring it to Anthony’s attention.
I presume they use the same thermometers all over Australia.
The bureau itself sent a text advising that they set 10 degree cutoffs for cold data in some sites.
Where are ZEke and MOsher to explain……
Firstly setting limits on supposedly tamper proof recording equipment.
Would this not invalidate their use.
Secondly if say 6 out of 800 approx thermometers mysteriously stop at -10 degrees exactly …
What does this say about instrument reliability in general and the instruments themselves.
This story could be big.
It should lead to the head of department offering an apology for misleading and a slight drop in Aussie temps.
Cheers.

bitchilly
Reply to  angech
August 2, 2017 8:10 am

angech, does anyone know if they do similar capping on the upper limits of temperatures recorded ? i strongly suspect not.

Nick Stokes
Reply to  Gary Pearse
July 31, 2017 8:57 pm

“Oz has been so cold”
It hasn't been cold at all. A few frosty mornings in the SE. The only places below average are in the SE – light green: [map image]
Here is just July. Even warmer, especially in the north.
http://www.bom.gov.au/web03/ncc/www/awap/temperature/meananom/month/colour/latest.gif

Matthew Bruha
Reply to  Nick Stokes
August 1, 2017 1:37 am

Except, of course, for the data which may or may not be included because BOM may or may not have deemed it reliable because their equipment may or may not be faulty….

richard verney
Reply to  Nick Stokes
August 1, 2017 2:10 am

It ought to be compared with the late 1800s. But of course, BOM has decided to exclude that warm period.

Richard Barraclough
Reply to  Gary Pearse
August 1, 2017 6:42 am

The summer so far in the UK has been on the warm side, and with August still to come, it is shaping up to be within the warmest 10 years out of the last 100.
Meanwhile, most of southern Europe has been having a hot dry spell for several weeks, and Spain had its highest recorded temperature (possibly – depending on the validity of a couple of slightly higher claims in earlier years), of 46.9 at Cordoba on 14th July.
Neither a “Year without summer” nor “worryingly cold”. Perhaps your wife spent the hot afternoons in air-conditioned restaurants?

bitchilly
Reply to  Richard Barraclough
August 2, 2017 8:12 am

i can assure you the scottish summer has not been on the warm side richard . last time i looked scotland was still part of the uk .

Crispin in Waterloo but really in Beijing
July 31, 2017 10:24 pm

"To even BEGIN to work on a "pause back to 2005", the anomaly has to drop below the green line."
While I accept this in principle I have a quibble as to the claim there is no 'pause'. To qualify as an 'increase' the difference has to be statistically significant. It is not reasonable to have numbers and a calculation with an uncertainty of say, 0.2 degrees and then claim to have a rise or fall of a lower value than that. It just cannot be justified if the measurement 'error' (uncertainty) is larger than the 'effect'.
No amount of statistical BS can hide this. Making thousands of different measurements with thousands of instruments does not qualify as a reduction in the uncertainty. This point is usually not grasped by the novice and no one promoting alarm is bothering to explain what constitutes a valid vs invalid claim.
The tiny differences between most years 2000-2016 are not rankable in the normal fashion as many of them are indistinguishable by any standard approach.

RW
Reply to  Crispin in Waterloo but really in Beijing
July 31, 2017 11:51 pm

Mostly agreed. But you want a distinction between statistically significant and statistically meaningful. Increasing the temporal sampling resolution would necessarily influence the former but not the latter. The latter is reflected in the magnitude of the slope, and the former can qualify the latter which I believe is the crux of your point.
The higher sampling rate increases N, reduces the standard error (SE) of the regression coefficient (i.e. the slope, 'b'), narrows the theoretical sampling distribution of estimated regression coefficients from samples of size N, increases the t-value for the t-test (against zero or 'null') of our sample regression coefficient [t=(b/SE)], increases the degrees of freedom (df) associated with the theoretical distribution of t-values we use to approximate the null sampling distribution of b estimates (df=N-2), narrows that distribution, and reduces the deviation from zero required to exceed some conventionally very large proportion of probable t-values (usually 0.475 on either side of zero).
Aside from the appropriateness or inappropriateness of using the t-distribution, there are a pile of linear regression assumptions that have to be met in order for the b estimate and SE estimate to be valid and, therefore, the aforementioned statistical inference to be valid.
Such as independent residuals. So, the autocorrelation probably needs to be removed as well, because one measurement will depend on the preceding ones. To do this, one adds additional regressors (predictors) that are themselves the measurements but lagged by 1, 2, 3, etc. time points.
The process first outlined above is used by default. Checking the appropriateness of the assumptions is rarely ever reported, if even done in the first place, unfortunately.
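
For concreteness, here is a bare-bones sketch of the slope test being described (illustrative only; adding a single lagged term, as below, is one crude way to absorb autocorrelation rather than a full treatment):

import numpy as np

def trend_t_stat(y, add_lag1=False):
    """OLS trend slope, its t-statistic and degrees of freedom for a series y.
    With add_lag1=True, the previous value is included as an extra regressor."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y), dtype=float)
    if add_lag1:
        X = np.column_stack([np.ones(len(y) - 1), t[1:], y[:-1]])
        y = y[1:]
    else:
        X = np.column_stack([np.ones(len(y)), t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]                      # N minus fitted parameters
    cov = (resid @ resid / dof) * np.linalg.inv(X.T @ X)
    slope, se = beta[1], np.sqrt(cov[1, 1])        # time is the second column
    return slope, slope / se, dof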

Nick Stokes
Reply to  Crispin in Waterloo but really in Beijing
August 1, 2017 12:11 am

“No amount of statistical BS can hide this. Making thousands of different measurements with thousands of instruments does not qualify as a reduction in the uncertainty. “
As so often, leaving out what it is the uncertainty of. It is standard error of a mean, not the measurements, and in this case, the trend, which is a weighted mean. The standard error (uncertainty) of a mean is sample error, not measurement error. What might have happened if you had chosen a different sample. And that certainly does reduce with larger samples. That is why polls, drug trials etc spend money to get the largest samples they can afford.
Basic stats.

richard verney
Reply to  Nick Stokes
August 1, 2017 2:16 am

What might have happened if you had chosen a different sample.

And therein lies one of the significant problems with the so called time series.
The sample in 1860 is different to that in 1880, which is different to that in 1900, which is different to that in 1920, which is different to that in 1940, which is different to that in 1960, which is different to that in 1980, which is different to that in 2000, which is different to that today.
Throughout the time series almost every year involves a different sample, such that one cannot compare one year with another, or one year with the so called base period.
There is no meaningful anomaly because of the different sampling.

angech
Reply to  Nick Stokes
August 1, 2017 2:19 am

Nick Stokes July 31, 2017 at 8:57 pm
"Oz has been so cold"
It hasn't been cold at all. A few frosty mornings in the SE.
” temperature records have fallen as many Australians woke to freezing weather
Snow, hail and rain has fallen across parts of Victoria, South Australia and Tasmania, as a band of cloud followed by a pocket of cold, dry air crosses the Tasman.
As forecast, Sydneysiders shivered through the city's coldest pair of mornings since 2008, with minimum temperatures plummeting to 5.4 degrees today and 5.8 degrees yesterday.
Goulburn, the coldest city in New South Wales' south-west, reached -10.4 degrees today.
The negative temperature was 12 degrees below Goulburn's long-term morning average of 1.6 degrees. Canberra also experienced its coldest pair of weekend mornings in two decades. Temperatures in the capital plummeted to -8.2 degrees today, following a biting reading of -8.7 degrees yesterday.
It was the city's coldest pair of mornings since 1971.
The last time Canberra experienced a weekend with mornings this cold, John Howard was in his first term as Prime Minister, Stuart Diver had just been rescued from a landslide at Thredbo and the film The Castle had just been released, Weatherzone meteorologist Ben Domensino said.
Minimum temperatures will climb closer to the July average of zero degrees from tomorrow, as cloud and wind increase over the ACT early next week.
In Melbourne, city temperatures dipped to 1 degree today, on par with yesterday morning.
Biting record temperatures hit other parts of Victoria, with Mildura dropping to -2.1 degrees today, making it the coldest July morning since July 6, 2012.
Shepparton shivered through its coldest morning since July 7, 2012, plummeting to -3.9 degrees
Australia to shiver through the weekend
Tasmania had another cold start today, with temperatures dropping to 1 degree in Hobart, -6 degrees at Butler's Gorge and -1 degree in Launceston.
Frost and ice is expected across much of Tasmania.
The snap-freeze is on the move, with a shift in weather patterns expected this week.
"We have a low pressure system and a cold front which has come across Western Australia and it's coming into South Australia, bringing wind and showers in the south today," a Weatherzone spokeswoman said. "As the low weakens, it is pulling some cold air. By about Monday, we're looking at showers across Victoria and also southern parts of NSW."
Moving into Tuesday, the low is expected to move across Victoria, and towards the New South Wales Coast. "Most of the action this week will be throughout southern parts of Australia. A lot of cold air is wrapped up in these lows so hopefully we should see some snow this week."
I think Nick lives near me in central Victoria so would be well aware of the severe, repeat severe cold snaps we have had throughout July.
Shame.

Crispin in Waterloo but really in Beijing
Reply to  Nick Stokes
August 1, 2017 2:58 am

Nick
While I agree that the sample error is additional, it is not reasonable to dismiss the measurement uncertainty which seems to be near-universal when discussing temperatures. People are taking the temperature readings as gospel, literally. The uncertainties of each measurement are not being propagated through to the final answer. Adding a huge number of additional samples does not make the instruments more precise or more accurate. It is simply not true that measuring 10,000 times as many points (which reduces the sampling error) will reduce the uncertainty of all the individual measurements. This is basic stats too.
What I see repeatedly is people assuming the measurements are as utterly precise as their multi-decimal place calculator mantissa cares to display. There is (apparently) confusion about the difference between taking a single instrument to 1000 sites to make measurements at the same time, versus taking readings on 1000 different instruments, one at each site, at the same time. Further, taking 1 measurement with each instrument is not the same as taking 100 measurements at that moment on each of the 1000 instruments, one at each location.
Daily temperature measurements are the lowest quality of all possibilities: one measurement recorded from each instrument, a different one at each site.
Additional sampling at additional sites is a good idea of course, but each sample is of a 'different thing' so it cannot be treated the same as having used, for example, 10 instruments read once at each site, or 10 readings on one instrument at each site, to get a better fix on where the mean lies for each sampled location. Even estimating more exactly where the mean lies does not reduce the uncertainty about the measurements themselves which is an inherent property of the instruments.
Finally, measuring the temperature at 1000 sites using 1000 instruments that are plus-minus 0.02 degrees C does not mean the final answer, the average temperature, is plus-minus 0.02. That's not how it works. There is a formula for error propagation.
The claim that 2015 was 0.001 degrees warmer than 2014 is not supported (or denied) by the evidence. No one knows because that is a value far smaller than the propagated uncertainties.

Nick Stokes
Reply to  Nick Stokes
August 1, 2017 3:17 am

Richard V,
“Throughout the time series almost every year involves a different sample”
Yes. That is why anomalies are essential. Comparing averages of different samples is possible, provided that the expected values of each is the same (or nearly). If you toss a coin 100 times, you get usually from 40-60 heads. That is still true if you toss 100 different coins, provided they are fair (homogeneous).
An analogy is the DJIA. There is a series that has been tracked for many years. Not only do the companies change, but their weighting is constantly changing. This requires adjustments with similar effect to anomalies. Not many people think the DJIA is thus invalidated.

Nick Stokes
Reply to  Nick Stokes
August 1, 2017 3:23 am

angech,
“I think Nick lives near me in central Victoria”
Well, I live in the big city. The daily numbers for July are here. Average max 14.5°, min 6.6. Long term averages (here) are 13.5 and 6.0, so it was warm on both counts. But there were some cold mornings.

Nick Stokes
Reply to  Nick Stokes
August 1, 2017 3:45 am

Crispin,
"Finally, measuring the temperature at 1000 sites using 1000 instruments that are plus-minus 0.02 degrees C does not mean the final answer, the average temperature, is plus-minus 0.02. That's not how it works. There is a formula for error propagation."
Yes, there is. Deviations cancel, and the contribution to error of the mean is reduced. Broadly, variances add, so the sum of N is N*each. But then you scale each down by N, reducing the variance of the contribution to mean by N^2. So with error 0.02 and 1000 in sample, the contribution of that source of error to the mean is .02/sqrt(1000) ~ 0.00063. Correlations may increase that, but it’s pretty small. Sampling error is much larger.
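
A quick numerical illustration of that figure, assuming independent, zero-mean instrument errors with a standard deviation of 0.02 (a sketch, not part of any published calculation):

import numpy as np

# Simulate many "networks" of 1000 stations, each station reading with an
# independent error of sd 0.02, and look at the spread of the network means.
rng = np.random.default_rng(0)
sigma, n_stations, n_trials = 0.02, 1000, 5000

errors = rng.normal(0.0, sigma, size=(n_trials, n_stations))
mean_errors = errors.mean(axis=1)             # error of each network-wide mean

print(round(mean_errors.std(), 5))            # ~0.00063
print(round(sigma / np.sqrt(n_stations), 5))  # 0.00063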

lewispbuckingham
Reply to  Nick Stokes
August 1, 2017 3:49 am

Driving down the local rural roads around Goulburn in cooler months I have seen Jack Frost with my own eyes.
At night the ice crystals he brings glisten like diamonds in a silver sky along vast tunnels of light picked out by the car headlights.
Just think, the local BOM has entered this fantastic fairy land with fantasy data.
Who would have believed that?

DWR54
Reply to  Nick Stokes
August 1, 2017 6:04 am

angech,

I think Nick lives near me in central Victoria so would be well aware of the severe, repeat severe cold snaps we have had throughout July.

I think he made it clear enough that he was responding to a comment claiming that ‘all’ of Australia was cold in July. The BOM July map above does indeed show below average temperatures in parts of Victoria; but it also shows above average temperatures in many other regions, including across much of the Northern Territory.
Too early to specify lower troposphere temperatures above Australia, but UAH is suggesting there was a considerable month-on-month temperature rise across the Southern Hemisphere in general; the anomaly is up from 0.09C in June to 0.27C in July: http://www.drroyspencer.com/2017/08/uah-global-temperature-update-for-july-2017-0-28-deg-c/

Crispin in Waterloo but really in Beijing
Reply to  Nick Stokes
August 1, 2017 5:27 pm

Nick, you persist in your avoiding the measurement uncertainty.
Emphasis added:
"Deviations cancel, and the contribution to error of the mean is reduced. Broadly, variances add, so the sum of N is N*each. But then you scale each down by N, reducing the variance of the contribution to mean by N^2. So with error 0.02 and 1000 in sample, the contribution of that source of error to the mean is .02/sqrt(1000) ~ 0.00063."
You are referring to determining a better estimate of the true position of the mean, which is the middle of the range of uncertainty, but the measurement uncertainty remains unaffected. You get no cookie for that contribution. If you know the measurement uncertainty is ±2 degrees and that the middle of the range, the Mean, is 16.6616 degrees, you only know that the answer is between 14.6616 and 18.6616 degrees. A critical element of this is that we are not measuring the temperature of one place a large number of times, we are measuring the temperature in a large number of places once each. There is no expectation that the readings will be the same in each location. Each measurement has an uncertainty, and together the uncertainty is larger than any one measurement's. The average cannot be better known than the contributing components.
The measurement uncertainties are irreducible when performing computations. I can suggest this very brief document for instruction:
http://ipl.physics.harvard.edu/wp-uploads/2013/03/PS3_Error_Propagation_sp13.pdf
An average is a sum of values divided by a number known exactly, so is the same as propagating the uncertainties in the usual manner:
SQRT(uncertainty1^2+uncertainty2^2+uncertainty3^2+…uncertaintyN^2)
"So with error 0.02 and 1000 in sample", assuming all the instruments are identical, the propagated measurement uncertainty is:
SQRT(0.02^2 * 1000) for an uncertainty of ±0.63 degrees. In theory the measurement errors could be as large as 20 degrees (0.02*1000) but that is extremely unlikely. It is just as unlikely that the true average lies exactly on the mean, however well its position is known.
The true average temperature lies somewhere within a span of 1.26 degrees (68% confidence) and the center of that span is known to ±0.00063 degrees as you demonstrated. There is a 32% chance that the true average of the 1000 measurements lies outside the 1.26 degree range.
The only way to reduce the final value of the propagated uncertainty is to make more accurate and precise measurements. If they were ±0.004 which is technically possible, the range is reduced to ±0.127*2 = 0.253. Obviously when averaging 10's of thousands of measurements that are each ±0.2 degrees, the uncertainty of the final value is large. Climate scientists have been marketing the median as the 'average temperature known with great precision' by pretending that each measurement was perfect which is not only untrue, but impossible.

Nick Stokes
Reply to  Nick Stokes
August 1, 2017 8:54 pm

Crispin,
"If you know the measurement uncertainty is ±2 degrees and that the middle of the range, the Mean, is 16.6616 degrees, you only know that the answer is between 14.6616 and 18.6616 degrees."
No. These numbers, if based on sd, mean that for one reading of 16±2, there is about a 2/3 chance that the reading lies between 14 and 18 (and 1/6 that it is >18). But suppose you take the mean of 100 such numbers, in different locations (same EV). What would have to happen for the mean to be >18 (if the range is also 16±2)? Basically, almost all 100 errors would have to be positive, averaging +2. That is not a 1/6 chance; in fact it is very unlikely.
The adding of variance, and the consequent discounting of the contribution to the mean, accounts for this cancellation.

bitchilly
Reply to  Nick Stokes
August 2, 2017 8:13 am

the fact anomalies are used as opposed to outright temperatures is all people need to realise the level of obfuscation going on nick.

Crispin in Waterloo but really in Beijing
Reply to  Nick Stokes
August 2, 2017 4:58 pm

Nick,
You are still evading the point:
>>"If you know the measurement uncertainty is ±2 degrees and that the middle of the range, the Mean, is 16.6616 degrees, you only know that the answer is between 14.6616 and 18.6616 degrees."
>No. These numbers, if based on sd, mean that for one reading of 16±2, there is about a 2/3 chance that the reading lies between 14 and 18 (and 1/6 that it is >18).
The point is they are not based on the SD. That is the uncertainty of the instrument's readings. Look in the instructions. There is an uncertainty value provided by the manufacturer.
There is no 'measurement' without an uncertainty attached. Nick, you are again completely missing (or evading) the point. A measurement taken with a properly calibrated 4-wire RTD usually has an uncertainty of 0.02 degrees, obviously depending somewhat on the capability of the reading instrument. If it is a 6.5 digit device then the ±0.02 claim is correct. The reading precision is 0.01 but the uncertainty is 0.02, and uncalibrated after a year, it is about ±0.06 because they drift randomly.
A measurement error is not susceptible to diminution.
>But suppose you take the mean of 100 such numbers, in different locations (same EV). What would have to happen for the mean to be >18 (if the range is also 16±2)? Basically, almost all 100 errors would have to be positive, averaging +2. That is not a 1/6 chance; in fact it is very unlikely.
That comment does not address the measurement uncertainty at all. You are just repeating things about the calculation, with greater certainty, of the position of the center of the range of uncertainty.
The magnitude of the uncertainty is an inherent property of the measurement apparatus. Making 10 million measurements will not make any one of them less uncertain. That uncertainty is an inherent property that propagates through all calculations using those measurements. It is that uncertainty which provides us the range limits within which the actual answer probably is to be found (with 68% confidence).
For those who are appalled by the revelation that regional or global temperatures cannot possibly be calculated to a precision of 0.001 degrees using inputs that are uncertain by ±0.02 or ±0.5 degrees, I have a way to explain the difference between using a firm count and a measurement. This will help you understand why Nick's avoidance is essential for those wanting to support the meme that the global average temperature is known with any precision.
+++++
Cafeteria Uncertainty
Consider a school with students in 4 rooms and a large number of them in the largest, the cafeteria. What is the average number of students per room?
Room 1 = 12 students
Room 2 = 10 students
Room 3 = 20 students
Room 4 = 15 students
Cafeteria = There are students coming in and out and many of them are in motion so it is not possible to count them exactly. You can do the next best thing which is to count as accurately as you can making various additions and subtractions, finally arriving at a best estimate of 255 ±8. That is the best you are able to do with the observers and time and methods you have (together, "the apparatus").
The number in each of the 4 small rooms is known exactly, so there is no uncertainty about the data. There is no "±" involved as students do not come in fractions.
The average number of students in a room is therefore:
(12+10+20+15+(255 ±8)) / 5 = 62.4 ±Some number.
It is not "62.4" because there is uncertainty about exactly how many students are in the cafeteria. The true answer is literally not known because of a 'measurement uncertainty'. The real answer is probably between 61 and 64. We don't really know.
Conducting this exercise in 1000 similar schools will not reduce the magnitude of the uncertainty. Calculating the global average temperature is like trying to count the number of students in cafeterias only. All the uncertainties from all the estimates have to be accommodated in the final report. They do not 'average out' or 'diminish'. They are hard-wired into the data and retained by the calculations.
The global temperature numbers you have been seeing pretend that all the measurements are 'counted students' and known absolutely, and that the average contains no "cafeteria uncertainty". This is simply not true.
We can be very certain about the number of stations reporting because they can be counted. But every single measurement reported by every single station has an uncertainty attached to it. These uncertainties are compounded based on the formula given a few posts up-list. The linked Harvard paper explains how to propagate these uncertainties through different types of calculations. They never get smaller. With each calculation, the uncertainty as to the true value of the answer grows.

Dougmanxx
August 1, 2017 5:00 am

Temperature “anomaly” is meaningless without the “average temperature” used to calculate the “anomaly”. Having that is the only way you can compare one data set versus another. Since they keep changing the data, you end up in situations like we have now, where the “current warmest year ever” has a lower “average temperature” than years in the past.

Richard Barraclough
Reply to  Dougmanxx
August 1, 2017 6:50 am

I think it should be the other way round, if your aim is to assess whether it has been unusually hot or cold.
If I told you the average temperature for July in my local town was 18 degrees, it wouldn’t mean all that much. If I told you it was 2 degrees above the average for the past 30, or 100, years, or whatever, you’d know that it was unusually warm.
The anomaly is a more useful figure to give you that comparison at a glance.

Dougmanxx
Reply to  Richard Barraclough
August 2, 2017 4:21 am

You misunderstand. Let's say you tell me the "anomaly" for your town in July 1934 was 2 degrees above average. But due to "improvements made to the data" you discover 80 years later that the "anomaly" for that same July is now amazingly 1 degree BELOW "average", for whatever arbitrary portion of time you've chosen. Let's say that you mistakenly also gave me that 18 degree average temperature. Happily we can grab the current "data set", and see that the "average temperature" for your town in July 1934 is now only 15 degrees! Wow! And now we can realize that today's "hottest ever July" in your town is actually 1 degree cooler than it was in July 1934, even though the reported "anomaly" is "the warmest EVAH". I've seen this in the data. It is a complete sham.

Crispin in Waterloo but really in Beijing
Reply to  Walter Dnes
August 2, 2017 5:06 pm

Walter, what’s the uncertainty for those numbers? Was it +0.28 or +0.280?
What is the measurement uncertainty for satellite temperatures? Do they admit having one at all?
Maybe your prediction is within their uncertainty envelope.

Richard Barraclough
August 1, 2017 6:59 am

Yes, Walter. Much closer this time!!
And no dramatic reversal of any trends to get excited about. It looks as though 2017 is going to be rather cooler than both 1998 and 2016, and will be fighting 2010 for the bronze medal.

Solomon Green
August 1, 2017 7:10 am

Nick Stokes
"As so often, leaving out what it is the uncertainty of. It is standard error of a mean, not the measurements, and in this case, the trend, which is a weighted mean. The standard error (uncertainty) of a mean is sample error, not measurement error. What might have happened if you had chosen a different sample. And that certainly does reduce with larger samples. That is why polls, drug trials etc spend money to get the largest samples they can afford."
Mr. Stokes' comment as regards larger samples is accurate, but only when the samples are chosen at random and the population being sampled has not changed between samples. Sadly, there is reason to believe that neither of these criteria is present in adjusted temperature records.

DWR54
Reply to  Solomon Green
August 1, 2017 10:03 am

Solomon Green

Mr. Stokes' comment as regards larger samples is accurate but only when the samples are chosen at random and the population being sampled has not changed between samples.

Are you saying that opinion pollsters poll exactly the same people every time before an election? Clearly that’s not the case. The variation between temperature stations is bound to be far lower over time than the variation between punters interviewed by election pollsters.
What matters in both cases is:
1) sufficient sample size, and
2) reasonable geographic (or demographic, in the case of polls) weighting.

Solomon Green
Reply to  DWR54
August 1, 2017 11:17 am

DWR54
No. I am not saying that.
1). Pollsters should poll from the same population. There is no point in polling from different populations.
2) They should have strict rules as to what proportions to sample but those who accord with those rules should be selected at random. The rules should be designed to ensure that the distribution of the samples is roughly proportionate to the distribution of the population.
I suspect that we are probably on the same wavelength, but you do not make it clear that the samples must always be taken from the same population.
If, for example, you are referring to opinion polls, the mean of a sample opinion taken on the first day of the month can be radically different from that taken from a similar (or even identical) sample taken on the eighth day of the same month.
Hence to combine the two samples in order to obtain a larger sample is nonsensical. Similar logic has to be applied to all sampling.

Nick Stokes
Reply to  DWR54
August 1, 2017 1:52 pm

Solomon Green
"1). Pollsters should poll from the same population."
Yes, but they don't poll the same people.
"2) They should have strict rules as to what proportions to sample but those who accord with those rules should be selected at random."
That is the issue of homogeneity (of population). Ideally the proportions should be exact, but weighting can be used to correct if they aren't. As long as you know.
"Hence to combine the two samples in order to obtain a larger sample is nonsensical."
It's a matter of degree and deciding what you are looking for. Polls are anyway taken over several days which are averaged. You can average polls over a month or a year, as long as you're clear that it is a year average that you are looking for. Then you have to work out how you sampled in time. That's an integration issue.

bitchilly
Reply to  DWR54
August 2, 2017 8:18 am

i think recent results of opinion polls around the globe tend to support solomons position. the poll results are junk, much like the temperature “data”.
see polls on trump, brexit and the recent uk election as examples.

Reply to  DWR54
August 3, 2017 4:43 am

Greetings Nick & Solomon,
Solomon Green: "1). Pollsters should poll from the same population."
Nick Stokes: "Yes, but they don't poll the same people."
The main problem that pollsters have real trouble with is identifying the correct sub population to sample from in the first place, namely the actual voters who are motivated enough to show up and vote on election day (or by early-voting mail-in ballot), which is different than the population as a whole. It would be easy if everybody voted. But determining who all the motivated sub populations are, based on the current issues, is difficult and fluid, and little factors like the actual weather on election day can change the percentages of those sub populations that actually show up and vote.
The second problem that pollsters have is actually reaching the interested sub populations. The traditional method is by hard-wired telephone, which is increasingly unpopular today. I know many people that only have cell phones, and my hard-wired line is throttled via caller-id whitelisted call screening, so most calls go directly to an answering machine.
And as a resident of Chicagoland, I can tell you with absolute certainty that it is impossible, by any method, to reach the many thousands of the deceased voters that participate in elections every year. Why are they not represented in the polls?

August 1, 2017 8:26 am

Anomalies in thousandths of a degree C. ??
Predicting a month’s anomaly when it is 90% over,
and still getting it wrong ??
This is brilliant climate science satire.
I can barely stop laughing at the “precision”.
This article completely refutes the global warmunists.

Solomon Green
August 2, 2017 3:35 am

Nick Stokes,
"'Hence to combine the two samples in order to obtain a larger sample is nonsensical.'
It's a matter of degree and deciding what you are looking for. Polls are anyway taken over several days which are averaged. You can average polls over a month or a year, as long as you're clear that it is a year average that you are looking for. Then you have to work out how you sampled in time. That's an integration issue."
I agree about polls in general but DWR54 was writing about opinion polls. Opinions are fickle, political opinions in particular.
“Polls are anyway taken over several days which are averaged” and that explains why opinion polls have failed spectacularly in many countries over the last few years. The other main reason is that samples have not been properly constructed.

August 2, 2017 11:23 pm

Why use linear projections? Everything about the solar system is periodic. Is Fourier analysis used to estimate future temperatures in climate science? Just wondering. My back-of-the-envelope analysis of temperature trends combines probability estimates based on a sparse data-set and the rate of change of temperatures to predict a likely range in which a future temperature will lie in the near term. All that can be hoped for is to narrow likely range estimates with more data and better analyses. See http://www.uh.edu/nsm/earth-atmospheric/people/faculty/tom-bjorklund/ for a working paper, which is a few months out-of-date.
A lot of weather events are cited as evidence of long-term climate change. What’s up with that? Of what value are weather observations in long term predictions?