Guest Post By Walter Dnes
In continuation of my temperature anomaly projections, here are my March projections, along with last month's projections for February, so we can see how well they fared.
| Data Set | Projected (°C) | Actual (°C) | Delta (°C) |
|---|---|---|---|
| HadCRUT4 2017/02 | +0.817 (incomplete data) | | |
| HadCRUT4 2017/03 | +0.817 (incomplete data) | | |
| GISS 2017/02 | +1.02 | +1.10 | +0.08 |
| GISS 2017/03 | +1.03 | | |
| UAHv6 2017/02 | +0.544 | +0.348 | -0.196 |
| UAHv6 2017/03 | +0.351 | | |
| RSS 2017/02 | +0.606 | +0.440 | -0.166 |
| RSS 2017/03 | +0.437 | | |
| NCEI 2017/02 | +0.9849 | +0.9782 | -0.0067 |
| NCEI 2017/03 | +0.9831 | | |
The Data Sources
The latest data can be obtained from the following sources:
- HadCRUT4 http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.5.0.0.monthly_ns_avg.txt
- GISS https://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt
- UAH http://vortex.nsstc.uah.edu/data/msu/v6.0/tlt/tltglhmam_6.0.txt
- RSS http://data.remss.com/msu/monthly_time_series/RSS_Monthly_MSU_AMSU_Channel_TLT_Anomalies_Land_and_Ocean_v03_3.txt
- NCEI https://www.ncdc.noaa.gov/cag/time-series/global/globe/land_ocean/p12/12/1880-2017.csv
Miscellaneous Notes
At the time of posting, 4 of the 5 monthly data sets were available through February 2017; HadCRUT4 is available through January 2017. The NCEP/NCAR re-analysis data runs 2 days behind real-time. Therefore, real daily data from February 28th through March 29th is used, and the 30th is assumed to have the same anomaly as the 29th.
The projections for the surface data sets (HadCRUT4, GISS, and NCEI) are derived from the previous 12 months of NCEP/NCAR anomalies, compared against the same months' anomalies for each of the 3 surface data sets. For each of the 3 data sets, the slope() value ("m") and the intercept() value ("b") are calculated. Using the current month's NCEP/NCAR anomaly as "x", the numbers are plugged into the high-school linear equation "y = mx + b", and "y" is the projection for that data set. The entire globe's NCEP/NCAR data is used for HadCRUT4, GISS, and NCEI.
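For anyone who wants to reproduce the arithmetic, here is a minimal Python sketch of the surface-dataset projection. It is not the actual spreadsheet; numpy's polyfit() stands in for the spreadsheet slope()/intercept() functions, and every anomaly value below is a made-up placeholder.

```python
# Minimal sketch (not the actual spreadsheet) of the surface-dataset projection:
# regress the last 12 months of a surface dataset's anomalies against the
# corresponding NCEP/NCAR anomalies, then evaluate y = m*x + b at the current
# month's NCEP/NCAR anomaly.  All values are hypothetical placeholders.
import numpy as np

ncep_last12 = np.array([0.51, 0.48, 0.55, 0.60, 0.58, 0.52,
                        0.47, 0.49, 0.53, 0.57, 0.61, 0.59])   # placeholder anomalies
giss_last12 = np.array([0.93, 0.90, 0.99, 1.05, 1.02, 0.95,
                        0.89, 0.92, 0.97, 1.03, 1.08, 1.05])   # placeholder anomalies

m, b = np.polyfit(ncep_last12, giss_last12, 1)   # spreadsheet slope() and intercept()

ncep_current = 0.56                              # current month's NCEP/NCAR anomaly
projection = m * ncep_current + b                # y = mx + b
print(f"projected anomaly: {projection:+.3f}")
```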
For RSS and UAH, subsets of the global data are used, to match the latitude coverage provided by the satellites. I had originally used the same linear-extrapolation algorithm for the satellite data sets as for the surface sets, but the projections for RSS and UAH have been consistently too high over the past few months. Given that the March NCEP/NCAR UAH and RSS subset anomalies are almost identical to February's, while the linear extrapolations are noticeably higher, something had to change. I looked into the problem and changed the projection method for the satellite data sets.
The Problem
The next 2 graphs show recent UAH and RSS actual anomalies versus the respective NCEP/NCAR anomalies for the portions of the globe covered by each of the satellite data sets. The RSS actual (green) anomaly tracked slightly above its NCEP/NCAR equivalent through November 2016 (2016.917), but from December 2016 (2017.000) onwards it has been slightly below. Similarly, the UAH actual anomaly tracked its NCEP/NCAR equivalent closely through November 2016, but fell and remained below it from December 2016 onwards. I'm not speculating on why this has happened, merely acknowledging the observed numbers.
https://wattsupwiththat.files.wordpress.com/2017/03/rss1.png
https://wattsupwiththat.files.wordpress.com/2017/03/uah.png
The Response
Since the switchover in December, the actual satellite anomalies have paralleled their NCEP/NCAR subsets, but with a different offset than before. So I take the difference (current month minus previous month) in the NCEP/NCAR subset anomalies, multiply it by the slope(), and add it to the previous month's anomaly. For example, for the March 2017 UAH projection (see the sketch after these steps)…
- subtract the February 2017 NCEP/NCAR anomaly for the UAH coverage subset from the corresponding March 2017 number
- multiply the result of step 1 by the slope of Mar-2016-to-Feb-2017 UAH anomalies versus the NCEP/NCAR subset anomalies for the UAH satellite coverage area.
- add the result of step 2 to the observed February UAH anomaly, giving the March projected anomaly
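A minimal Python sketch of this revised satellite method follows. Again, this is not the actual spreadsheet; the slope comes from a 12-month regression of the satellite anomalies against the NCEP/NCAR subset anomalies, and every number shown is a made-up placeholder.

```python
# Minimal sketch (assumed form, not the actual spreadsheet) of the revised
# satellite projection: scale the month-over-month change in the NCEP/NCAR
# subset anomaly by the 12-month regression slope, then add it to the previous
# month's observed satellite anomaly.  All values are placeholders.
import numpy as np

# Mar 2016 - Feb 2017, NCEP/NCAR restricted to the UAH coverage area (placeholders)
ncep_subset = np.array([0.62, 0.55, 0.40, 0.35, 0.30, 0.33,
                        0.38, 0.36, 0.32, 0.28, 0.42, 0.35])
uah_actual  = np.array([0.73, 0.71, 0.54, 0.44, 0.39, 0.43,
                        0.45, 0.41, 0.24, 0.30, 0.61, 0.35])

m, _ = np.polyfit(ncep_subset, uah_actual, 1)    # only the slope is used here

ncep_feb, ncep_mar = 0.35, 0.36                  # placeholder subset anomalies
uah_feb = 0.348                                  # observed February UAH anomaly

uah_mar_projection = uah_feb + m * (ncep_mar - ncep_feb)   # steps 1-3 above
print(f"projected March UAH anomaly: {uah_mar_projection:+.3f}")
```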
The graph immediately below is a plot of recent NCEP/NCAR daily anomalies versus the 1994-2013 base, similar to Nick Stokes' web page. The second graph is a monthly version, going back to 1997. The trendlines are as follows…
- Black – The longest line with a negative slope in the daily graph goes back to early July, 2015, as noted in the graph legend. On the monthly graph, it’s August 2015. This is near the start of the El Nino, and nothing to write home about. Reaching back to 2005 or earlier would be a good start.
- Green – This is the trendline from a local minimum in the slope around late 2004, early 2005. To even BEGIN to work on a “pause back to 2005”, the anomaly has to drop below the green line.
- Pink – This is the trendline from a local minimum in the slope from mid-2001. Again, the anomaly needs to drop below this line to start working back to a pause to that date.
- Red – The trendline back to a local minimum in the slope from late 1997. Again, the anomaly needs to drop below this line to start working back to a pause to that date.
NCEP/NCAR Daily Anomalies:
https://i0.wp.com/wattsupwiththat.files.wordpress.com/2017/03/daily.png?ssl=1&w=450
NCEP/NCAR Monthly Anomalies:
https://i2.wp.com/wattsupwiththat.files.wordpress.com/2017/03/monthly.png?ssl=1&w=450
If your projection of 1.03°C for GISS is right, and it looks good to me, that will make the 2017 average for the first quarter 1.02°C. That contrasts with the record 0.98°C for 2016. That gives 2017 a real shot at being the fourth consecutive record year. NOAA, at 0.95°C for Q1 2017, would be just below 2016's 0.99°C.
On RSS, I think you really need to switch to RSS TTT V4. They have been issuing warnings about V3.3 for a while now, so it isn't really worth analysing the ups and downs.
Is V3.3 going to be cancelled soon? I see that V4 global coverage is better than V3.3 (V4 ==> 82.5S to 82.5N; V3.3 ==> 70S to 82.5N). One thing I couldn't find in a quickie Google session… what elevations (or pressure levels) do the 2 versions use?
I suggest you draw your graphs so that they are at least an approximation to a valid continuous-function reconstruction from the sampled data.
An appropriate method would be to draw perfectly HORIZONTAL lines through each plotted data point, extending from the time-axis center of the preceding time cell to the center of the following cell, and then to draw verticals connecting the horizontals. The result is a modification of the common "sample and hold" method of reconstructing a continuous function from its validly sampled data points, which simply places the horizontal segments across each cell from one sample to the next. A simple low-pass filter can render the result as a reasonably respectable replica of the original band-limited signal.
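A toy Python/matplotlib sketch of that suggestion, just to show the shape of the plot; the data are invented, and `where="mid"` gives the horizontal-through-each-point variant described above.

```python
# Toy sketch of the suggested plot: a mid-cell step ("sample and hold" variant)
# versus a straight-line join.  The data are invented.
import numpy as np
import matplotlib.pyplot as plt

months = np.arange(12)                        # arbitrary monthly time axis
anoms = 0.5 + 0.2 * np.random.randn(12)       # invented monthly anomalies

plt.step(months, anoms, where="mid", label="horizontal-through-point (step)")
plt.plot(months, anoms, "o", label="monthly samples")
plt.plot(months, anoms, "--", alpha=0.4, label="straight-line join (for comparison)")
plt.xlabel("month index")
plt.ylabel("anomaly (°C)")
plt.legend()
plt.show()
```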
Connecting the plotted data points by straight lines is not only mathematically invalid, but simply demonstrates an ignorance of sampled-data system theory.
It gets tiresome observing this level of basic ignorance among so-called climate scientists.
G
If HadCRUT comes in at 0.817, it will also be in first place with a 2-month average of 0.779, narrowly beating 0.773 from 2016.
Remember, the US non-satellite records have an algorithm that recalculates the past records regularly. As Mark Steyn remarked at a Senate hearing, we still can't predict what 1950's temperature will be in 2100.
An “unstable” algorithm at that.
We might as well start a pool for how much March 1934 temperatures will change in this monthly report.
As Mark Steyn remarked at a Senate hearing, we still can’t predict what 1950’s temperature will be in 2100.
That is the most amazing sentence I have read in ages. The utter lunacy of arguing over a few hundredths of a degree when the entire data set is constantly changing argues persuasively that mankind exhibits very little intelligence.
“That gives 2017 a real shot at being the fourth consecutive record year.”
Nick, I think science should have nothing to do with records as they are often meaningless or even misleading. What matters is the trend.
Consider the drunkard’s walk, where each increment is random.
Clearly the overall trend is zero, as it is random, and yet our drunkard will magically generate one record after another.
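A quick toy simulation of that point (purely illustrative; the walk and seed are arbitrary):

```python
# Toy simulation: a trendless random walk still sets record after record.
import numpy as np

rng = np.random.default_rng(42)
walk = np.cumsum(rng.choice([-1, 1], size=10_000))   # the drunkard's walk

records = 0
running_max = -np.inf
for position in walk:
    if position > running_max:    # a new "record high"
        running_max = position
        records += 1

print(f"record highs set in 10,000 trendless steps: {records}")
```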
Chris
[“If your projection of 1.03°C for GISS is right, and it looks good to me, that will make the 2017 average for the first quarter 1.02°C. That contrasts with the record 0.98°C for 2016. That gives 2017 a real shot at being the fourth consecutive record year. NOAA, at 0.95°C for Q1, 2017, would be just below 2016 0.99C.“]
Nick, you might want to check your figures…
2016 GISS (using the same dataset as Walter):
Jan = 1.13
Feb = 1.32
Mar = 1.28
Average for Q1 = 1.24, not 0.98C
Against 2017 GISS:
Jan = 0.92
Feb = 1.1
Mar = 1.03 (projected)
Average for Q1 = 1.02C
Which would mean 2017 first quarter would most definitely NOT be the fourth consecutive record, and each of the first three months of 2017 would be cooler than each of the first three months of 2016.
Where did you get 0.98C from?
Erratum: where I've written 0.98C, that should read 1.02C.
“Where did you get 0.98C from?”
The average for the whole of 2016. The question is whether 2017 is on track to exceed that annual average. We knew 2016 would go down after March; we don’t know that for 2017.
@ Nick Stokes April 5, 2017 at 2:44 am
One thing we can be relatively certain about is that surface temperature adjustments will cool down the next 9 months, versus Jan/Feb/Mar. I did a post back in 2014 https://wattsupwiththat.com/2014/08/23/ushcn-monthly-temperature-adjustments/ where I graphed USHCN temperature adjustments separately for January, February… etc. The results for 1970-2013 are in the following graph (click to view original)

Whatever adjustment algorithm is applied by NOAA to USHCN is probably also applied to all data that NOAA receives.
So we can track anomalies to our anal heart’s content. What does it have to do w/ CAGW? Is coincidence still close enough for cause?
Walter,

From here is the summary of RSS TTT.
You’ll see the peak is about 4km. One of the things that happened going from UAH 5.6 to 6 is that the quoted level went from 2km to 4km. But it’s rather an arbitrary figure. As you see from the RSS diagram, it’s actually just a continuous weighting function, and takes in a wide range. The key issue is to avoid stratosphere (which behaves quite differently) and to avoid the large obscuring signal from the surface. That isn’t easy with TLT. You’ll generally see John Christy quoting TMT nowadays.
I'm at a loss as to what value-add this provides. Guessing what next month's anomalies will be seems strange. There isn't a variance analysis, which would be meaningless on a month-to-month basis anyway, so I go back to: what value does this provide?
What am I missing?
Walter simply provides a “sneak peek” as to what we might expect to see come next week…
It isn’t just guessing. Walter should emphasise more that it is based on an integration of NCEP/NCAR reanalysis results for March.
I do mention the method, and the fact that the first 29 days of March NCEP/NCAR data are available.
Thanks for the tip about the last 2 images not auto-displaying. I'll know better next time. I did them yesterday early afternoon to include March 29th data. They looked good in the staging area. When I selected "Preview", WordPress gave me the blank "Beep beep boop" page with a spinner. Half an hour later, it was still spinning, and the browser status bar was madly updating about contacting/connecting-to/waiting-for/reading/receiving-data-from a zillion ad servers. I had to go with it "sight unseen".
I understand what you’ve said, but my point is month-to-month (or month-over-month) variance is not climate – it’s weather. As such, trivial variances are meaningless. The variances are so insignificant that any small number selected as the “forecast” is as good as any other small number (most seem to be within the range of instrument error).
I certainly do not mean to be rude to Walter, but I seriously wonder what the value is.
The people who make up this fake data need to keep feeding their families, so they just have to keep making new stuff up, even though it conveys no information. But as you can see, they can make up this fake data to three or four significant digits.
If they didn’t do that, just think of all the taxpayer’s grant money that would go unclaimed.
G
…. But as you can see, they can make up this fake data to three or four significant digits.
To my way of thinking, it is a shame to only have three or four significant digits when you are making up data. Hell, if you go to all the trouble to fake the data you should use at least 7 significant digits!
I have been saying for a few months that the cold ‘blobs’ that have replaced warm blobs in the Pacific (and developed in other oceans) would decouple global Ts from ENSO. The end of the California drought was vindication of this idea. Your forecasts are running too hot because of this.
Walter,
The reason your last pictures didn't show is the query string and other stuff that follows in the URL. You don't really need that anyway.
Here in SW France we still have the heating on at times. In all but two of the last fifteen years the heating has been turned off by early to mid March. Not a warm year so far.
It seems to me that all the alleged warming is where there is no one living and therefore no one to disprove the warm temps.
Whatever, there is certainly no emergency.
SteveT
You can get a good estimate of where the warming is via daily reanalysis updates from here: http://cci-reanalyzer.org/wx/DailySummary/#T2_anom
SW France and much of Spain were slightly cooler than average for the date over recent days, but seem to be settling now. Most of the unusual warmth is in the far north of Europe, Asia and the Arctic, though Central Europe and China are also well above average. The Antarctic is cooler than average.
Globally temperatures are estimated to be 0.63C above the 1979-2000 average for this date. This map shows today’s anomaly distribution:
http://pamola.um.maine.edu/fcst_frames/GFS-025deg/DailySummary/GFS-025deg_WORLD-CED_T2_anom.png
"Since the switchover in December, the actual satellite anomalies have paralleled their NCEP/NCAR subsets, but with a different offset than before. I looked into the problem and changed the projection method for the satellite data sets. The projections for RSS and UAH have been consistently too high the past few months."
In step 1 you say to subtract the February 2017 UAH-subset NCEP/NCAR anomaly from the March number? By "March", are you referring to March 2016, but with an offset added in?
Nick Stokes
“If your projection of 1.03°C for GISS is right, and it looks good to me, that will make the 2017 average for the first quarter 1.02°C. That contrasts with the record 0.98°C for 2016. That gives 2017 a real shot at being the fourth consecutive record year.”
Seeing as how El Nino has gone and the Pacific warmth is unlikely to be a patch on 2016, the only way 2017 GISS could be warmer than 2016 is a very heavy thumb on the GISS algorithms. Bates showed that this may not be hard to do, but in a Trump-monitored, Lamar Smith-overseen world it will be extremely difficult to achieve. Not impossible, given M Mann's recent chutzpah, but very difficult all the same.
To get to the forecast 2017 El Nino from the La Nina that ended in Jan-Feb 2017, the ONI has to transition from negative numbers to positive numbers. Yet the GMST for the first quarter of 2017 already exceeds the record annual mean of 2016, in what could easily be the coldest quarter of 2017.
Walter – thanks as always. I look forward to this posting.
Please explain how one can take temperature data measured to the tenths of a degree and extract anomalies down to the thousandth of a degree. The CRUTEM4 temperature data set states the following about its measurements: "Year followed by 12 monthly temperatures in degrees and tenths (with -999 being missing)". Yet the anomalies show an accuracy down to the third decimal point. This is a statistical impossibility, starting with measurements of only one decimal point.
I’ve heard the argument that using many measurements allows one to get better accuracy, but this is incorrect. Using multiple measurements allows one to determine the uncertainty in the mean to a finer precision than in the original measurements, but those measurements have to be of the same thing and at the same time and place. The uncertainty in the mean is also affected by the range of the measurements, and since the range in these measurements has to be more than ten degrees, there is no way to get the uncertainty in the mean to such precision.
I would love to see the equation that is used to determine these average anomalies, and hear the justification for using it.
James, you are completely missing the point.
Every one of these "data" points is a unique event. It NEVER ever happens again; so there are NO multiple measurements to average.
It’s a perfect case where Statistical Mathematics is operating in the pure numerical Origami mode.
They just follow a rote algorithmic process, and the result means exactly nothing besides what they have defined it to mean beforehand.
GISSTemp is nothing more than GISSTemp. It has NO other meaning.
G
Well, that’s the reality take. I’d like to hear the logic of those who think there’s some statistical feat of magic that can make measurements with one decimal point be accurate to three decimal points.
George, you are an astronomer arguing with astrologers. Both look at stars and planets, but with vastly different results.
James Schrumpf
Average family size in the US is said to be 2.6 people. Everyone (I hope) understands that no one is claiming each household contains 2 people plus 0.6 of a person. Most people get that the 0.6 is a statistical artefact arising from the averaging process.
Likewise, no one is taking temperature measurements to tenths of a degree, nor is anyone claiming to do so. The precision comes from the averaging of different temperatures, many of which are measured to 0.5 of a degree.
You only need a few measurements to 0.5 C accuracy to get an average that extends to many more decimal places, never mind the thousands of such measurements those who make these estimations have at their disposal.
As for anomalies, since these are just differences from long term averages exactly the same principle applies.
“Likewise, no one is taking temperature measurements to tenths of a degree…” ‘Hundredths of a degree’, I should say.
DWR54 said:
“Likewise, no one is taking temperature measurements to tenths of a degree, nor is anyone claiming to do so. The precision comes from the averaging of different temperatures, many of which are measured to 0.5 of a degree.
You only need a few measurements to 0.5 C accuracy to get an average that extends to many more decimal places, never mind the thousands of such measurements those who make these estimations have at their disposal. ”
And yet time and time again this gets trotted out. Take DSD audio. DSD is 1-bit, but has a sampling rate of 2.8224 MHz. One bit, two possible values, and yet it can reproduce high-quality audio.
Could you show the formula used for that calculation?
That turns out not to be the case. Considering the "average size of a US household" calculation, to claim an accuracy of 2 significant digits when only one is in each measurement is incorrect. If you're counting whole people, you can't give an average of 2.6 people. You can calculate that figure, but to be statistically accurate it must be reported with the same number of significant digits, and so should be rounded up to 3. As you said, there can't be 0.6 people in a household somewhere.
As for the thousands of measurements, to be valid for their claimed use, they must be a measurement of the same thing at the same time and place. One can measure the length of a board a thousand times with a ruler marked in millimeters, and the mean of the measurements will be +/- 0.5 mm. The multiple measurements allow one to calculate the uncertainty in the mean to more significant digits, but the uncertainty in the measurement remains the same +/- 0.5mm.
Thus, one could take a thousand measurements of a board and reduce the uncertainty in the mean to say its length was 47.7cm +/- 0.003mm, but the board would still only be measured to the +/- 0.5mm accuracy. The precision of the mean can be known to be within 0.003mm of that 47.7cm figure, but you can’t claim to have measured the board down to 47.700cm.
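To make that arithmetic concrete, here is a toy simulation of the board example (the length and readings are invented): the standard error of the mean shrinks with the number of readings, while each individual reading is still only good to roughly +/- 0.5 mm.

```python
# Toy simulation of the board example: 1000 readings of one board with a ruler
# read to the nearest millimetre.  The standard error of the mean shrinks with
# sqrt(n); each individual reading is still only good to about +/- 0.5 mm.
import numpy as np

rng = np.random.default_rng(0)
true_length_mm = 477.3                                                  # invented "true" length
readings = np.round(true_length_mm + rng.normal(0.0, 0.5, size=1000))  # whole-mm readings

mean = readings.mean()
sem = readings.std(ddof=1) / np.sqrt(len(readings))                    # standard error of the mean

print(f"mean of 1000 readings:        {mean:.3f} mm")
print(f"standard error of that mean:  {sem:.3f} mm")
```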
Finally, the same thing isn’t being measured anyway. A thousand measurements are being taken at a thousand different locations, and the mean is claimed to be “the average US (or global) temperature.” This is like measuring a thousand different boards in different places, and claiming you have the “average length” of a “board.”
Finally, someone understands this BASIC mathematical principle besides me. I once had a Township civil engineer insist I design an earthen retention basin to four decimal places when the soil coefficient was 0.74. Seems he had this computer program… need I say more. LMFAO, he didn't like it.
It seems like a similar question arises in satellite measurements of ocean surface heights. The Jason 1, 2, and 3 satellites claim to measure distance to the ocean surface with an accuracy of 3 centimeters (after adjusting for orbit variability from center of Earth to an accuracy of about 1 cm), and from this to determine global annual rise in sea level of about 3 millimeters (+ or -). It’s hard to see how that can be ten times more precise than the original measurements.
What is interesting is that GISS is literally the only data set that will reportedly tick upwards. UAH and RSS meanwhile show a sizable downtick.
Why such a discrepancy? You would either have to throw out GISS as a temperature record, or see it as literally the only record that can be trusted.
That’s not to mention the growing cold blobs in the oceans (anomalies anyway) and ENSO being neutral (the sea level anomalies so far also don’t show much promise for another big El Nino event this year). The anomaly charts of the WXMaps site also showed, for the first time in a long time it seems, long-lasting negative winter temperature anomalies in the North American arctic instead of a massive red blob. It could be that global cooling is starting or is about to start, but it will depend on what the next year of data brings.
I think the January drop in UAH and RSS in comparison to the surface indices had something to do with something specific to this past winter. Maybe the anomalously warm tropical/subtropical NE Pacific was causing deep convection before January but not since, or maybe the issue was where and in what direction the snow cover anomalies were. Or maybe the issue was the warm blob in the North Pacific being replaced by a cold one, with air from that area uplifted into the satellite-measured troposphere by storms. Whatever it is, I think it will continue through April and then fade. If the issue is snow cover, I think this will fade sooner. If the issue is tropical/subtropical ocean temperature patterns, I think this will continue through May and change while the ITCZ is moving northward in June. In any case, the February low satellite readings look like some sort of downward spike that I expect not to continue into March; maybe they were related to temperature and snow cover anomalies in North America that were less anomalous in March. So, I expect March UAH v6 to be around +.38-.39, and March RSS to be around +.47.
I was way off, and I wonder why. UAH is in, and it was +.19, which was .19-.2 degree less than I expected. It was .16 degree less than Walter Dnes expected, after he made a downward shift in his method of predicting UAH.
I wonder if the three surface figures will also be about .16 degree less than predicted by Walter Dnes. If they are close to his prediction instead, then there is a recently rapidly widening divergence between the Big 3 surface datasets and those of the satellite-measured lower troposphere. And if this happens, I wonder if all 3 of the surface datasets will be in such a rapidly widening divergence, or if HadCRUT4 will be close to .16 degree cooler than Walter Dnes’s prediction while the 2 American ones turn out close to his prediction.
Almost all the reanalysis results for surface temperature in March were about the same as February. My NCEP/NCAR (same as Walter's) was down by just 0.01 °C, and others reported in the comments there were a little above. I'd expect a somewhat lower GISS; others probably closer to Feb.
Since the NCEP/NCAR re-analysis is quite close to the surface (995 mb pressure level), I expect HadCRUT4 and GISS and NCEI to track reasonably close to it. The satellite data represents the lower troposphere, which may not be an “apples-to-apples comparison” with the surface data sets.
This article is an example of looking at the leaves on the trees in the “forest” of climate change.
Two and three decimal places are nonsense for these data.
Even one decimal place is of questionable value, since most instruments used for surface measurements have a margin of error of at least +/- 0.5 degrees C.
This article is an example of unimportant issues that global warming believers want skeptics and “deniers” to focus on … while they are busy brainwashing the public and teaching children with their wild guess predictions of a coming climate change catastrophe.
One year is meaningless in the big picture of 4.5 years of continuous climate change.
One month of one year is meaningless too.
Even less meaningful, if that’s possible, is projecting monthly anomalies … rather than just waiting for the final data to become available.
I ask author Walter Dnes to consider whether there is anything else he could be doing to advance the fight against climate scaremongering — something that would have more value than projecting monthly temperature anomalies when the final data are almost complete, and before the regular after-the-fact "adjustments" begin changing the data!
Mr. Dnes’ article is a poster child for how to waste time and energy writing about climate change, since it only tells us what we already know — the climate is always changing, and humans can’t predict the changes (even very short-term changes).
Below are three statements of the most basic climate science knowledge, Mr. Dnes,
hopefully to guide your next article towards more important aspects of climate change:
(1) The climate is always changing,
(2) Humans can’t predict climate changes, and
(3) Climate predictions are a waste of time unless you KNOW what causes climate change
(although predictions can be useful to scare people, and control them).
“4.5 years” in my original post should have been 4.5 billion years
I already have a draft of an article along those lines. Right now, it has a rather strident attitude. I’ll have to tone it down before submitting it to WUWT for review.
“Mr. Dnes’ article is a poster child for how to waste time…”
I think your comment is a poster child for how to waste time…
afonzarelli
We, or at least I, am not trying to go out of my way to be rude to Walter (or anybody).
You have posted a rather snarky response to Richard Greene.
Please share what value you get from this article. Please enlighten us.
Looks like the pause is back.
http://www.drroyspencer.com/2017/04/uah-global-temperature-update-for-march-2017-0-19-deg-c/
+0.19. Wow. And to think that I was nervous about going from linear extrapolation, which would've given +0.520, to the monthly delta, which gave "only" +0.351.
HadCRUT4 for *FEBRUARY* (correct) is finally in at +0.851, versus my projection last month of +0.817.
More data in…
UAH final data is +0.185 (rounding off to +0.19 in the preliminary output)
RSS is +0.349 versus my reduced projection of +0.437. The straight linear extrapolation would’ve been out in left field at +0.579!