Over at JunkScience.com Steve Milloy writes:
Skeptic Setback? ‘New’ CRU data says world has warmed since 1998 – but not in a statistically significant way.
Gerard Wynn writes at Reuters:
Britain’s Climatic Research Unit (CRU), which for years maintained that 1998 was the hottest year, has published new data showing warmer years since, further undermining a sceptic view of stalled global warming.
The findings could helpfully move the focus from whether the world is warming due to human activities – it almost certainly is – to more pressing research areas, especially about the scale and urgency of human impacts.
After adding new data, the CRU team working alongside Britain’s Met Office Hadley Centre said on Monday that the hottest two years in a 150-year data record were 2005 and 2010 – previously they had said the record was 1998.
None of these findings are statistically significant given the temperature differences between the three years were and remain far smaller than the uncertainties in temperature readings…
And Louise Gray writes in the Telegraph: Met Office: World warmed even more in last ten years than previously thought when Arctic data added
Some of the change had to do with adding Arctic stations, but much of it has to do with adjustment. Observe the decline of temperatures of the past in the new CRU dataset:
===============================================================
UPDATE: 3/21/2012 10AM PST – Joe D’Aleo provides updated graphs to replace the “quick first look” graph used in the original post, and expands the comparison to show previous data sets on both short and long time scales. In the first graph, by cooling the early part of the 20th century, the temperature trend is artificially increased. In the second graph, you can see the offset of CRUTem4 being lower prior to 2005, artificially increasing the trend. I also updated my accidental conflation of the HadCRUT and CRUTem abbreviations.
===============================================================
Data plotted by Joe D’Aleo. The new CRUTem4 is in blue, old CRUTem3 in red; note how the past is cooler (in blue, the new dataset, compared to red, the old dataset), increasing the trend. Of course, this is just “business as usual” for the Phil Jones team.
Here’s the older CRUTem data set from 2001, compared to 2008 and 2010. The past got cooler then too.
On the other side of the pond, here’s the NASA GISS 1980 data set compared with the 2010 version. More cooling of the past.
And of course there’s this famous animation where the middle 20th century got cooler as if by magic. Watch how 1934 and 1998 change places as the warmest year of the last century. This is after GISS applied adjustments to a new data set (2004) compared with the one in 1999.
Hansen, before he became an advocate for protest movements and getting himself arrested, said:
The U.S. has warmed during the past century, but the warming hardly exceeds year-to-year variability. Indeed, in the U.S. the warmest decade was the 1930s and the warmest year was 1934.
Source: Whither U.S. Climate?, By James Hansen, Reto Ruedy, Jay Glascoe and Makiko Sato — August 1999 http://www.giss.nasa.gov/research/briefs/hansen_07/
In the private sector, doing what we see above would cost you your job, or at worst (if it were stock data monitored by the SEC) land you in jail for securities fraud. But hey, this is climate science. No worries.
And then there are the cumulative adjustments to the US Historical Climatology Network (USHCN):
Source: http://cdiac.ornl.gov/epubs/ndp/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif
All told, these adjustments increase the trend over the last century. We have yet to witness a new dataset release where a cooling adjustment has been applied. The likelihood that all adjustments to data need to be positive is nil. This is partly why they argue so fervently against a UHI effect and other land-use effects, which would require a cooling adjustment.
As for the Arctic stations, we’ve demonstrated recently how those individual stations have been adjusted as well: Another GISS miss: warming in the Arctic – the adjustments are key
The two graphs from GISS, overlaid with a hue shift to delineate the “after adjustment” graph. By cooling the past, the century scale trend of warming is increased – making it “worse than we thought” – GISS graphs annotated and combined by Anthony Watts
And here is a summary of all Arctic stations where they cooled the past. The values are for 1940 and show how climate history was rewritten:
CRU uses the same base data as GISS, all rooted in the GHCN, from NCDC managed by Dr. Thomas Peterson, who I have come to call “patient zero” when it comes to adjustments. His revisions of USHCN and GHCN make it into every global data set.
Watching this happen again and again, it seems like we have a case of:
Those who cool the past are condemned to repeat it.
And they wonder why we don’t trust them or their data.
![1998changesannotated[1]](http://wattsupwiththat.files.wordpress.com/2012/03/1998changesannotated1.gif?resize=500%2C355)
![ts.ushcn_anom25_diffs_urb-raw_pg[1]](http://wattsupwiththat.files.wordpress.com/2012/03/ts-ushcn_anom25_diffs_urb-raw_pg1.gif?resize=640%2C494)


Tilo Reber says:
March 20, 2012 at 10:19 am
Zeke: “This is all rather silly. Don’t like GHCN? Well, don’t use it! ”
No, Zeke, your assertion is all rather silly. First of all, BEST uses GHCN. And if you use GHCN data, then you can use what they call their “raw” data. But as they will tell you themselves, their raw data comes to them adjusted from their other sources. If you use other sources like BEST and exclude the GHCN data then you are getting data that is too unstable and fragmented for GHCN to use.
####################
More nonsense. GHCN Monthly was assembled long ago and the stations were selected long ago from what was available. Other sources, such as GHCN Daily, now contain more stations, updated daily. This data is not too “unstable” to include in GHCN Monthly; it’s simply not part of the inventory. Over time more and more stations are being added to GHCN Daily as agreements come on line and people deliver data. The colonial record won’t be added to GHCN Monthly, and the CRN network won’t be added to GHCN Monthly even though its records are super stable: triple-redundant sensors, readings every 5 minutes.
Tilo, you don’t know what you are talking about.
Tilo Reber says:
March 20, 2012 at 9:59 am
Mosher: “Yes, the modelling involved to get the “temperature” from the brightness at the sensor is not without assumptions. and assumptions bring with them uncertainty.”
It’s about more than just assumptions. They have been calibrated to Radiosondes that used real thermometers.
#########################################
You obviously haven’t read the calibration documents. And you don’t understand the assumptions that go into the radiative physics used to MODEL the temperature.
You want to understand the structural uncertainty in UAH or RSS? LOOK AT THE HISTORY OF CORRECTIONS! That should be your first clue.
Manfred says:
March 20, 2012 at 1:43 am
Steven Mosher says:
March 19, 2012 at 12:43 pm
It’s not surprising that when you add more northern-latitude data the present warms.
This has been shown before. It’s pretty well known.
As you add SH data you will also cool the past. This is especially true in the 1930-40 period as well as before.
———————————
Isn’t this a bit too simplistic?
In my view, a very good measure of warming is a comparison between the temperatures of last cyclical high in the 1940s and the recent cyclical high.
###############
Bad choice, Manfred. Look at the spatial distribution of measurements in the 30-40s against the spatial distribution of measurements now. What you will find is that the 30-40s oversampled the NH relative to the Southern Hemisphere. In other words, in the current period the NH and SH are both well sampled. In the 30-40s, the SH was NOT sampled as well, which can lead to an overestimation of the warmth in that period.
There is more data to recover from old archives. Based on what we know about polar amplification and the unsampled regions, if you want to lay a bet, here is the bet you will lay:
As more data comes into the system the past will generally cool. It will not stay the same. It can only be higher or lower. Given the existing sampling distribution, bet on lower. Just sayin.
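A minimal Python sketch (made-up numbers, not real station data) of the sampling point above: a plain average over a NH-heavy 1930s-40s station set reads warmer than one that weights the sparsely sampled SH equally.

```python
# Minimal sketch (synthetic numbers): why adding Southern Hemisphere stations
# to a NH-heavy 1930s-40s sample tends to lower ("cool") the estimated global
# mean for that period.

# Hypothetical 1935 anomalies (deg C) relative to some baseline.
nh_anoms = [0.30, 0.25, 0.35, 0.28, 0.32, 0.27]  # NH densely sampled, warm period
sh_anoms = [0.05, 0.10]                           # SH sparsely sampled, less warm

def naive_mean(values):
    """Plain average over whatever stations happen to exist."""
    return sum(values) / len(values)

def hemisphere_weighted_mean(nh, sh):
    """Average each hemisphere separately, then weight them equally,
    so sparse SH coverage is not swamped by dense NH coverage."""
    return 0.5 * naive_mean(nh) + 0.5 * naive_mean(sh)

print("NH-dominated naive mean:  %.3f" % naive_mean(nh_anoms + sh_anoms))       # 0.240
print("Hemisphere-weighted mean: %.3f" % hemisphere_weighted_mean(nh_anoms, sh_anoms))  # 0.185
# Adding SH coverage (or weighting the hemispheres equally) pulls the
# 1930s-40s estimate down relative to the NH-dominated sample.
```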
The other reason why people should not be shocked by the CRU series increasing is this:
1. We knew CRU was low because of the way they handled the Arctic. I’ll dig up the Climategate mail on this. Also, this was discussed at RC in the past, I believe.
2. Skeptical estimates using BETTER methods than CRU indicated it was low:
http://noconsensus.wordpress.com/2010/03/25/thermal-hammer-part-deux/
3. Every estimate I have done (adjustment free, lads) was higher than CRU.
REPLY: Note the updated graphs, such cooling of the past preceded the recent Arctic update. – Anthony
Glenn Tamblyn says:
March 20, 2012 at 4:06 pm
“Then the whole ‘march of the thermometers’ meme. You make reference to a decline in the number of ‘cooler’ sites, implying that this will introduce a systematic bias. Say What! To introduce a bias you would need to drop ‘cooling’ sites, not cooler ones. Remember, the temperature records are calculated based on temperature anomalies. We are looking for how much temperatures are changing, not their absolute value. Your point suggests that you think that global temperatures are averaged together to then see how much change there has been. And you’re right, if that were the case, dropping cooler stations would add a warm bias.
Which is exactly why it isn’t calculated that way. The records all work by comparing each station against its own long-term average to produce an anomaly for that station. Only then are these anomalies averaged together. So removing a cooler station will only introduce a warming bias to the record if that station is also cooling. So how does removing high-latitude stations in places like northern Canada, where there is high warming, introduce a warming bias? If anything it will add a cooling bias.
Then you talk about using data where there wasn’t any. Since you are vague about what you mean here, I will assume that you are referring to the 1200 km averaging radius used by GISTemp. ”
Glenn, I don’t know who you are, so I don’t know who the “we” is you speak for; but what you’re saying to discredit the “death of the thermometers” meme makes no sense.
First you say that “we” is only interested in looking at how a station compares to itself; later you mention why GISTemp’s 1200 km radius smoothing is supposed to work.
Think for a moment: how can the globally extrapolated and gridded anomaly NOT be affected by the geographically systematic Death Of The Thermometers?
You should not make the mistake of mentioning that first meme (that “we” is only interested in comparing like with like) in the same comment as the second meme (that GISTemp’s 1200 km smoothing makes no difference).
It becomes too obvious when you do.
The largest area of the Arctic is bordered by Russia.
http://nordpil.com/static/images/arctic_topographic_map_full.jpg
For political and, more importantly, strategic reasons they always inflate the temperature data.
e.g. http://www.vukcevic.talktalk.net/69-71.htm
@Vukcevic:
Gee… until recently, wasn’t the EU shipping boat loads of money to the ex-USSR for “carbon credits”? Think that might be an inducement to continue showing the world was warming and carbon credits were needed?…
As soon as I see that, I flag it as a probable Troll or, at a minimum, “True Believer Steeped In The Propaganda Talking Points”. Since when is science about “debunking” instead of presenting evidence? “Debunk” is a propaganda term, IMHO, as used these days.
The necessary conclusion from that line of reasoning is that quality is irrelevant. Any old crappy station in a high-UHI area, with grass fields replaced by airport tarmac, is just fine…
The more correct conclusion would be that the “looking at” was not done very well.
If only that were true… Look, I’ve wandered through the rat’s nest of code that is GISTemp. It does NOT start with anomalies. It starts with temperatures.
It then interpolates them, homogenizes them, and eventually, near the end, makes Grid Box Anomalies out of them. But the anomaly process comes much nearer the end than the beginning, LONG after a load of infilling, merging and averaging is done. Heck, even the “data” it starts from, the GHCN and USHCN, are temperatures created by a bunch of averaging and adjusting and homogenizing steps (including, BTW, something called Quality Control that amounts to saying that if a temperature is too far away from the expected value (an ill-conceived notion…) it will be replaced with an AVERAGE of nearby ASOS stations). Yes, the Procrustean Bed to which all data must be cut is an average of airports…
BTW, I did make a process that started with the very first step being “make an anomaly”. It showed that different months were doing different things. Some warming, some cooling. Sometimes adjacent stations were going in different directions. My conclusion was that the adjustments are the source of any aggregate ‘trend’. Oh, and the way different instruments are spliced together in what is laughably called “homogenizing”. It’s largely a splice artifact dressed up in fancy clothes.
So don’t go pulling the “Anomaly Dodge”, because the temperatures stay temperatures to very near the end. Long after boat loads of math have been done on them.
It’s actually much more subtle than that. COLD stations are kept in during the baseline cold period (the 50s to 70s were cold). That forms a ‘grid box’ fictional temperature. (As there are only about 1200 active GHCN stations in the present, and either 8000 or 16000 ‘grid boxes’ depending on what era of code is used, most boxes are by definition a fabricated value.) Later in the code the present temperatures are used to fabricate more grid box values. It is those grid box values that are used to create an “anomaly”… of a thing that does not exist…
By having cold stations in the early data, the baseline is kept cold. Once they are gone, the later grid boxes are filled in from other stations. Now this is the fun bit. The remaining stations are all in lower-volatility areas, so they can never become as cold as the original stations (that were in places like mountains with greater temperature ranges). The “Reference Station Method” claims that it can do this without error. It can’t.
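A toy Python illustration (invented numbers) of the splice effect being claimed here: a grid box whose baseline came from a cold station but whose present value is infilled from a milder station shows a positive "anomaly" even if neither station warmed. Whether GISTemp's grid-box machinery actually behaves this way is exactly what Glenn Tamblyn disputes further down the thread.

```python
# Toy illustration (invented numbers) of the claimed splice effect: the
# grid-box baseline is inherited from a cold, later-dropped station, while
# the present-day value is infilled from a milder station.

baseline_mean_dropped_station = -5.0  # e.g. a mountain station in the 1951-1980 baseline, later dropped
present_mean_infill_station   = -2.0  # e.g. a milder nearby station used to infill the box today

apparent_anomaly = present_mean_infill_station - baseline_mean_dropped_station
print("Apparent grid-box anomaly: %+.1f C" % apparent_anomaly)   # +3.0 C

# Done consistently within one station that has not warmed, the anomaly is zero:
consistent_anomaly = present_mean_infill_station - present_mean_infill_station
print("Within-station anomaly:    %+.1f C" % consistent_anomaly)  # +0.0 C
```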
There isn’t space to go into it here, but the “method” is used recursively (3 times by my count). No paper justifies recursive use. It is used on “grid boxes”. The paper justifying it was based on real thermometers, not fabricated fictions (that themselves may be filled in from 1200 km away). The baseline is calculated with the thermometers from the cold period, then applied in other PDO regimes. (So do you REALLY think thermometers have the same relationship when the jet stream is, on average, flat vs. when it is very “loopy”?) Nothing justifies holding that a relationship set in one PDO / AMO phase will be identical in the other phase. And so much more…
This is, how to put it politely… no, “lie” would imply you know it’s balderdash… Flat Out Wrong. The code does no such thing. Read it. (The code, that is. I have.) Temperatures are kept AS TEMPERATURES through a load of homogenizing, infilling, averaging, and Reference Station Method steps (including a very badly done attempt at UHI “correction” that often gets it backwards). AFTER converting to “grid boxes”, anomalies are created, but those grid boxes are predominantly FICTIONAL (as 1200 thermometers don’t fill more than 1200 boxes – often less – and there are many thousands of them to fill…).
Or a cold station in the baseline for that grid box is replaced with a warmer station in the present via the Reference Station Method infilling process. THEN the anomaly is calculated.
As noted above, the RSM is applied 3 times in a row. There is no paper to justify that. It is applied using one set of stations in the baseline and a different set in the present. There is no paper to justify that. An ever-decreasing, shrinking set of stations, not a well-matched set consistent over the test period as was used in the paper.
The comparison in the RSM paper was over a very short period of time (so one phase of the PDO / AMO state). There is no justification for using it over a 40 or 50 year period when relationships change.
Furthermore, most stations in the baseline were in places with a variety of environments, but often grassy, with trees, or otherwise cool. NOW almost all GHCN stations are over or near the tarmac at airports. To say an airport tarmac can fill in for grass and trees is just dumb. (And oh, BTW, many of those airports were very small and sometimes grassy in 1950… now some are major International Airports. See the ones in Hawaii, for example.)
To say the relationship of a grass field to a nearby mountain is unchanged over 50 years as one becomes an airport runway and the other may now be on the other side of a Rossby Wave (as flat vs loopy jet stream changes with PDO / AMO) is just ONE example of the silly assertion made by implication.
I’ll skip your nice sounding but silly examples of similarity. I’ve done comparisons of stations and found that the relationship often will invert. SFO vs Reno for example (or vs Sacto). When inland temperatures change, fog can be pulled over SFO. Sometimes not. As longer term cloud levels change, the degree of non-correlation rises.
Bull. We’re already below Nyquist limits as it is (and by quite a margin). We have too low a sample size to say anything meaningful.
I’m not going to bother with the rest of your comment. The pointer to a rather mindless “read dozens of pages of tripe” link is a standard “Troll Fodder Flag”.
When I first came at the AGW issue it was with a “Gee it must be bad, I need to learn more” and got sucked into that dodge way too many times. Dredging through dozens (hundreds?) of links to ever more mind numbing mumbojumbo that never quite managed to get to the meat of things. Lots of loose ends that never were quite tied off. Lots of smooth sounding ‘talking points’ that never quite sealed the deal.
No Thanks.
As of now, I’ve found much clearer and much more complete sources (most are here in various links scattered over a few years worth of postings, but decent search terms will pick them out).
I also sunk a couple of years of my life into GISTemp and GHCN. “Digging In” to it myself.
What I found was false assertions (such as that “it is all done with anomalies” when it clearly isn’t) and papers supporting one thing stretched out of all proportion in the code. (So the RSM is justified for a few selected stations in ONE climate regime UP TO 1200 km MAX; then it is applied RECURSIVELY three times in a row (so data might be smeared up to 3600 km), is applied to FICTIONAL ‘grid boxes’, not real geographies, and is applied across very large variations in climate regimes. All unjustified by any scientific investigation.)
So, no, I’m not buying your song.
Particular “issues” directly related to GISTemp:
http://chiefio.wordpress.com/category/agw-and-gistemp-issues/agw-gistemp-specific/
Problems in the GHCN:
http://chiefio.wordpress.com/category/ncdc-ghcn-issues/
The source code and technical issues from the version of GISTemp I ported (a bit dated now, but as some of their code is clearly from the 1970s, it doesn’t change fast…)
http://chiefio.wordpress.com/category/gisstemp-technical-and-source-code/
The results of my “anomaly first” tests:
http://chiefio.wordpress.com/category/dtdt/
And a whole lot more scattered around in my “notebook” site…
Simply put, the GHCN is relatively buggered, the GISTemp code is crap and worse, and CRUTemp has lost their raw data, their code is crappy (see “Harry README”) and they can’t recreate anything.
On THAT, I’m not willing to bet the fate of the global economy.
I rather prefer that my history not change… I really hate it when the past keeps getting re-written. It makes me think about the USSR and airbrushing inconvenient people out of photographs… Just sayin…
Yep, but we won’t know for sure without testing for significance, and there is some pretty wild variation at either end of the record – particularly with RSS TLT.
Science must yield to better information, otherwise it is just dogma.
If you can’t deal with revisions you shouldn’t do science.
Alexej Buergin says: March 19, 2012 at 1:12 pm
Re Mosher
quote
So what was (according to Moshtemp) the average temperature in Reykjavik in 1940: 5°C or 3°C?
unquote
Not just Reykjavik: there is a whole suite of islands which were recording temperatures during the WWII blip and the subsequent fall in temps. I’ve just come back from Madeira and overflew the western edge of Spain. From 30+kft you can see Gib, Morocco, Spain, Portugal and France, all westerly facing, all with records which can be compared with the new adjusted temperatures. Add Iceland, the west coast of Ireland, the Faroes, etc etc
Either the original record was sloppily done — no ground truthing — or the new record is sloppily done. Or, I suppose, both. But perhaps I am maligning the paper and it covers this point exhaustively.
Smoothing the blip has one other problem: the contemporary windspeed changes match the temperature blip, so the handwave needs to find some explanation for that as well. Insulated anemometers anyone?
JF
DirkH says:
“Glenn, I don’t know who you are, so I don’t know who the “we” is you speak for; but what you’re saying to discredit the “death of the thermometers” meme makes no sense.
First you say that “we” is only interested in looking at how a station compares to itself; later you mention why GISTemp’s 1200 km radius smoothing is supposed to work.
Think for a moment: how can the globally extrapolated and gridded anomaly NOT be affected by the geographically systematic Death Of The Thermometers?
You should not make the mistake of mentioning that first meme (that “we” is only interested in comparing like with like) in the same comment as the second meme (that GISTemp’s 1200 km smoothing makes no difference).
It becomes too obvious when you do.”
DirkH,
I think you misunderstand and have not followed the independent assessments of the so-called “death of thermometers”.
1. The sampling of thermometers will only affect the trend IF the thermometers that drop out DIFFER in trend from those retained. With the death of thermometers, those dropped tended to be from higher-latitude (higher trend) stations. If it had any effect it would be a COOLING effect.
2. We calculated and presented results on this site that show the drop had no effect.
3. We’ve added stations, effectively removing the drop, and show no difference.
4. I’ve done reconstructions that only use rural stations, recons that only use long station series (500 stations), recons that only use 100 stations: no difference.
Why? Because as long as you have a reasonable sampling of the earth north to south you will get the same answer even with very few stations. Been there, done that, proved that.
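A minimal Python sketch (synthetic series, not any real dataset) of point 1 above: in an anomaly-based average, dropping a station changes the combined trend only if that station's trend differs from the rest, not because it is colder in absolute terms.

```python
# Minimal sketch: dropped stations only change the combined trend if their
# trend differs, not if they are merely colder in absolute terms.

def anomalies(series):
    """Anomaly of each value relative to the station's own mean."""
    base = sum(series) / len(series)
    return [v - base for v in series]

def combined_trend(stations):
    """Average per-station anomalies year by year; crude trend = last - first year."""
    anoms = [anomalies(s) for s in stations]
    yearly = [sum(vals) / len(vals) for vals in zip(*anoms)]
    return yearly[-1] - yearly[0]

years = range(10)
warm_station = [15.0 + 0.1 * y for y in years]   # warm place, +0.1 C/yr
cold_station = [-10.0 + 0.1 * y for y in years]  # cold place, same +0.1 C/yr

print("Trend with both stations:  %.2f" % combined_trend([warm_station, cold_station]))  # 0.90
print("Trend after dropping cold: %.2f" % combined_trend([warm_station]))                # 0.90
# Identical, because the dropped station was colder but not cooling.

fast_arctic = [-10.0 + 0.3 * y for y in years]   # high-latitude station, faster warming
print("Trend with fast-warming station: %.2f" % combined_trend([warm_station, fast_arctic]))  # 1.80
print("Trend after dropping it:         %.2f" % combined_trend([warm_station]))               # 0.90
# Dropping the faster-warming station lowers the trend, i.e. a cooling effect.
```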
Now, you go back to 2007 when I first started looking at this and I was as skeptical, if not more skeptical, than many here: skeptical about “rounding”, skeptical about the number of stations, skeptical about adjustments, skeptical about siting, about UHI, you name it.
None of these concerns amounted to mousenuts. Get the data (I had to fight for it); it’s now freely available. Get the code (I had to fight for it); you can use the code I helped free, or the code I make freely available, or write your own. And do some work.
I downloaded yesterday all the new station data for CRUTEM4 (HadCRUT4) and then compared the results with the old CRUTEM3 station data. There are 5549 stations in the set compared to 5097 in CRUTEM3. 738 new stations have been added while 286 stations have been discarded. Those added are mainly in northern Russia. Quite a lot of stations from North America have been discarded. None have been added but some have been lost in the Southern Hemisphere, despite even sparser coverage than the Arctic. The changes to the global anomalies are small and statistically insignificant. However they do psychologically change the impression of “warming” over the last 15 years – moving 2010 and 2005 up a bit and 1998 down a bit. Also the 19th-century data has got just a tiny bit cooler. Statistically there has been no warming for about 15 years – but now they can say that 2010 was “warmer” than 1998, by 0.01 ± 0.05 degrees! You can read more about this and also see where the new stations are at my blog.
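A minimal Python sketch of the kind of inventory comparison described above; the tiny ID sets here are made up, and real use would parse the actual CRUTEM3/CRUTEM4 station files.

```python
# Minimal sketch of a station-inventory comparison between two releases.
# The ID sets below are invented stand-ins for the real CRUTEM station lists.

crutem3_ids = {"010010", "037683", "724940", "948680"}           # hypothetical
crutem4_ids = {"010010", "037683", "948680", "222710", "249460"} # hypothetical

added     = sorted(crutem4_ids - crutem3_ids)   # stations new to CRUTEM4
discarded = sorted(crutem3_ids - crutem4_ids)   # stations dropped from CRUTEM3
retained  = crutem3_ids & crutem4_ids

print("CRUTEM3: %d stations, CRUTEM4: %d stations" % (len(crutem3_ids), len(crutem4_ids)))
print("Added:    ", added)
print("Discarded:", discarded)
print("Retained: ", len(retained))
```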
Joe D’Aleo provided some updated graphs which I have posted just now.
Anthony Watts says:
March 21, 2012 at 10:13 am
The new CRUTem4 is in blue, old CRUTem3 in red; note how the past is cooler (in blue, the new dataset, compared to red, the old dataset)
I believe red and blue were mixed up here. I think it should read (changes in capitals):
The new CRUTem4 is in RED, old CRUTem3 in BLUE; note how the past is cooler (in RED, the new dataset, compared to BLUE, the OLD dataset)
clivebest says:
March 21, 2012 at 9:59 am
I downloaded yesterday all the new station data for CRUTEM4 (HadCRUT4) and then compared the results with the old CRUTEM3 station data. There are 5549 stations in the set compared to 5097 in CRUTEM3. 738 new stations have been added while 286 stations have been discarded. Those added are mainly in northern Russia. Quite a lot of stations from North America have been discarded. None have been added but some have been lost in the Southern Hemisphere, despite even sparser coverage than the Arctic. The changes to the global anomalies are small and statistically insignificant. However they do psychologically change the impression of “warming” over the last 15 years – moving 2010 and 2005 up a bit and 1998 down a bit. Also the 19th-century data has got just a tiny bit cooler.
Interestingly, your close-up of CRUTEM3 and CRUTEM4 since 1990 shows differences from D’Aleo’s graph: D’Aleo’s graph shows CRUTEM4 lower than CRUTEM3, whereas yours shows them virtually identical. Any thoughts?
@Phil:
My two curves CRUTEM4 and CRUTEM3 are calculated directly from the two full sets of station data (~5000 in each) using exactly the same algorithm as provided by UK Met Office. So this should show directly any systematic differences between the two datasets.
I just checked on CRU’s website and for some reason they don’t have the global average of CRUTEM4 data available for download – just SH and NH. But GL is simply (SH+NH)/2.
It could be that he is using CRUTEM3V and CRUTEM4V. The V stands for “variance correction”. They write the following: “the method of variance adjustment (used for CRUTEM3v and HadCRUT3v) works on the anomalous temperatures relative to the underlying trend on an approximate 30-year timescale.” In other words this looks like a smoothing algorithm to suppress “outliers”, and assumes an “underlying trend”. I prefer to work with the raw temperature data from the stations themselves.
For example: before 1900 the method of normalisation for anomalies (subtracting monthly variations) introduces large systematic errors. See this graph, where the blue points use normalisation within a single grid point compared to CRU’s per-station normalisation.
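A toy Python sketch (invented numbers, not CRU's actual procedure) of why the normalisation choice matters: anomalies taken against each station's own mean versus against one pooled mean for the grid cell diverge when the station mix changes over time, which hits the sparse early years hardest.

```python
# Toy sketch of two normalisation choices for a grid cell whose station mix
# changes over time: per-station baselines vs one pooled cell baseline.

# Two stations sharing a cell; station B (a warmer site) only reports later.
station_a = {1880: 10.0, 1890: 10.2, 1990: 10.8, 2000: 11.0}
station_b = {1990: 14.8, 2000: 15.0}

def per_station_anoms(stations):
    """Each station relative to its own mean, then averaged per year."""
    out = {}
    for s in stations:
        base = sum(s.values()) / len(s)
        for yr, t in s.items():
            out.setdefault(yr, []).append(t - base)
    return {yr: round(sum(v) / len(v), 2) for yr, v in sorted(out.items())}

def pooled_anoms(stations):
    """All readings in the cell relative to one pooled mean."""
    allvals = [t for s in stations for t in s.values()]
    base = sum(allvals) / len(allvals)
    out = {}
    for s in stations:
        for yr, t in s.items():
            out.setdefault(yr, []).append(t - base)
    return {yr: round(sum(v) / len(v), 2) for yr, v in sorted(out.items())}

print("Per-station:", per_station_anoms([station_a, station_b]))
print("Pooled cell:", pooled_anoms([station_a, station_b]))
# With the pooled baseline the sparse early years come out strongly negative
# simply because the warmer station is missing from them.
```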
The Gleick effect…
E M Smith (Chiefio)
I spent some time on your site a fair while back trying to see if you had anything worthwhile to say. I walked away very unimpressed. After drowning your audience in endless masses of tables, most of your readers wouldn’t know up from down. So I went to look at the core of the issue – the algorithm from Hansen & Lebedeff 1987 and the source file where it was implemented, Step3/to.SBBXgrid.f, a Fortran file.
I then read your ‘analysis’ of this file. You went through describing the shell script file and what it does. Then you described the header of the Fortran file, the text description of the program. But when it came to describing what the actual code does, by reading and understanding that code, your description degenerated into a lot of hand-waving and not much actual information. I have been back to your site several times since and you have never updated that description.
Personal opinion. I don’t think you actually understand what the RSM method does. You have never demonstrated an understanding of it in anything I have seen you write.
If you sit down and go through the RSM carefully, from H&L87, following what it does through each iteration of the calculation, you see the following:
1. The average over the baseline period is calculated for each station being analysed.
2. One station is selected as the reference station.
3. Then for station 2,
3.1 The difference between the average of Station 1 and the average of station 2 is calculated.
3.2 The weighting for station 2 is calculated, depending on how far from the cell centre it is located. 1 at the center out to 0 at 1200 km.
3.3 The data for station 2 has the difference between 1 & 2 calculated in step 3.1 subtracted from it. The effect of this is to produce a value that is now relative to station 1’s baseline rather than station 2’s. THIS IS THE CRITICAL STEP. The data from station 1 and station 2 now have a COMMON AVERAGE VALUE. They could be regarded as being one station.
3.4 Then the values from Station 1 & Station 2 are combined together with area weighting applied to the values from station 2 as they are added.
3.5 Finally a new common average for the baseline period is calculated for the combined data from stations 1 & 2.
Then steps 3.1-3.5 are repeated for stations 3-n.
Finally, the average for station 1, which is now the average of all stations’ data after the adjustment in step 3.3, is subtracted from the calculated weighted average to produce an anomaly value. However, what needs to be understood about this process is that because data from different stations have been adjusted in step 3.3 before being averaged, all that data is now relative to a common reference value and is mathematically equivalent to an anomaly based on that reference value. If step 3.3 didn’t occur where it does, this process would be producing an Anomaly of Weighted Averaged Temperatures. But because of step 3.3, what it produces is the Weighted Average of Temperature Anomalies.
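A minimal Python sketch (hypothetical data, not the actual GISTemp Fortran) of the combination loop in steps 3.1-3.5: each new station is shifted onto the reference station's baseline before being weight-averaged in, and only at the end is the combined record turned into anomalies. Here the offset is taken over the stations' common years and the weight is simply given; in GISTemp the offset would be over the baseline period and the weight 1 minus distance/1200 km.

```python
# Sketch of the reference-station combination described in steps 3.1-3.5.

def combine_stations(stations):
    """stations: list of (weight, {year: temp}); the reference station comes first."""
    ref_w, ref = stations[0]
    combined = dict(ref)                       # start from the reference station
    weights = {yr: ref_w for yr in ref}
    for w, stn in stations[1:]:
        # 3.1: difference of the two records' means over their common years
        common = sorted(set(combined) & set(stn))
        offset = (sum(combined[y] for y in common) / len(common)
                  - sum(stn[y] for y in common) / len(common))
        for yr, t in stn.items():
            shifted = t + offset               # 3.3: put the station on the common baseline
            if yr in combined:                 # 3.4: weighted merge
                tot = weights[yr] + w
                combined[yr] = (combined[yr] * weights[yr] + shifted * w) / tot
                weights[yr] = tot
            else:
                combined[yr] = shifted
                weights[yr] = w
    # 3.5 / final step: subtract the combined record's own mean -> anomalies
    mean = sum(combined.values()) / len(combined)
    return {yr: round(v - mean, 3) for yr, v in sorted(combined.items())}

ref    = (1.0, {1950: 10.0, 1960: 10.2, 1970: 10.4})
nearby = (0.5, {1960: 14.0, 1970: 14.3, 1980: 14.6})   # warmer site, partial weight
print(combine_stations([ref, nearby]))
# The warmer station contributes its year-to-year changes, not its absolute
# warmth, because of the offset applied in step 3.3.
```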
Now I freely admit the algorithm is not that clear, and having the different aspects – anomalies, weighting and averaging – all mixed together in the algorithm does make it hard to understand. And it’s not the prettiest Fortran code I have ever seen. But beneath the messiness of how they have done it, GISTemp IS calculating an Average of Anomalies.
This is also borne out by the fact that others, such as the Clear Climate Code project, have been able to take the GISTemp code, rewrite it and clean it up in Python (finding a few minor bugs in the process) and produce the same result. And that many independent analyses of the temp record, culminating in BEST, have also produced essentially the same result.
A couple of other points, Chiefio. You have stated that the 1200 km weighting radius is hard-coded. THIS IS NOT TRUE. It is the default, but it can be overridden by a command-line parameter in the shell script file do_comb_step3.sh:
”
label='GHCN.CL.PA' ; rad=1200
if [[ $# -gt 0 ]] ; then rad=$1 ; fi
…… Several lines setting up input files ….
${fortran_compile} to.SBBXgrid.f -o to.exe
to.exe 1880 $rad > to.SBBXgrid.1880.$label.$rad.log
”
Also, you have claimed that the 1200 km weighting average, when used on islands, then sets the temperature for the surrounding ocean out to large distances. THIS IS SIMPLY NOT TRUE. Again, the shell script for step 5, where the land and ocean data are merged, sets a default of 100 km. Beyond this distance from land the ocean data is used instead. See here, from do_comb_step5.sh:
”
RLand=100 ; # up to 100km from a station, land surface data have priority over ocean data
if [[ $# -gt 0 ]] ; then RLand=$1 ; fi
…… Several lines setting up input files ….
$FC SBBXotoBX.f -o to.exe
to.exe $RLand 0 > SBBXotoBX.log
”
And if you actually look at the results from GISTemp – calculate data using land only, ocean only and combined, then look at the gridded data output – you see clearly that data from islands DOES NOT extend out over large distances of ocean, because the SST data is used instead.
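A minimal Python sketch (made-up values) of the land/ocean priority rule described above, with RLand taken from the step 5 script default:

```python
# Sketch of the step 5 merge rule: within RLand km of a land station the land
# value has priority for the grid box; beyond that the SST value is used.

RLAND_KM = 100.0   # default in do_comb_step5.sh, overridable on the command line

def box_value(dist_to_nearest_land_station_km, land_value, sst_value):
    """Pick the land-based value only if a land station is close enough."""
    if land_value is not None and dist_to_nearest_land_station_km <= RLAND_KM:
        return land_value
    return sst_value

print(box_value(40.0, 1.2, 0.4))    # near an island station -> land value 1.2
print(box_value(800.0, 1.2, 0.4))   # open-ocean box -> SST value 0.4
```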
So you are quite simply wrong on both those points. Out of what looks like simple carelessness. Not actually reading the code CAREFULLY.
So you may have built some notoriety for yourself, but it looks like it may have been founded on a fairly flimsy basis.
With so many other studies contradicting you, why should anyone take you seriously?