Guest Post By Walter Dnes:
There have been various comments recently about GISS' "dancing data", and since GISS data is updated monthly, it just so happens that I've been downloading it monthly since 2008. In addition, I've captured some older versions via "The Wayback Machine". Between those two sources, I have 94 monthly downloads between August 2005 and May 2014, though there are some gaps in the 2006 and 2007 downloads. Below is my analysis of the data.
Data notes
- I've focused on the data through August 2005, in order to make this an apples-to-apples comparison.
- The first analysis covers the net adjustments between the August 2005 download and the May 2014 download (i.e. the earliest and latest available data). I originally treated 1910-2005 as one long segment (the shaft of the "hockey stick"). Later, I broke that portion into 5 separate periods.
- A month-by-month comparison of slopes of various portions of the data, obtained from each download.
- Those of you who wish to work with the data yourselves can download this zip file, which unzips as directory “work”. Please read the file “work/readme.txt” for instructions on how to use the data.
- GISS lists its reasons for adjustments at two webpages:
- The situation with USHCN data, as summarized in Anthony's recent article, may affect the GISS results, as the GISS global anomaly uses data from various sources including USHCN.
In the graph below, the blue dots are the differences, in hundredths of a degree C, for the same months between GISS data as of May 2014 and GISS data as of August 2005. GISS provides data as an integer representing hundredths of a degree C. The blue (1880-1909) and red (1910-2005) lines show the slope of the adjustments for the corresponding periods. Hundredths of a degree per year equal degrees per century. The slopes of the GISS adjustments (a sketch of the calculation appears after the list) are…
- 1880-1909 -0.520 C degree per century
- 1910-2005 +0.190 C degree per century
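For readers who want to see the shape of the slope calculation, here is a minimal sketch: fit an ordinary least-squares line to the per-month differences between two downloads. The series below are fabricated placeholders (the real inputs are in the zip file linked above), so only the arithmetic is meant to be representative.

```python
import numpy as np

# Placeholder series purely to show the arithmetic: monthly anomalies in
# hundredths of a degree C (the units GISS publishes), aligned on the same
# months in both downloads. Real inputs would come from the archived files.
months = np.arange(1880, 1910, 1 / 12)          # decimal years, 1880-1909
rng = np.random.default_rng(0)
anom_2005 = rng.integers(-50, 50, months.size)  # stand-in for the 2005 download
anom_2014 = anom_2005 - ((months - 1880) * 0.5).round().astype(int)

diff = anom_2014 - anom_2005                    # net adjustment per month
slope = np.polyfit(months, diff, 1)[0]          # hundredths of a degree C / year

# 0.01 C per year is 1 C per century, so this slope reads directly as C/century.
print(f"adjustment slope: {slope:+.3f} C degree per century")
```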
The next graph is similar to the above, except that the analysis is more granular, i.e. 1910-2005 is broken up into 5 smaller periods. The slopes of the GISS adjustments are…
- 1880-1909 -0.520 C degree per century
- 1910-1919 +0.732 C degree per century
- 1920-1939 +0.222 C degree per century
- 1940-1949 -1.129 C degree per century
- 1950-1979 +0.283 C degree per century
- 1980-2005 +0.110 C degree per century
The next graph shows the slopes (not adjustments) for the 6 periods listed above on a month-by-month basis, computed from the 94 monthly downloads in my possession; a sketch of that per-download calculation appears after the summary below.
- 1880-1909; dark blue;
- From August 2005 through December 2009, the GISS data showed a slope of -0.1 C degree/century for 1880-1909.
- From January 2010 through October 2011, the GISS data showed a slope between +0.05 and +0.1 C degree/century for 1880-1909.
- From November 2011 through November 2012, the GISS data showed a slope around zero for 1880-1909.
- From December 2012 through latest (May 2014), the GISS data showed a slope around -0.6 to -0.65 C degree per century for 1880-1909.
- 1910-1919; pink;
- From August 2005 through December 2008, the GISS data showed a slope of 0.7 C degree/century for 1910-1919.
- From January 2009 through December 2011, the GISS data showed a slope between +0.55 and +0.6 C degree/century for 1910-1919.
- From January 2012 through November 2012, the GISS data showed a slope bouncing around between +0.6 and +0.9 C degree/century for 1910-1919.
- From December 2012 through latest (May 2014), the GISS data showed a slope around +1.4 to +1.5 C degree per century for 1910-1919.
- 1920-1939; orange;
- From August 2005 through December 2005, the GISS data showed a slope between +1.15 and +1.2 C degree/century for 1920-1939.
- From May 2006 through November 2011, the GISS data showed a slope of +1.3 C degree/century for 1920-1939.
- From December 2011 through November 2012, the GISS data showed a slope around +1.25 C degree/century for 1920-1939.
- From December 2012 through latest (May 2014), the GISS data showed a slope around +1.4 C degree per century for 1920-1939.
- 1940-1949; green;
- From August 2005 through December 2005, the GISS data showed a slope between -1.25 and -1.3 C degree/century for 1940-1949.
- From May 2006 through December 2009, the GISS data showed a slope between -1.65 and -1.7 C degree/century for 1940-1949.
- From January 2010 through November 2011, the GISS data showed a slope around -1.6 C degree/century for 1940-1949.
- From December 2011 through November 2012, the GISS data showed a slope bouncing around between -1.6 and -1.7 C degree/century for 1940-1949.
- From December 2012 through latest (May 2014), the GISS data showed a slope bouncing around between -2.35 and -2.45 C degree per century for 1940-1949.
- 1950-1979; purple;
- From August 2005 through October 2011, the GISS data showed a slope between +0.1 and +0.15 C degree/century for 1950-1979.
- From November 2011 through November 2012, the GISS data showed a slope bouncing around between +0.2 and +0.3 C degree/century for 1950-1979.
- From December 2012 through latest (May 2014), the GISS data showed a slope around +0.4 C degree per century for 1950-1979.
- 1980-2005; brown;
- From August 2005 through November 2012, the GISS data showed a slope of +1.65 C degree/century for 1980-2005.
- From December 2012 through latest (May 2014), the GISS data showed a slope around +1.75 to +1.8 C degree per century for 1980-2005.
- 1910-2005; red;
- This is a grand summary. From August 2005 through December 2005, the GISS data showed a slope of +0.6 C degree/century for 1910-2005.
- From May 2006 through December 2011, the GISS data showed a slope of +0.65 C degree/century for 1910-2005.
- From January 2012 through November 2012, the GISS data showed a slope bouncing around +0.65 to +0.7 C degree/century for 1910-2005.
- From December 2012 through latest (May 2014), the GISS data showed a slope of +0.8 C degree per century for 1910-2005.
In 7 years (December 2005 to December 2012), the rate of temperature rise for 1910-2005 has been adjusted up from +0.6 to +0.8 degree per century, an increase of approximately 30%.
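The month-by-month tracking above could hypothetically be reproduced with a loop of this shape over the archived downloads. The file names and CSV layout here are invented for illustration; the actual files in the zip are organized as described in work/readme.txt.

```python
import glob
import numpy as np

# Hypothetical layout: one CSV per archived download with two columns,
# decimal year and anomaly in hundredths of a degree C.
periods = [(1880, 1909), (1910, 1919), (1920, 1939),
           (1940, 1949), (1950, 1979), (1980, 2005)]

for path in sorted(glob.glob("work/giss_*.csv")):    # e.g. giss_200508.csv
    data = np.loadtxt(path, delimiter=",")
    year, anom = data[:, 0], data[:, 1]
    slopes = []
    for lo, hi in periods:
        m = (year >= lo) & (year < hi + 1)
        # hundredths of a degree C per year reads as degrees C per century
        slopes.append(np.polyfit(year[m], anom[m], 1)[0])
    print(path, " ".join(f"{s:+.2f}" for s in slopes))
```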
Commentary
- It would be interesting to see what the data looked like further back in time. Does anyone have GISS versions that predate 2005? Can someone inquire with GISS to see if they have copies (digital or paper) going further back? Have there been any versions published in scientific papers prior to 2005?
- Given how much the data has changed in the past 9 years, what might it be like 9 years from now? Can we trust it enough to make multi-billion dollar economic decisions based on it? I find it reminiscent of George Orwell's "1984", where:
“Winston Smith works as a clerk in the Records Department of the Ministry of Truth, where his job is to rewrite historical documents so they match the constantly changing current party line.”
Dear Mr. Dnes,
I discovered in March 2012 that NASA-GISS had retroactively changed the temperature data it offered in March 2010. I had downloaded the 2010 data and put it into my archives. Between March 2012 and January 2013 I evaluated the discrepancies between the 2010 data and the 2012 data for 120 stations. The results are dealt with in a comprehensive paper. A German version has meanwhile been successfully peer-reviewed and will be published later this year. I also prepared an English version and sent it, among others, to Mr. Watts, but got no answer. I would like to send you that paper – perhaps we can cooperate – if I had your e-mail address. Please send it. Many thanks in advance and best regards
Friedrich-Karl Ewert
Richard,
Here is the current GISTEMP log of each update where they had a problem of one sort or another and what they did about it – it makes interesting scanning, though most data updates seem to go through with no problem – http://data.giss.nasa.gov/gistemp/updates_v3/.
The pre-2011 update records are here – http://data.giss.nasa.gov/gistemp/updates/.
Here's an overview description of what they do with the data – http://data.giss.nasa.gov/gistemp/sources_v3/gistemp.html
If you want the current code which does all these things, it is linked from here – http://data.giss.nasa.gov/gistemp/sources_v3/
And here’s a FAQ page – http://data.giss.nasa.gov/gistemp/FAQ.html
Peter:
Thank you for your reply to me at July 5, 2014 at 3:46 am.
I appreciate your links which may be of use to others who were not aware of them.
There are two issues your post ignores.
Firstly, the GISS changes continue to occur almost every month and have done so. Such major changes invalidate any assessments based on the data from earlier ‘versions’.
Secondly, in response to your having written
I replied
Your post purports to be a response to my reply but fails to mention my reply.
Richard
Friedrich-Karl Ewert says:
> July 5, 2014 at 2:58 am
>
> Dear Mr. Dnes,
> I discovered in March 2012 that NASA-GISS changed retro-actively their
First, a note that I do not have a university degree. My only post-secondary piece of paper is a community-college certificate in computer programming from many years ago. Beyond that, I'm an interested layman. My main talent is number-crunching data – sort of like "Harry the programmer" in ClimateGate. I don't have the scientific/statistical background to do an in-depth analysis on my own. For instance, in this article, I simply took the available data, reformatted it for importing into a spreadsheet, and plotted it. I'm willing to help within my abilities, but I wanted you to know my limitations.
By the way, the contact form on your website is apparently “under construction”.
Peter says:
July 5, 2014 at 1:44 am
I was actually referring to all the temp dataset holders – which have been shown elsewhere to be erroneous (I can't remember the thread, but I had to point out that BEST did use adjusted data despite Mosh's and others' protestations to the contrary).
Returning to GISS in particular, I have no idea if GISS do adjustments 'on' adjustments themselves (but we know others do!), but my primary point was that with ALL the datasets – and remember that many of them are cross-correlated to some degree, as they share station data, etc. – I have never YET seen a raw dataset, followed by an adjusted dataset, COMPLETE with a description of each and every 'wave' of adjustments. And I don't mean a generic type description – I mean a station-by-station algorithmic and then manual QC check, followed by careful checking of details (e.g. does the station still exist even though there is missing data? see recent WUWT thread! LOL), recording of said findings, and finally an explanation of the adjustment made and why. Simply put, without such detail, any data made available or used in any scientific capacity (such as, say, 'proving' CAGW – lol again!) is scientifically invalidated, as the scientific method is clearly being ignored.
When the harryreadme txt file was released, it clearly showed that no one knew the situation with crutemp or hadcrut (can't remember which one), and hence the data is unverified – or more specifically UNVERIFIABLE in any way, shape or form! (this, of course, bearing in mind that Jones stated the raw data was not available anymore!).
I have no problem with adjustments – or more specifically 'corrections' – being made, but if you cannot produce a full traceability trail for the 'data' – and I mean FULL – forget it… without such traceability, the data and the data pushers are setting themselves up for ridicule!
Well, seeing as how WUWT regularly gets its facts wrong, blunders in essential methodology (sorting datasets before computing correlation) and so on, I do not see why I should immediately believe what they say here.
[Reply: State which facts are wrong. ~ mod.]
Paai
Kev-in-UK said “However, when said data is used and upheld as scientifically valid – that changes the game – it must be reproducible and proven as valid. Frankly, to this day, I don’t think we have reliable data – and certainly not without questions as to its history or validity! Ergo, in almost any other scientific endeavour, the ‘results’ or ‘conclusions’ based on such data would be thrown out or at best held in very low esteem (think wagonload of salt here!)”
Kev,
Sure, we'd all like perfect historical data with proof that it is valid, but for the historical readings we have only got what we have got, so we have to make the best of it – you can't go back in time to do individual station equipment quality checks. All you can do is quality-check your current equipment and then post-process and adjust the historical observations to see what you can get out of them.
“Throwing out” is not an option – we have no better data, and “esteem” should be expressed in more statistical terms.
Specifically, with post processing and adjustment of station readings you should not only emerge with a best estimate as to what the temperature actually was at that time, but also with a statistical calculation of the expected error in the readings. That error estimate can be used to tell you how much confidence to place in any results you get when you use the figures, and what the likely bounds of the results should be.
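As a minimal sketch of the trend-with-error-bars calculation described here, assuming each adjusted reading carries an error estimate (all numbers below are fabricated):

```python
import numpy as np

# Fabricated adjusted annual anomalies (deg C) with a per-reading error
# estimate, purely to illustrate the calculation described above.
rng = np.random.default_rng(1)
years = np.arange(1950, 1980)
sigma = np.full(years.size, 0.05)        # expected error per reading
anom = 0.003 * (years - 1950) + rng.normal(0, 0.05, years.size)

# Weighted least-squares trend; numpy's weight convention is w = 1/sigma.
coeffs, cov = np.polyfit(years, anom, 1, w=1 / sigma, cov=True)
slope, slope_err = coeffs[0], np.sqrt(cov[0, 0])

# The slope's standard error gives the error bars on the trend.
print(f"trend: {slope * 100:+.2f} +/- {1.96 * slope_err * 100:.2f} C/century (95% CI)")
```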
So maybe a scientist does a sound piece of research using the adjusted data, including a detailed error calculation, and produces a well-written paper containing, say, a trend with error bars. What happens then? Well, it probably gets published in a reputable peer-reviewed journal which only makes articles available online behind a paywall. Scientists and academics have access because their institutions subscribe, but general members of the public (who have often paid for the research through their taxes) would have to pay $20-30 per throw to get at such stuff, and who can afford it? Particularly since, despite reading the abstract, you don't necessarily know until after you have bought a paper whether it really was worth it. Such is life!
Richard, splitting my response to your post this part relates to your statement “Firstly, the GISS changes continue to occur almost every month and have done this [link supplied]. Such major changes invalidate any assessments based on the data from earlier ‘versions’.”
The link is interesting because it brings out a few points.
Firstly, the 1987 adjustments were made more than ten years before Mann published his hockey stick paper. In those days everyone assumed that temperature variations were more or less random with no detectable long-term trend, but in a few thousand years time there was likely to be an ice age. In my mind is an image of Hansen poring over the temperature data and asking himself whether anyone but a few academic researchers was ever going to be interested in a new graph of adjusted GISS historical temperature records.
So even if you believe in some sort of AGW conspiracy post-Mann, it clearly was not in existence in 1987, so whoever was responsible for the data set changes clearly thought it was the right thing to do.
Since it is almost impossible to attribute an ulterior motive to the 1980 / 1987 differences, why is there such keenness to do that for the 1980 / 2007 or 1987 / 2007 differences? Why attribute a change of motive?
Secondly, the most important thing is surely whether the most recently published version is correct. The existence of older versions with known flaws does not change the confidence in the most recent version, and there should be an accompanying set of error data with it to allow error bars to be placed on results using that data.
Thirdly, although the data set does change regularly, in general any significant set of changes is denoted by giving the data set a new version number. Here is the scheme used for the GHCN-M dataset, which is used as source data by the GISS processing:
‘The formal designation is ghcnm.x.y.z.yyyymmdd where
x = major upgrades of unspecified nature to either qc, adjustments, or station configurations and accompanied by a peer reviewed manuscript
y = substantial modifications to the dataset, including a new set of stations or additional quality control algorithms. Accompanied by a technical note.
z = minor revisions to both data and processing software that are tracked in “status and errata”. ‘
It is surely only worth repeating any assessments when the x or y control number changes. This does not seem to be on a monthly basis, but rather less frequently than annually. Further, if the results from using a new subversion (change in y) are pretty similar to those from the previous subversion then again it is not worth a full assessment. In other words the guys supplying the data go out of their way to make life easy for you.
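As a minimal sketch of how that designation could be used mechanically, assuming the quoted ghcnm.x.y.z.yyyymmdd scheme (the example file names below are hypothetical):

```python
import re

# Parse the formal designation quoted above: ghcnm.x.y.z.yyyymmdd
PATTERN = re.compile(r"ghcnm\.(\d+)\.(\d+)\.(\d+)\.(\d{8})")

def needs_reassessment(old_name: str, new_name: str) -> bool:
    """True if x (major) or y (substantial) changed between two releases."""
    old = PATTERN.search(old_name).groups()
    new = PATTERN.search(new_name).groups()
    return old[:2] != new[:2]   # compare x and y only; ignore z and the date

# Hypothetical release names, purely to show the version logic.
print(needs_reassessment("ghcnm.3.2.0.20120105", "ghcnm.3.2.5.20140601"))  # False
print(needs_reassessment("ghcnm.3.2.5.20140601", "ghcnm.3.3.0.20140801"))  # True
```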
RichardSCourtney said ‘The asserted “total waste of time” provides unquantified error to every analysis which uses the data set. No scientist can find that acceptable.
So, it IS “important to know what errors were present in every single one of the past versions of data”. And that can only be determined in falsifiable manner by knowing “exactly how correction of these errors has changed every single station reading”‘
My reading of your statement is that you are saying that you wish to know about every single station reading in every single version of a temperature data set because somehow it affects the errors present in the most recent version of the data set. If that is not what you are saying then you had better correct me.
Each new version of GISTEMP or UKHCN stands alone. The metadata documents all errors, anomalies and identifiable inaccuracies which have ever been discovered with the raw or source data, and the code release for each version handles all of these. They are documented and handled not as fixes to a single weather station and/or single reading, but as a fix to a situation which is going to occur for multiple weather stations and/or readings. Further, there will be an accompanying set of error data with each release of "best estimate" temperature data.
So we have a current list of situations handled by the current code. It is difficult to see how the subset of situations handled by previous versions of the code has any bearing at all on the best estimate temperature data or the error estimates for a current data set. Typically an old version would handle only a subset of the anomaly situations, or an inferior algorithm for a particular situation.
In other words once BEST (or whatever process you are using to validate a data set) had done comparisons with GISTEMP (or some other dataset) at a particular version and validated the processing, then there is no more to be said about any previous versions and no point in making comparisons between older versions pre-validation and newer versions post-validation.
Even if there is no validation process, the presence or absence of differences between the current and past versions of a data set (particularly ones from 20 years ago) does not change the validity or error estimates for the current version, though they might indeed change the error estimates for the older versions.
Peter:
Thank you for your post at July 6, 2014 at 7:24 am, which attempts to address my objections to your assertions.
Firstly, you say to me
What I “wish to know” is not relevant. Scientific standards require publication of each and every change to each and every datum because the data has been changed and somebody who wants to use it may need to know the change; n.b. each and every change to each and every datum.
The effect of what you say is for you to be claiming that an assertion of “Trust me cos I’m a scientist” is an adequate replacement for a detailed exposition of each and every change to each datum. It is not an adequate replacement.
You make an extraordinary statement about the changes when you write
The documentation is inadequate if it does not apply to each change. Simply, an undergraduate would get a FAIL mark on an assignment in which she said her documentation concerning changes did not report the changes to each “single weather station and/or single reading”.
You conclude saying
The data changes most months and previous versions differ dramatically. As I said,
You now say to ignore those earlier versions, but those versions – which were used to justify political actions – were asserted to be accurate before the subsequent changes were made. Frankly, the GISS history of error estimation provides doubt concerning the validity of the present error estimates.
Richard
Richard said “Scientific standards require publication of each and every change to each and every datum because the data has been changed and somebody who wants to use it may need to know the change; n.b. each and every change to each and every datum.”
The computer run to produce a specific version of an adjusted dataset always starts with the raw data and not with a previous version of the dataset. Thus there is no scientific or statistical requirement to document changes by station from previous versions of the dataset.
The documentation required is the following :
1) Raw input data
2) Specification of the processing which will be done on the raw data.
This will typically use language describing common conditions. For instance, it might describe how to handle missing readings from stations which would normally report. This is a generic process – there is no requirement to list in the specification all the days missing for each station and new data may come to light at some point which fixes the problem in later runs.
3) The computer code which does the processing.
Someone wishing to validate the process can read this code to look for problems and may wish to amend it to make it easier to check how the anomalies relating to certain stations were handled if it does not already write out interim files after each adjustment step. The code should clearly handle all the situations documented in the specification correctly. Another method of checking the code would be to write an independent set of code from scratch (which has been done for GISS adjustment code) and then just compare the two sets of output which should then be substantially the same, though not necessarily absolutely identical. It is very unlikely that the same bug will be present in two independently written versions of code although they might handle the same condition in very slightly different ways.
4) The adjusted output data
All this stuff is available for GISS temperature data.
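A minimal sketch of that pipeline shape follows, with hypothetical stand-in adjustment steps (the real algorithms are in the GISS code linked earlier); the point is the interim file written after each step, as described in point 3 above.

```python
import csv
import os

# Hypothetical stand-in adjustment steps -- these exist only to show the
# pipeline shape, not the actual GISS algorithms.
def fill_missing(rows):
    # generic handling of missing readings, per the spec, not per station
    return [r if r["value"] is not None else {**r, "value": 0.0} for r in rows]

def homogenize(rows):
    return rows  # placeholder for a documented homogenisation algorithm

STEPS = [("01_fill_missing", fill_missing), ("02_homogenize", homogenize)]

def run_pipeline(raw_rows, outdir="interim"):
    os.makedirs(outdir, exist_ok=True)
    rows = raw_rows
    for name, step in STEPS:
        rows = step(rows)
        # Write an interim file after each step so a reviewer can trace
        # exactly how each station reading was changed.
        with open(os.path.join(outdir, name + ".csv"), "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["station", "month", "value"])
            writer.writeheader()
            writer.writerows(rows)
    return rows

adjusted = run_pipeline([{"station": "X1", "month": "2005-08", "value": None}])
```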
No-one in their right mind is going to write a spec which includes a detailed set of processing requirements for every individual station – it is a total and utter waste of time as the information can be extracted from a computer run if necessary. It is possible that some conditions are in the spec and apply to only a single station and are documented as such, but this is likely to be a very small number.
The data on precisely what changes are made to each station reading during the different processing steps within a specific (normally the current) version of code are available from the computer run, if they are needed.
Now this is a definitive and complete set of information to tell an interested party exactly how the data adjustments have been done in a particular version for each and every station, if necessary.
You seem to be suggesting that such validation should include looking at all previous data set versions, but clearly from the above this is not necessary in order to meet the requirements of the users, the science and the statistics.
Peter:
You follow your fog of irrelevant detail in your post at July 7, 2014 at 4:28 pm with this statement
No, Peter. I am making two specific points which you are avoiding.
I copy those points from my most recent post to you at July 6, 2014 at 8:21 am although that post iterated them from earlier posts I put to you.
Point 1 repeatedly not addressed by ‘Peter’
i.e. basic scientific standards require that each and every change to each and every datum is fully described and recorded for each and every datum.
Point 2 repeatedly not addressed by ‘Peter’
i.e. Data now provided by GISS indicates that earlier data sets from GISS are so erroneous as to be worthless which renders worthless all analyses that used the earlier data sets and, therefore, the history of the changes GISS data suggests that all data from GISS is so erroneous as to be worthless: this suggestion is supported by the continuing process of regular changes which GISS continues to make to the data.
I hope the issues are now clear.
Richard
Richard,
It is fairly clear that your objections to the data adjustments in the GISTEMP data sets are mainly that you do not like the results, rather than being based on a clear understanding of precisely how and why temperature data is adjusted.
In the unlikely event you wish to learn about temperature data sets and how they are processed, rather than just making evidence-free political comments, there is an organisation – http://www.surfacetemperatures.org/ – which is making available temperature data all the way from sources such as images of hardcopy through to the finished adjusted data set, using supplied code. It also welcomes involvement from outside the climatology community.
So the opportunity is there to find out for yourself if you wish to avail yourself of it.
For [b]Point 1[/b] Richard Courtney said “.. basic scientific standards require that each and every change to each and every datum is fully described and recorded for each and every datum.”
You are making up requirements that do not exist. Here is why.
I was responsible for the design of multiple government systems to track farm animals and their diseases in the UK. The first databases indeed had two time axes – one was the date of an event (e.g. the date a cow was born and who its parents were) and the second was the time when the system was notified of that event (e.g. one week after the birth), which had to be after the event itself, naturally. You could answer questions like "what was our view on 1st January 2005 of the count of cows and their split by breed?" The system had this capability on the source data feed, reformatted, and final query versions of the database.
However, in practice this led to a complex and expensive system to build and maintain and the capability was almost never used. The later systems retained the second time stamp on the source data feed only, eliminated the reformatted stage and the final query version became just the most up-to-date version. But should anyone want to go back to the view of a particular animal or farm on a specific date, then it was possible to do this because the information was available in the source data. A competent data analyst could either examine the source records and build a report from them, or the system team could rebuild a version of the database in another place as it would have been on a particular date allowing the standard query tools to be used against it.
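A minimal sketch of that two-time-axis ("view as of") idea, using the cattle example with an invented record layout:

```python
from datetime import date

# Each record carries two time axes: when the event happened (event_date)
# and when the system was told about it (recorded_date). Layout is invented.
births = [
    {"cow": "A1", "breed": "Friesian", "event_date": date(2004, 11, 2),
     "recorded_date": date(2004, 11, 9)},
    {"cow": "B7", "breed": "Hereford", "event_date": date(2004, 12, 20),
     "recorded_date": date(2005, 1, 4)},   # notified after 1 Jan 2005
]

def view_as_of(records, as_of):
    """What the system believed on a given date: only events it had
    been notified of by then count."""
    return [r for r in records if r["recorded_date"] <= as_of]

# "What was our view on 1st January 2005 of the count of cows?"
print(len(view_as_of(births, date(2005, 1, 1))))   # 1 -- B7 not yet notified
```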
You do not appear to distinguish between the raw data and the adjustment processing. The raw data for a particular station [b]does not normally change[/b] between published versions of the adjusted data output. Perhaps occasionally someone might find an additional paper record for a station that was not available before, but this is once in a blue moon.
The processing between versions is different – the new version's processing is an incremental improvement on the old version's. But the spec, code and adjusted output data are available for each version, and from that a competent person can analyse why a particular reading (e.g. the global monthly average for June 2005) changed between versions. It would be a waste of expensive resource for the GISS team to go any further, because you cannot really guess what a particular user of the data might want in the way of comparing versions. The user of the data can get what they want out of the data – the capability is there – but they have to be sufficiently computer-skilled (which is pretty much a prerequisite for most of this stuff).
At an overview level it is pretty clear why the new version adjusted output differs from the old version output – it was because the old version did not handle documented anomaly [b]types[/b] X, Y and Z which are handled in the new version and documented as new in the new version spec.
As far as odd changes to the most recent version output data caused by some records coming in late, this would only affect the most recent few weeks of the adjusted output, and any competent user would know this. Generally the GISS event log documents this. The easy way out of it is to avoid using the most current data (maybe for a period of a few weeks) when doing an important analysis. Or just use it but expect changes – but if you do, don’t come back and moan when the data for the most recent few weeks does change.
There's no "trust me I'm a scientist" involved in the comparison for a particular purpose of current with past versions, but certainly whoever does the comparison is going to have to be a competent computer programmer. All the stuff required is there.
[Square html brackets do NOT work under the WordPress settings on this site. Use normal html coding angled brackets only. We recommend the “Test” page to verify your work. .mod]
Richard Courtney’s Point 2 was “… data now provided by GISS indicates that earlier data sets from GISS are so erroneous as to be worthless which renders worthless all analyses that used the earlier data sets and, therefore, the history of the changes GISS data suggests that all data from GISS is so erroneous as to be worthless: this suggestion is supported by the continuing process of regular changes which GISS continues to make to the data.”
(Aside – how do you highlight text here? BBcode tags don’t work. I’m trying raw HTML tags next!)
Richard, if you are talking about the fact that anything based on the most recent couple of weeks of data changes then this is just a fact of life as data feeds can go wrong and when they do it takes a short while to fix matters. The user of the data always has to be aware of this.
Clearly the adjusted data output also changes between versions. Assume the same model for anomalies found and corrected as you would for a system test of a new IT system, but extended over a period of years or decades rather than weeks or months.
In the early days you will easily find gross errors and will find them at a fast rate, plus some of the more subtle errors. However, at least for an IT system test, there are going to be a few subtle errors you could not find at this point as they are masked by other errors.
As time goes on then most of the gross errors will have been found, and the rate of finding errors will drop. If you plot total errors against time then it will be an asymptotic curve. As a project manager you can work out when you are likely to finish system test by seeing where you are on that curve. Note that you expect you will only find a proportion of errors in testing (hopefully a high proportion of them) – not all of them.
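A minimal sketch of fitting such an asymptotic curve, assuming a saturating-exponential model and fabricated defect counts (one common choice among several possible models):

```python
import numpy as np
from scipy.optimize import curve_fit

# Cumulative defects found vs. time, fabricated purely to illustrate the idea.
weeks = np.arange(1, 13)
found = np.array([12, 21, 28, 33, 37, 40, 42, 44, 45, 46, 47, 47])

# Saturating exponential: total * (1 - exp(-rate * t)).
def model(t, total, rate):
    return total * (1.0 - np.exp(-rate * t))

(total, rate), _ = curve_fit(model, weeks, found, p0=(50, 0.2))
print(f"estimated total findable defects: {total:.0f}")
print(f"found so far: {found[-1]} ({found[-1] / total:.0%} of the estimate)")
```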
So where is GISS on the curve of finding and fixing anomalies in the raw data? The only examples you supplied were for 1980, 1987 and 2007, and to be honest the 1987-to-2007 comparison is pretty worthless, because you need more points than these to define the curve, particularly more recent points. There were certainly significant changes visible between the three years.
Nick Stokes (a reputable climate data analyst) has said that the GISS team make very few changes to their processing methods nowadays and that little changes in the historical adjusted data. In the absence of any evidence to the contrary we might as well believe Nick. If it is thus increasingly difficult to find new anomalies to fix, that is indicative that the adjustment process copes with most of the anomalies present in the raw data and that the adjusted data output is mature and likely to be approaching its ultimate accuracy.
The fact that there are huge changes between, say 1987 and 2007 only tells you that 1987 was wrong – it says nothing about the state of the 2007 data, for which you would need to know the rate of changes going on around 2007.
A V2 to V3 comparison of data around 1980 (as in your three graphs) would give a good indication of the state of maturity of the GISS feed. In the absence of any such comparison Nick’s comments indicate that the quality of the GISS feed is now high. Comparisons with 1980 and 1987 are worthless in this regard.
Peter:
Your post at July 8, 2014 at 2:11 am says
That explains everything; i.e. you are an example of Sir Humphrey Appleby.
It is no wonder that all I have obtained from you is obfuscation and bloviation.
And the fact that you messed up the task of designing “multiple government systems to track farm animals and their diseases in the UK” is no reason for GISS to also abandon the scientific method.
Richard
Peter:
At July 10, 2014 at 12:51 pm you say to me
NO!
Anybody who reads the thread can see that is not true.
I have repeatedly asked for specific justifications using as many different forms of words as I could. You have replied with evasion and obfuscation.
I know the published details of what is done to “adjust” the data and said so when you first posted links to such procedures as one of your evasions. And I will not lower myself to share in “involvement” with it.
What is done is not at issue. At issue are
(a) the validity of what is done
and
(b) the effects of what is done.
I have been querying (a) and (b) and you have tried to discuss details of what is done as a method to avoid the discussion.
Richard