Opensource Temperature Data Monitoring Spreadsheet Updating Methodology

Walter Dnes – Click the pic to view at source

Image Credit: Walter Dnes

By Walter Dnes – Edited and Comment by Just The Facts and Werner Brozek

In my recent WUWT post I introduced an Opensource Temperature Data Monitoring Spreadsheet. The intent of this spreadsheet is to provide several of the key global temperature data sets in a standardized summary format, so that you can easily leverage these data sets for research. In this post I explain how to update the spreadsheet, and at the end of the post is a comment from Just The Facts on the base periods used by each of the data sets in the spreadsheet.

This thread will have two parallel tracks:

  • One for people who want to copy the online spreadsheet to their Google account. This will be referred to as “online”.
  • One for people who want to download the spreadsheet to their hard drives. This will be referred to as “local”.

To download the spreadsheet to your local computer:

  1. point your web browser to https://docs.google.com/spreadsheet/ccc?key=0AnTohu4oFUbcdEgzTkpEYTAwN1BiXzJXMXZ5RVJiOUE&usp=sharing
  2. select “File ==> Download as” from the spreadsheet menu (highlighted in blue in the image below), not the browser menu, (highlighted in red in the image below)
  3. choose the file type you want and download; .xlsx for Excel; .ods for OpenOffice/LibreOffice

[Image: the spreadsheet’s “File ==> Download as” menu]

The graphs in Google spreadsheet don’t always translate well during the download process. You may have to rebuild them yourself. Here are the specs if necessary…

All graphs are XY graphs, and refer to data in tab “temp_data”

Column A is the X axis for all graphs. The Y data for slope graphs are as follows…

  • HadCRUT3 column I
  • HadCRUT4 column J
  • GISS column K
  • UAH column L
  • RSS column M
  • NOAA column N

For the 12 month running means, the header row is cells P11 through U11.

And the Y data is columns P through U from row 12 on down.

IMPORTANT NOTES FOR ONLINE VERSION USERS.

  • You will usually see 2 sets of menus, one from your web browser, and one from the online spreadsheet. Unless otherwise stated, all references to File/Edit/View/etc will be to the spreadsheet menu (highlighted in blue in the image above), not your web browser’s menu, (highlighted in red in the image above)
  • The numbers in column A are as they were originally imported from my PC. Do not attempt to reformat column A. Google sticks in commas in the year values when you reformat the column. Years with commas don’t look right. This is bad enough… *BUT IT ALSO CARRIES THROUGH TO ANY GRAPHS WHICH USE THE YEAR VALUE*! I’ve seen posts on the internet claiming that you can format a number as “plain text” and use it for mathematical operations, while hiding the commas. That may be true, but it has been my experience that “plain text” formatted numbers do not work with graphs in Google’s spreadsheet.
  • Just as important as being able to do stuff, is being able to undo stuff, especially when you do “the wrong type of oops programming”. There are 2 options in Google spreadsheets. First is the standard {CTRL-Z} key combo that undoes the previous step. You can undo several steps this way. If you want to go back further in time, go to the spreadsheet menu and click on “File ==> See revision history”. Note that this can take a minute or two to render on the screen. Don’t panic while waiting. Once you step back to an earlier version, I don’t know if you can revert to a later version.

Obtaining the temperature anomaly data sets.

The various data sets are updated at different times of the month. The following are approximations, and vary from month to month.

  • UAH and RSS around the 7th of the following month
  • GISS and NOAA/NCDC around the 15th of the following month
  • HadCRUT (v3 and v4) near the end of the following month

The data can be downloaded for free from the appropriate website. The source URLs for the data may change from time to time. As of the time of this post (early June 2013), they are…

The data are all text files. Due to filename extensions and/or mime-type declarations by the webmaster, some of the data sets may not be directly viewable with a web browser. You can still download them to disk and view with a plaintext editor.

Preprocessing into normalized/column format (GISS only)

All of the sites listed above, except GISS, provide downloads in column format, which is suitable for importing directly into a spreadsheet. The GISS download must first be reformatted into column format. That is easy with a bash script under Linux or a Powershell script under Windows. The scripts are covered later in this post. In addition, GISS data consists of integers representing the anomaly * 100. To get the actual monthly anomaly in degrees C, divide by 100.
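For example, the divide-by-100 step can be done with a one-line awk filter. The sample input values below are made up for illustration, not real GISS numbers:

```shell
# Convert GISS integer anomalies (hundredths of a degree) to degrees C.
# Input lines here are illustrative "decimal_year, anomaly*100" pairs.
printf '1880.083, 35\n1880.167, -19\n' | awk -F, '{ printf "%s, %.2f\n", $1, $2/100 }'
# prints:
# 1880.083, 0.35
# 1880.167, -0.19
```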

Overwriting the appropriate column in tab “temp_data”

The next step is to copy updated data to its column in tab “temp_data”.

IMPORTANT NOTES!!!

You can *NOT* get away with appending the latest month’s data to the columns in tab “temp_data”. HadCRUT data changes 6 months or so back, apparently due to late data coming in and/or corrections. NOAA and GISS use a weird averaging algorithm for old missing data, which results in numbers changing all the way back to day 1 (i.e. January 1880) from month to month. To ensure a completely correct spreadsheet, you must overwrite the entire column with the numbers from this month’s download.
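You can verify this for yourself by diffing two consecutive months’ downloads; any historical rows that differ show why appending alone is unsafe. A minimal sketch, with made-up filenames and values:

```shell
# Compare last month's download against this month's (hypothetical files);
# any differing line is a historical value that was revised upstream.
printf '1880.083, 0.01\n1880.167, 0.02\n' > last_month.txt
printf '1880.083, 0.03\n1880.167, 0.02\n' > this_month.txt
diff last_month.txt this_month.txt && echo "no revisions" || echo "historical rows revised"
rm -f last_month.txt this_month.txt
```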

The imported data is *NOT* always copied starting in row 1. HadCRUT data (v3 and v4) is the oldest, going back to January 1850 (1850.083). That is arbitrarily assigned as row 1. NOAA and GISS start in January 1880 (1880.083), i.e. row 361. UAH starts December 1978 (1979.000), i.e. row 1548. RSS starts in January 1979 (1979.083), i.e. row 1549. If you wish to add other data sets to your copy of the spreadsheet, be certain that the dates align properly.
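All of the start rows above follow from a single formula: row = (year − 1850) × 12 + month. A quick sketch (the helper function name is mine, not part of the spreadsheet):

```shell
# Row in tab "temp_data" for a given calendar year and month,
# where row 1 = January 1850. Function name is illustrative only.
row_for() {
    echo $(( ($1 - 1850) * 12 + $2 ))
}
row_for 1850 1    # 1    - HadCRUT3/4 start
row_for 1880 1    # 361  - GISS and NOAA start
row_for 1978 12   # 1548 - UAH start
row_for 1979 1    # 1549 - RSS start
```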

The procedure is…

  • get the monthly temperature anomaly download
  • import it INTO A SEPARATE SPREADSHEET
  • select and copy the range of data cells in the separate spreadsheet, from the anomaly data column. Copy the range from the top row of the column of data to the bottom row of the column of data.

At this point, instructions diverge for local versus online users.

For local spreadsheet users
paste the copied range into “temps_raw”, such that the start of the paste range is the start of data in the column. That’s it for the monthly anomaly.

For online spreadsheet users

  • paste the copied range at the top of a separate page, or a separate
    spreadsheet. The important thing is that only 1 column of data is
    visible on the page.
  • export that page (with 1 column of data) as a CSV file. Remember the
    name of the file you exported to. Don’t worry if your spreadsheet
    complains that it’ll only save one page. That’s exactly what you want.
  • log in to your copy of the online Google spreadsheet.
  • go to row 1 in the right-most column of the online spreadsheet. This
    is cell V1 in the original version of the spreadsheet. It will change if you insert or delete columns (adding or removing data sets). Note that it is *VERY IMPORTANT* that you be in the rightmost column. The import process blanks 4 columns to the right of the one you’re importing. This includes wiping existing data if you import anywhere except at the far right… OUCH!
  • from the spreadsheet menu select “File ==> Import…”. This brings up an “Import file” dialogue.
  • click the “Replace data starting at selected cell” radio button.
  • click on the “Browse…” button, and select the CSV file that you exported a few steps ago.
  • this brings up a preview, and a “Separator character” option. You can ignore it, because you only have one column of data.
  • click on the “Import” button. When the import is finished, you should
    see data in column V
  • before copying over the data, the number of decimal places should agree with the destination column. The copy command copies the format, so you need to set the number of digits after the decimal. GISS uses 2 digits, NOAA uses 4, and the others use 3. If the number of decimal places displayed does not match…
    • click on the “V” at the top of column “V”. This will highlight the
      entire column.
    • click on “Format => Number” in the spreadsheet menu.
    • click on “Custom Decimals…” as shown in the following image.
    • this will give you a dialogue box asking for the number of decimal
      places. Enter the number of decimal places you need.

[Image: the “Format => Number => Custom Decimals…” menu]

  • Windows/Linux users hold down {CONTROL}{SHIFT} and press down-arrow. This will select the imported data. Any Apple Mac users know the equivalent combo on Mac?
  • press {CONTROL}{INSERT} to copy the data into the paste buffer. (Apple users?)
  • go to the start of the data set you are updating. They start in the
    following cells…

    • HadCRUT3 B1
    • HadCRUT4 C1
    • GISS D361
    • UAH E1548
    • RSS F1549
    • NOAA G361
  • press {SHIFT}{INSERT} (Apple users?)
  • check whether the data set pads the last few months of the series with fillers like “-999”. If so, delete those cells.
  • click on the V at the top of column V, and hit {DELETE}
  • on my web browser, I get 4 additional columns, W through Z. They
    should be deleted to reduce browser complexity. The process is…

    • click on the “W” at the top of column “W”
    • while holding down the {SHIFT} key click on the “Z” at the top of column “Z”
    • when you hover over the “Z” in column “Z” you will see a small triangle in the upper right-hand corner. Click on it and you get a drop-down menu. Select “Delete columns W – Z”, and the extra columns will disappear.
  • Common instructions for both local and online versions

    After the monthly temperatures have been updated, the slope and 12 month means need to be updated manually. The columns containing data sets are as follows…

    Data set   Monthly anomaly   Slope   12 month mean
    HadCRUT3   B                 I       P
    HadCRUT4   C                 J       Q
    GISS       D                 K       R
    UAH        E                 L       S
    RSS        F                 M       T
    NOAA       G                 N       U

    Here is what I did to update NOAA for April 2013 when the data came in.

    NOAA data for March extended down to G1959
    NOAA data for April extends down to G1960

    Slope

    For March, cell N1681 had the formula

    =slope(G1681:G$1959,$A1681:$A$1959)

    and the slope data extended down to cell N1958

    For April, I changed cell N1681 to =slope(G1681:G$1960,$A1681:$A$1960)

    [Image: editing the slope formula in cell N1681]

    • goto cell N1681
    • {CTRL}{INSERT}
    • goto cell N1682
    • extend select to N1959
    • {SHIFT}{INSERT}
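    For anyone who wants to sanity-check a slope cell outside the spreadsheet: slope() is ordinary least squares, which you can reproduce with a short awk pipeline. This is a sketch on made-up, perfectly linear sample data, not actual anomaly values:

```shell
# Least-squares slope of y against x over "x,y" pairs, i.e. the same
# calculation as the spreadsheet's slope() formula. Sample data is synthetic.
printf '1.0,2.0\n2.0,4.0\n3.0,6.0\n' | awk -F, '
    { n++; sx += $1; sy += $2; sxy += $1*$2; sxx += $1*$1 }
    END { print (n*sxy - sx*sy) / (n*sxx - sx*sx) }'
# prints 2 for this sample
```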

    12 month running means

    • goto cell U1959
    • {CTRL}{INSERT}
    • goto cell U1960
    • {SHIFT}{INSERT}

    The affected graphs will update automatically.
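    The 12 month running mean cells themselves just average the current row and the 11 rows above it. The same calculation can be sketched in awk, here on synthetic values 1 through 13 rather than real anomalies:

```shell
# 12-month running mean: each output value averages the current
# input value with the 11 preceding ones. Input values are synthetic.
printf '%s\n' 1 2 3 4 5 6 7 8 9 10 11 12 13 | awk '
    { v[NR] = $1
      if (NR >= 12) {
          s = 0
          for (i = NR - 11; i <= NR; i++) s += v[i]
          print s / 12
      } }'
# prints 6.5 then 7.5
```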

    Converting GISS temperature downloads to column format

    The GISS monthly anomaly data is downloadable as a text file at http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt The text file looks nice, but is not suited for importing into a spreadsheet. I run Linux, so I use a bash script to convert it to columnar format for use in a spreadsheet. I have also put together a Windows Powershell script to do the same conversion under Windows. In both examples below, assume that you have downloaded the February 2013 data as giss201302.txt and that you wish to save the column-reformatted data to g201302.txt. Both POSIX and Windows users need to remember to divide the reformatted temperatures by 100, to convert from the unusual GISS style to degrees C.

    For POSIX (Linux/Unix/BSD/etc.) users

    Windows users, skip to the “For Windows users” section

    The bash script is called “parsegiss” and is executed as follows

    parsegiss input_file > output_file
    e.g.
    parsegiss giss201302.txt > g201302.txt

    Here is the bash script. Copy the following to a text file, and chmod it
    executable. The script is 19 lines long.

    #!/bin/bash
    rm -f temp1.txt
    grep "^[12]" ${1} | sed "s/\*\*\*\*\*.*$/ /g" > temp0.txt
    decim=(000 083 167 250 333 417 500 583 667 750 833 917 000)
    wyear=`head -1 temp0.txt | cut -c1-4`
    while read xyear m[1] m[2] m[3] m[4] m[5] m[6] m[7] m[8] m[9] m[10] m[11] m[12] year_avg
    do
    if [[ ${xyear} -ge ${wyear} ]]; then
       xmonth=1
       while [[ xmonth -le 11 ]]
       do
          echo "${xyear}.${decim[${xmonth}]}, ${m[${xmonth}]}" >> temp1.txt
          xmonth=$(( ${xmonth} + 1 ))
       done
       wyear=$((${wyear} + 1))
       echo "${wyear}.${decim[${xmonth}]}, ${m[${xmonth}]}" >> temp1.txt
    fi
    done < temp0.txt
    grep -v ", $" temp1.txt

    For Windows users

    Powershell comes built in to Windows 8, and is available as a free download from microsoft.com for some earlier versions of Windows. This is the standard Windows scripting tool. It requires a bit of setup; I’ll deal with that here, to save some questions. Note: if you are using a work PC, or someone else’s PC, please get their permission before editing the user profile.

    • By default, Powershell comes up in restricted mode. I.e. it will
      execute only commands typed in at the command prompt. It will not
      execute Powershell scripts.
    • The minimum necessary permissions to execute your own scripts are
      obtained by typing the command…
      Set-ExecutionPolicy RemoteSigned -Scope CurrentUser
      …at the Powershell prompt each time you start a Powershell session.
      Needless to say, this is annoying grunt work.
    • This command can be included in your user profile, and will be automatically executed from there, each time you open Powershell. To do this…
      • open a Powershell window, and type the command…
        notepad $profile
      • if it opens up, add the line…
        Set-ExecutionPolicy RemoteSigned -Scope CurrentUser
        …to the profile, and save.
      • if you get an error message, about no such file, create an empty
        profile file. At the Powershell prompt type the command…
        New-Item -path $profile -type file -force
      • in the Powershell window, type the command…
        notepad $profile
    • Once you’ve set up the profile, close Powershell, and open it again, to have the profile take effect.

    Now that you have Powershell scripts executing, copy the script below to file “parsegiss.ps1”. Using the February 2013 data example, type the following command at the Powershell prompt

    .\parsegiss.ps1 giss201302.txt | Out-File g201302.txt -encoding ascii

    This example assumes your encoding locale is ASCII. Change as necessary if you’re in a part of the world that uses a different encoding. Here is the script parsegiss.ps1, which is 14 lines long.

    findstr /B 1 $args[0] > a.txt
    findstr /B 2 $args[0] >> a.txt
    $linearray = $(type a.txt)
    for ($i=0; $i -lt $linearray.Count; $i++)
    {
       [double]$baseyear = $linearray[$i].Substring(0, 4)
       for ($j=1; $j -le 12; $j++)
       {
           $deciyear = $baseyear + $j / 12
           [string]$stringyear = "{0:####.000}" -f $deciyear
           $anomaly = $linearray[$i].Substring($($j * 5), 5)
           if($anomaly -ne "*****"){$stringyear + ", " + $anomaly}
       }
    }

    _ _ _

    Walter Dnes – Click the pic to view at source

    It is apparent from Walter’s graphs above that the temperature data sets use different base periods. “World Meteorological Organization (WMO) Policy” “suggests using the latest decade for the 30-year average”, i.e. a 1981–2010 base period. However, even the WMO doesn’t comply with its own policy. For reference, the base periods for the data sets in the Spreadsheet are as follows:

    UAH 1981-2010
    RSS 1979-1998
    HadCRUT 1961-1990
    NOAA 1901-2000
    GISS 1951-1980

    According to this WMO Management Report on Climate Normals, one of the key arguments for using a “‘rolling’ set of 30 year Normals updated every 10 years (hereafter the 30/10 model)” is that:

    The use of more up-to-date Normals under the 30/10 model provides a more realistic base period for climate services. For instance, design standards and climatologies would be based on a more representative standard that better reflects possible changes in climate. By contrast, basing design standards etc on climate Normals that are up to 30 years out of date might raise significant credibility problems with the users of services and products (“can’t we get more recent data than this?”)

    Whereas in “the case against changing” to the 30/10 model and staying with a “30/30 model, i.e., a base period of 30 year normals, updated every 30 years, i.e. retain 1961-90 as the base period until 2021, when 1991-2020 will become the new base period”, the key argument for staying with the 30/30 model is:

    Some NMHSs would face large increases in workload to revise products and services currently based on the 1961-90 period. Knowing that the base period will need to be updated in 2020 is a very different proposition to having to update within perhaps the next two to three years, and then (depending on what base period model is selected) having to face regular updates thereafter. This problem might be mitigated if, for instance, WMO provided software to regularly update normals (based on agreed common standards with regard to, e.g., missing data), which may be possible with CDMS. Alternatively, the normals could be calculated centrally by global agencies such as NCDC.

    One might wonder why GISS fails to use 30/10 or 30/30, and instead uses 1951-1980. According to James Hansen:

    “We calculate seasonal-mean temperature anomalies relative to average temperature in the base period 1951-1980. This is an appropriate base period because global temperature was relatively stable and still within the Holocene range to which humanity and other planetary life are adapted (note 1).”

    “In contrast, we infer that current global temperature is above the Holocene range, as evidenced by the fact that the ice sheets in both hemispheres are now rapidly shedding mass (Rignot et al., 2011) and sea level is rising (Nerem et al., 2006) at a rate (more than 3 mm/year or 3 m/millennium) that is much higher than the rate of sea level change during the past several millennia.”

    The key takeaway from this is to take care, as some of the temperature data sets in the spreadsheet have intentionally arbitrary and knowingly outdated base periods.

    JTF

    This entry was posted in Lower Troposphere Temperature, measurement, Temperature. Bookmark the permalink.

    15 Responses to Opensource Temperature Data Monitoring Spreadsheet Updating Methodology

    1. Gary Pearse says:

      “NOAA and GISS use a weird averaging algorithm for old missing data, which results in numbers changing all the way back to day 1 (i.e. January 1880) from month to month. To ensure a completely correct spreadsheet, you must overwrite the entire column with the numbers from this month’s download.”

      And by 2080, 1880 will have been pushed down perhaps a degree or two and be well below any reasonable error bars on the actual old measurement. How can this stuff be called data? What use is it for other than supporting the CAGW meme? Hansen’s GISS deep-sixed the pesky 1936 temperature record in 1998 to proclaim a new record. I see that 1936 is now about 0.4C or so below 1998 in your table. Below are the adjustments MADE just in 2008 and these don’t include the stuff before and after that date.

      http://climateaudit.org/2010/12/26/nasa-giss-adjusting-the-adjustments/

      I think your effort is begging for another graph showing all these adjustments. Surely such a piece of work should have a graph showing the unadjusted raw data, to be an open-source service to those researching climate. I’m one who likes his peanut butter without jam.

    2. Björn says:

      I use a Linux (Mint 12+) setup on my main box, and use Firefox for browsing (usually the next-to-latest or latest update; I usually delay updating to the latest one for 3-4 weeks to allow some external addon plugins time to catch up). And there seems to be something amiss if I try to download the Google spreadsheet to my local drive: I can not see any “download as” choice visible under the file menu, and all other choices are greyed out (could be some kind of a permission thing). I managed to download the blasted thing though, by firing up a VirtualBox WinXP instance and downloading in an ancient Windows version of Firefox, even though the Google spreadsheet complained about the browser’s version, so I guess I can live with that if I have to.

      There is one more file type I would like to see in the downloadable list, though. I do not normally use Windows, so I usually frown upon the Excel file types, especially the .xlsx one, and for handling time series and statistics the LibreOffice Calc toy is really unusable, as it slows down to a snail’s pace if you create more than 2 or 3 graph sheets. Gnumeric has therefore become my tool of choice for this kind of work; it really outshines both Excel and LO-Calc in this respect, with its built-in versatility and its outstanding graph capabilities. So how about offering a “temperatures.gnumeric” file (which is in reality a very versatile XML description of its content) in the download choices as well? I think LO-Calc has either built-in knowledge of it or an external plugin addon for saving a spreadsheet in .gnumeric format, and I would not be surprised if the same is true for Excel.

    3. walterdnes says:

      Björn says:
      > June 9, 2013 at 1:28 pm

      > I use a Linux ( Mint 12+) setup on my main box, and use firefox for browsing

      I run Gentoo linux with Firefox ESR 17.0.5

      > And there seems to be something amiss if I try to download the Google spreadsheet
      > to my local drive: I can not see any “download as” choice visible under the file
      > menu, and all other choices are greyed out (could be some kind of a permission thing).

      Make sure you’re using the “File” menu from the Google spreadsheet, not the “File” menu from Firefox. See the 2nd image in this post. It can get confusing.

      > and for handling time series and statistics, the LibreOffice Calc toy is
      > really unusable as it slows down to a snail’s pace if you create more than 2 or 3
      > graph sheets; Gnumeric has therefore become my tool of choice for this kind of
      > work, and it really outshines both Excel and LO-Calc in this respect, with its
      > built-in versatility and its outstanding graph capabilities,
      > so how about offering a “temperatures.gnumeric” file

      I also use Gnumeric at home. OO has problems, to say the least. I had originally intended to export Gnumeric to Excel for upload to WUWT. Let’s just say that the translation did not work. Even worse, I…
      * exported Gnumeric to Excel
      * went to the local library and imported into Excel (Office 2003)
      * rebuilt the graphs
      * emailed the Excel 2003 spreadsheet to JTF
      * even that had major problems importing into the latest Excel!!!

      If you want, try my home version (Gnumeric) with lots of graphs (sized for a 1920×1080 monitor) and no documentation, at http://www.waltdnes.org/misc/temperatures.zip you can either wget it or download via Firefox.

    4. walterdnes says:

      Gary Pearse says:
      June 9, 2013 at 1:17 pm

      > I think your effort is begging for another graph showing all
      > these adjustments. Surely such a piece of work should have
      > a graph showing the unadjusted raw data to be an open-source
      > service to those researching climate. I one who likes his peanut
      > butter without jam.

      There have already been a lot of articles, some of which you’ve noted. What we really need is to go back to the earliest GISS data. Does anyone know when Hansen published his first GISS data? I assume it would’ve been in paper form, before the days of the web.

    5. Björn says:

      Thank you for the gnumeric link Walter!!!

    6. Donald L. Klipstein says:

      Gary Pearse says in part on June 9, 2013 at 1:17 pm:

      “And by 2080, 1880 will have been pushed down perhaps a degree or two and be well below any reasonable error bars on the actual old measurement. How can this stuff be called data? What use is it for other than supporting the CAGW meme? Hansen’s GISS deep-sixed the pesky 1936 temperature record in 1998 to proclaim a new record. I see that 1936 is now about 0.4C or so below 1998 in your table. Below are the adjustments MADE just in 2008 and these don’t include the stuff before and after that date.”

      http://climateaudit.org/2010/12/26/nasa-giss-adjusting-the-adjustments/

      The second link is for USA temperatures, not global. The 1936 peak is a USA peak, not a global one. I don’t think adjusting USA temperatures is the biggest problem with GISS. I have a bigger beef with significant ocean areas represented by land temperatures.

    7. Lance Wallace says:

      Thanks for the reference to the different base periods used. It’s interesting to plot the temperatures by putting everything on the same base period. I chose 1981-2010 (same as UAH and agreeing with the WMO recommendation).

      The resulting graph is here.

      One can also see a bit of internal variability, particularly marked in the two satellite datasets. The slopes for the 30-year period agree, with GISS and HADCRUT4 slightly high at 1.8 degrees C per century, RSS and HADCRUT3 low around 1.6, and UAH and NOAA middling at 1.7.

      One can also see a bit of internal variability, particularly marked in the two satellite datasets. These had R^2 values on the order of 40%, compared to >60% for the other four datasets. I’m not certain how to interpret that. Possibly higher measurement error in the satellite devices?

    8. Lance Wallace says:

      Arggh–sorry for duplicating a sentence just now.

    9. Gary Pearse says:

      Donald L. Klipstein says:
      June 9, 2013 at 2:40 pm

      The 1936 peak was for USA. Yes, you are correct as it concerns GISS; however Canada also had the same thing, and the all-time hottest temperature in Saskatchewan (the Prairies get the summer heat) still stands (we Canucks can’t afford the hordes of government climate scientists found in the US):

      http://www.ec.gc.ca/meteo-weather/default.asp?lang=En&n=6A4A3AC5-1

      “The Deadliest Heat Wave in History – July 5-17, 1936. Temperatures exceeding 44°C in Manitoba and Ontario claimed 1,180 Canadians (mostly the elderly and infants) during the longest, deadliest heat wave on record. Four hundred of these deaths were caused by people who drowned seeking refuge from the heat. In fact, the heat was so intense that steel rail lines and bridge girders twisted, sidewalks buckled, crops wilted and fruit baked on trees.
      Hottest Day on Record – July 5, 1937. The highest temperature ever recorded in Canada was reached at Midale and Yellowgrass, Saskatchewan when the mercury soared to 45°C.”

      a look at Greenland (this study is a fisheries study so I trust it, scroll to page 6):

      http://journal.nafo.int/39/stein/1-stein.pdf

      Central England Temp (CET)

      http://www.climate4you.com/CentralEnglandTemperatureSince1659.htm

      Note the 30s are higher than 1998, too.

    10. Björn says:

      Hi again Walter, a couple of additional comments, regarding my first comment, plus a little more.
      In the first run only read the second part of your answear to my comment , and I was using the correct file menu (the google docs one ) when I first tried to directly download the spreadsheet from my linux desktop. And the firefox version number is 19.02, so it is possible there are some bugs cocu in the newer version , when the “download as” item did not show up under the goggle docs files , and as I told you it did show up in an older firefox version under a WinXp instance virtual run, and I downloaded the .ods version that way, LO-Calc took ages to open it, and none of the graphs showed up, the graph sheets came up but all with an empty white background. Then after you posted the link to your gnumeric file I downloaded that one and did few experiments with it and here are my results : I saved two new instances of it from gnumeric ( the version number of the one I use now is 1.10.17) one instance as an MS Execel 97/2000/Xp .xls type , and another one as a ODF type with foreign elements (.ods), Gnumeric read both of them back without trouble and they were identical to the orginal .gnumeric file I downloaded from your link. Then I tried to read them both into libreOffice-Calc ( Version 3.4) , calc took ages to read the .ods file and came up with all 10 sheets loaded and the tempdata sheet open and apparently correct, but froze solid as soon as i tried move betweeen sheets, so I had to open a terminal and kill it manually to, I also noticed on comparision and inspection that the reason I saw no graph in the temperature.ods file I had managed to downloaded previously from within the WinXp-virtual machine. seems to be that there was only one sheet (named temp_data, with only 3 or 4 first colum containing any data ) where there were two sheets ( temp_raw , temp_final , with a lot more data in it ) present in the .gnumeric instance, and that the graph_sheets were all referencing emtpy column or noexisting data for the graphs.
      I also went into the virtual WinXp machine again copied all l 3 files to it’s workspace and the tried to read them inside it , with both a relatively recent exel viewer instance ( I do not have a win office installed there ) that understands at least the 2007 version of excel file types ( and I think 2010 types also , but 100% sure), it read the Gnumeric created .xls files fine and presented it and all graphs correctly, except in one detail it placed the horisonal axis ( x-axis ) in the middle of the first graph only (on the zero line in the temps_graph ) instead of below the chart area as iin the original ( it may be due to some default setting somwhere in the exel-viewer that ) all the rest was fine, so my guess is yhat a file saved from gnumeric in this type of exel format migth come through in the newer versions even if it bungles when trying to save in later types of exel formats, the exel viewer bungled both .ods instances, but I expected that, I think the ODF document format is foreign to it, or it perhaps needs to have an additional converter installed that I have not put up. 
I also tried to read all three files into both a Windows version of gnumeric that is installed in the virtual Win engine, and a libre office instance ( ver 3.somthing ) that is also there ,gnumeric came through in all three cases with flying colors loaded the files quite fast, and all three copied instances were identical to the orginals.The winXp instance of Libre Office behaved the same as under Linux, took a a good deal of time loading the first (oldest .ods file) showed it with a tab for each graph_sheet but an empty graph frame under each one, and a single temp_data as a first sheet an no data in the columns or a missing sheet for the the graph sheets references, and a froze stiff on the other two.There must a be some big cock up in some the graph creation / handling section of LO-calc ,( as I said before put up more than 2 -3 graphs and it quits moving ) beacause as a final test i fired up an pre-libreOffice version of openoffice (OO verson 2.4 ) and to my surprice it opend all three files , and did almost as well as Gnumeric, a tiny bit slower but showed all the a content and presentation identical to the orginals. And finally I also had the newest OO-office instance from the Aphace-crowd on hand and it read all files just as fine and had no problems whith the graphs, or slow/noexistent navigation response, etc.

    11. walterdnes says:

      Björn says:
      June 9, 2013 at 6:23 pm

      > Hi again Walter, a couple of additional comments,
      > regarding my first comment, plus a little more.

      Thanks. I’ll try Openoffice. There is a bin package for Gentoo, but nothing from a source tarball. I’ll play around with it and see if I can get it to work with the graphs.

    12. walterdnes says:

      Given the problems translating between different spreadsheets, would people prefer a CSV text file with just the various monthly data sets, downloadable from WUWT? It would be up to people to do their own graphing/analysis/whatever. The CSV file would be updated as data comes in.

    13. Lance Wallace says:

      I like the idea of making all six datasets available and updated on WUWT. That is really helpful. I also had no problems with downloading your Excel file, but I really only used the first six columns together with the information on the baseline periods for each dataset.

      One suggestion–can you add BEST? This is definitely independent since it has 3 times as many stations, although as our host would point out, all of them lacking adequate metadata.

      Recently there was a discussion somewhere of the middle troposphere temperature time series. Possibly supplying the RSS and UAH data for the MT would be of interest. In fact, I haven’t been able to find the UAH data for the MT, so just a link would help me.

    14. walterdnes says:

      Lance Wallace says:
      June 10, 2013 at 1:39 am

      > Recently there was a discussion somewhere of the middle troposphere
      > temperature time series. Possibly supplying the RSS and UAH data
      > for the MT would be of interest. In fact, I haven’t been able to find the
      > UAH data for the MT, so just a link would help me.

      There’s a whole bunch of data at http://ghrc.nsstc.nasa.gov/hydro/search.pl?method=quick&keyword=AMSU&context=Any+field

      Click the green arrow to download. You may be more familiar with the older interface at http://ghrc.nsstc.nasa.gov/amsutemps/ which has an option to download data after plotting.

    15. AFPhys says:

      CSV is always a good choice for raw spreadsheet data. Please supply a link. Thanks.

    Comments are closed.