WUWT Video – Zeke Hausfather explains the new BEST surface data set at AGU 2013

I mentioned the poster earlier here. Now I have the video interview and the poster in high detail. See below.

Here is the poster:

BEST_Eposter2013

And in PDF form here, where you can read everything in great detail:

AGU 2013 Poster ZH

 


122 thoughts on “WUWT Video – Zeke Hausfather explains the new BEST surface data set at AGU 2013”

  1. Obviously, Zeke, Mosher and Robert are smart guys with the best intentions. It’s just that we cannot be sure that this methodology is producing true results.

    Take 20,000? stations and splice them up into 160,000 (now shorter) records and there is a much higher trend than the original raw records show.

    Okay, that tells you the methodology produces a higher trend. How does one double-check 160,000 spliced-up records? Why does the methodology produce a higher trend? What are the individual trend-raising rates for each particular component of the methodology? Why is there no warm 1930s period for the majority of the US in the numbers? When a record was set on July 8, 1936 in my hometown in Kansas, why does that not show up in the new reconstruction?

    I think best intentions do not mean the results are true. We need all the data and various other diagnostics to double-check the results.

  2. At no time was any allowance for UHI mentioned. Despite this, the resultant graphs were basically exactly what could be expected from the delta temp needed to make northern climes habitable.

  3. Anthony,
    I would like to see the poster numbered 996, to the right of the BEST poster. It is obviously about the global warming hiatus. Do you know where to find it?

    REPLY: I have coverage of that one planned – Anthony

  4. I always think in terms of “my backyard.”

    My backyard has a thermometer and a large garden and there are definitive guidelines of when things can be planted. The dates are exactly the same as they have always been for 100 years.

    One can tempt fate and try a little earlier. Nope, Mama Nature gives you a spanking.

    This year, the snow melted out at its latest date ever, going back 100 years. But, planting times remained exactly the same. The solar angle and the tilt of the Earth warmed the ground up at exactly the same time as it has always done.

    If it were truly 1.5C warmer, those dates would have changed by now and the snow would not have left at its latest date on record. My backyard tells the truth.

  5. Can’t wait till the “science is settled”; then we can argue about the satellite/Argo/balloon/surface-station quantification of the “problem”.
    The windmills as built must already be having an effect.

    Can’t figure out why these cost-effective solutions to saving the planet (and our grandchildren) are not being shouted from the rooftops.

    It can’t be due to a lack of transparency, we’ve been reassured.
    So there is that.

  6. @Zeke Hausfather at 12/13 6:32 pm in Dec. 12 post
    Stephen Rasey: the march of the thermometers meme was so 2010. Berkeley uses ~40,000 stations, and 2012 has more station data than any prior year (it increases pretty monotonically). GHCN-M version 3 (which NCDC/NASA now use) also has much more station data post-1992 than the prior version 2.

    Stephen Rasey at 12/15 12:49 pm
    @Zeke Hausfather at Dec 13, 6:32 pm

    …..
    Let’s take a look at BEST results for Iceland.
    You say that BEST has over 40,000 stations. The page lists 40,747.

    Dumb question #1. Are these 40,747 “stations”
    A) separate thermometer locations with potentially discontinuous records, before the application of the BEST scalpel? or
    B) virtual stations CREATED by taking slices from far fewer locations. For example, they are from 8,000 geographic locations with an average of 4 scalpel-slice “breakpoints” in each location’s records.
    C) Neither (A) nor (B).

    Under (B), you get more “stations” by making more breakpoints. The claim that you have more stations implies you have more coverage and better data. But if stations are created by new breakpoints, more stations hints at worse data, [shorter average record lengths,] greater uncertainty and more loss of low-frequency information.

    So what is closer to the truth?
    (A) where you have 40,747 station thermometer records you slice into 200,000 segments or
    (B) you have fewer than 10,000 thermometer locations you slice into 40,747 “stations.”

    This question has been out on a prior post, unanswered, for 3 days.
    Let’s see what GHCN-M Version 3.2.0 says:

    GHCN-M Version 3.2.0 contains monthly climate data from weather stations worldwide. Monthly mean temperature data are available for 7,280 stations, with homogeneity-adjusted data available for a subset (5,206 mean temperature stations). Data were obtained from many types of stations. For the global component of this indicator, the GHCN land-based data were merged with an additional set of long-term sea surface temperature data; this merged product is called the extended reconstructed sea surface temperature (ERSST) data set, Version #3b (Smith et al., 2008).
    Source: epa.gov

    If the “march of the thermometers”, i.e. the Great Dying of Thermometers, “is so 2010” as you say in ridicule,
    you must be implying (A);
    40,000 weather stations split into about 200,000 segments via BEST breakpoints.

    But it appears from GHCN-M to be (B).
    About 7,000 weather stations that you slice with break points into 40,747 station record segments.
    So what is it Zeke? A or B?

    Would 97% of general scientists and engineers agree with your assertion that BEST has “over 40,000 stations” and not 40,000 station record segments or station fragments from about 7,000 weather stations?
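The arithmetic behind scenario (B) is easy to check. A quick sketch, using only the counts quoted in this thread (the even split per station is, of course, an illustrative simplification):

```python
# Scenario (B) as the commenter frames it: how many scalpel breakpoints
# per station would turn the ~7,280 GHCN-M mean-temperature stations
# quoted above into 40,747 "stations"?
ghcn_m_stations = 7280      # from the GHCN-M v3.2.0 description above
best_station_count = 40747  # the count listed on the BEST page
segments_per_station = best_station_count / ghcn_m_stations
breakpoints_per_station = segments_per_station - 1
print(f"{segments_per_station:.1f} segments, i.e. {breakpoints_per_station:.1f} breakpoints per station")
```

That is, an average of roughly 4–5 cuts per station would be enough to reconcile the two numbers under scenario (B).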

  7. Andres Valencia says: December 18, 2013 at 4:24 pm

    There is too much confusion said the joker to the thief!
    Bob Dylan – All Along The Watchtower

    Apparently Dylan loved Hendrix’s version of the song: Quoted (somewhere), “I didn’t know it at the time, but I wrote that song for Jimmy”.

  8. I thought the argument was that the MWP, LIA & RWP were localized events and were therefore dismissed.

    Yet, this study seems to focus only on the US weather stations. Am I missing something?

    How can you have it both ways (logically)?

  9. …..as long as we’re quoting songs, when I look at BEST I think of David Lee Roth in ‘Panama’ …….”We’re running a little bit hot tonight”….

  10. Cannot speak to their methods or results, but kudos to Zeke for giving Anthony crisp, coherent answers on topic that a lay person could at least follow.

  11. Zeke Hausfather leaves me a little bit cold (no pun intended), midway through the interview, with his explanation for the absence of error bars. Sure, one can look it up, as he says, but shouldn’t the error range be shown in the presentation? A shaded range would not have made the graphs confusing. He seems bright and articulate (and I apologize for insulting statements I made in a comment a few days ago) but is it possible he’s being a little disingenuous about the confidence in those graphs?

  12. That’s odd… if you want to talk songs, climate stuff always reminds me of Mike and the Mechanics, “Taken In”…

  13. Stephen Rasey says: December 18, 2013 at 4:46 pm
    Would 97% of general scientists and engineers agree with your assertion that BEST has “over 40,000 stations” and not 40,000 station record segments or station fragments from about 7,000 weather stations?
    +++++++++++++++++++++++
    deserves an answer

  14. Mosher has declared innumerable times on the internet that, according to basic physics, adding CO2 to the atmosphere makes the atmosphere warmer. Keeping these declarations in mind, who thinks he will then produce any analysis that shows otherwise?

    Andrew

  15. Dylan also later rerecorded All Along the Watchtower in a style more similar to Jimi.

    Another similar pairing – after Mitch Ryder and the Detroit Wheels recorded Rock & Roll Lou Reed said that their version was much better than his and only after hearing their version did he understand how the song was supposed to be played.

  16. Interesting talk.
    If the original data came up with divergent trends of half a degree on a regional basis, from inputs with a +/- 1 degree error range, why would anyone be surprised?
    The trend is not significant. Otherwise known as noise.
    The BEST methodology diced and sliced this data into fragments, and reanalysed it to proclaim a trend of … warming of significance?
    Yet Steven Mosher tells us on the previous (poster) post that the data is crap…
    Sorry, but as any farmer knows, if you slice and dice crap, mix it up and spread it around, it’s called manure.

  17. Bill:

    Obviously, Zeke, Mosher and Robert are smart guys with the best intentions. It’s just that we cannot be sure that this methodology is producing true results.
    ################################################################

    Well, in fact you can. We produce a FIELD that is a prediction for the temperature at any location in the US. All you have to do is find a station that we don’t use in the estimation of the field and compare it with the prediction of the field. You can go to the Oklahoma Mesonet stations. They are better sited than CRN. You can pay thousands of dollars for that data and
    check the prediction. These stations were not used in estimating the field. We can also produce the field while holding out stations, then use the field to predict what we expect to find there.
    This, in fact, was what we did with the first paper, using only a few thousand sites.
    Finally, we know that the method is BLUE (best linear unbiased estimator).
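The hold-out check described above can be sketched in a few lines on synthetic data. This is only an illustration: it uses inverse-distance weighting as a crude stand-in for the kriging BEST actually uses, and the stations and field are made up.

```python
import numpy as np

def idw_predict(known_xy, known_temps, target_xy, power=2):
    """Predict the temperature at target_xy by inverse-distance weighting
    of the known stations (a simple stand-in for kriging)."""
    d = np.linalg.norm(known_xy - target_xy, axis=1)
    if np.any(d == 0):                      # target coincides with a station
        return float(known_temps[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * known_temps) / np.sum(w))

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(50, 2))                  # 50 synthetic station locations
temps = 15 + 0.05 * xy[:, 0] + rng.normal(0, 0.2, 50)   # smooth field plus noise

# Hold out station 0 and predict it from the other 49
pred = idw_predict(xy[1:], temps[1:], xy[0])
print(f"held-out value: {temps[0]:.2f}  field prediction: {pred:.2f}")
```

Holding out each station in turn and collecting the prediction errors is what gives the cross-validation statistics this kind of check relies on.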

    ###########################################

    Take 20,000? stations and splice them up into 160,000 (now shorter) records and there is a much higher trend than the original raw records show.

    ###################

    Wrong. When you have a station that moves from being located at 1000 feet ASL
    to 0 feet ASL, you have 2 stations. Treating these two stations as one is wrong.
    Further, adjusting this station is also filled with error. So we don’t stitch two
    different stations together (as GISS would do) and we don’t do a SHAP adjustment
    as NOAA and GISS and CRU would do. We treat the stations as two stations
    BECAUSE they are two stations. The same if the time of observation is changed.
    We don’t adjust. We treat them as two stations, because when you change the observation
    time you have created a new station. And when the station shelter changes, we don’t
    pretend this is the same station. It’s not. It’s a new station and a new shelter. We don’t try to
    adjust this. We say what skeptics like Willis said: these are different stations.

    Finally, we conduct a double blind test to prove out the method.

    #############

    I think best intentions do not mean the results are true. We need all the data and various other diagnostics to double-check the results.

    The code and data have been available for years. SVN access with the password is posted.
    I note that you haven’t even looked at it.

  18. Tom J says:
    December 18, 2013 at 5:32 pm
    Zeke Hausfather leaves me a little bit cold (no pun intended), midway through the interview, with his explanation for the absence of error bars. Sure, one can look it up, as he says, but shouldn’t the error range be shown in the presentation?

    ####################

    Not really. Posters are works in progress, basically showing people in an informal way what work you are doing. When the dataset gets published, then of course you would add whatever detail the reviewers wanted.

    One thing Zeke left out is this:
    1. We used GISS at 250 km interpolation. They don’t publish error bars, so we can’t plot
    what doesn’t exist.
    2. MERRA does not publish uncertainty, so we can’t plot what doesn’t exist.
    3. NARR doesn’t.
    4. UAH doesn’t.
    5. PRISM doesn’t.
    6. NCDC doesn’t.
    7. RSS doesn’t.

    So basically we can’t plot what doesn’t exist for the other guys.

    In short, all the estimates for CONUS lie on top of each other when you just look at the temporal dimension. That’s not what is interesting. Let me repeat that. The point of the dataset is not
    to compare the overall trend of all these datasets. The point is to examine the spatial differences. Sure, the overall trend matches, but what does it look like if you get higher spatial resolution? We have stations every 20 km… why average that data into 5-degree grid cells like CRU or GISS does?

  19. Stephen Rasey:

    1. There are 40,000 separate stations in the entire database.
    2. There are 20K stations in this CONUS database.

    GHCN Monthly is rarely used. We use GHCN Daily. Daily data is not adjusted.

    As for splitting: it depends on the time period.

  20. @Mosher:
    (On tablet so a bit terse…) The problems I see in your approach are twofold. First, the use of averages of intensive properties is philosophically broken (devoid of meaning; link in prior posting) and this cannot be avoided. Second, splicing induces splice artifacts. You splice more (nicely, and indirectly, after dicing…) so will have more splice effect, not less. These are both endemic and real problems. You find a trend that is not in individual long-lived stations. Something is amiss.

  21. @Mosher at 7:29 pm
    When you have a station that moves from being located at 1000 feet ASL to 0 feet ASL you have 2 stations.
    Granted. If you have the metadata to support it.

    How many breakpoints are in the data base?
    For how many breakpoints do you have verified metadata that supports ending one station and starting a new one?

    BEST does it backwards. It doesn’t have the metadata. So it looks for likely breakpoints in the data and regional kriging without knowing the physical cause.
    What if the breakpoint shift is really from a recalibration?

    Finally, we conduct a double blind test to prove out the method.
    Link please.
    I remember reading one paper from a year ago about a blind test, but that was on the kriging process on unsliced, synthetic data.

    Rasey Jan 21, 2013
    The Rohde 2013 paper uses synthetic error free data. The scalpel is not mentioned.

    http://berkeleyearth.org/pdf/robert-rohde-memo.pdf

  22. Here is the USA data using GHCN-D data that has not been homogenized. Clearly warming is cyclical and not driven by CO2, as the current peaks are similar to the 30s and 40s. http://landscapesandcycles.net/image/75158484_scaled_590x444.png

    The question with any re-analysis, or any other data assimilation method, is how it treats “change points”. As I documented in Unwarranted Temperature Adjustments: Conspiracy or Ignorance? http://landscapesandcycles.net/why-unwarranted-temperature-adjustments-.html , many data sets are adjusted because natural change points due to the Pacific Decadal Oscillation and other natural cycles are misinterpreted. The homogenization process creates new trends and treats those natural change points as “undocumented station changes”. Whether it is the GISS datasets or Berkeley, those “change point” adjustments need to be critically analyzed.

  23. @E.M.Smith at 7:48 pm
    Not to mention that BEST is proud of being able to work with short segments and use the least-squares slopes.

    Never mind that as the length of the segment gets shorter, the slopes will get larger, either positive or negative; just look at the denominator of the slope function. The uncertainty in the slope goes through the roof. How do they use slope uncertainty in their kriging?

    If you have 40 years of a record, what is the longest-duration climate signal you might tease out? Fourier says only 40 years, but with a principal-frequency analysis you might tease out an 80-year hint, assuming a half cycle contains a significant amount of power. Now, you take the scalpel and cut it somewhere in half with an unspecified breakpoint shift. You want to tell me that you can still estimate an 80-year cycle from two uncorrelated 20-year signals? You’d be lucky to get back to the 40-year signal you left on the cutting room floor.

    Short segments mean short wavelengths: loss of long-term trend. High-frequency content cannot predict low-frequency content. I don’t care how many thermometers you krige. And kriging with thermometers 500 to 2000 km away from the station doesn’t instill confidence in any of it. (See Iceland.)
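The point about slope uncertainty on short segments is easy to demonstrate numerically. This is a sketch on purely synthetic data (the 0.01 °C/yr trend and 0.5 °C noise level are made-up illustrative values): for white noise of standard deviation σ, the standard error of an OLS slope grows roughly as σ√12 / n^1.5, so shorter segments scatter far more widely around the true trend.

```python
import numpy as np

rng = np.random.default_rng(42)
true_slope = 0.01   # degC/year, an illustrative trend
noise_sd = 0.5      # degC, illustrative year-to-year noise

def slope_sd(seg_len, trials=2000):
    """Empirical spread (std) of OLS slopes fitted to noisy segments
    of length seg_len, all drawn from the same underlying trend."""
    t = np.arange(seg_len)
    slopes = [np.polyfit(t, true_slope * t + rng.normal(0, noise_sd, seg_len), 1)[0]
              for _ in range(trials)]
    return float(np.std(slopes))

# The spread of fitted slopes grows ~n^-1.5 as segments shrink,
# quickly swamping the 0.01/yr trend we are trying to recover.
for n in (100, 40, 20, 10):
    print(f"segment length {n:3d}: slope std {slope_sd(n):.4f}")
```

Halving a segment roughly triples the slope uncertainty (2^1.5 ≈ 2.8), which is the commenter's "denominator of the slope function" point in numerical form.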

  24. This is a perfect opportunity for WUWT-TV….. Anthony?

    Just sayin’, how many emails can one avoid with a short “live” discussion nowadays?

  25. Steven Mosher says:
    ***When you have a station that moves from being located at 1000 feet ASL
    to 0 feet ASL you have 2 stations. treating these two stations as one is wrong.
    Further adjusting this station is also filled with error.**
    However, you now do not have history. So you cannot say you have a 100-year record, for example. Continuous records show trends, or the non-existence of trends, until they are “adjusted”.

  26. From the poster …

    “Stations outside U.S. boarders are also used in detecting inhomogenities and in the construction of temperature fields”

    First – does spelling count? Anyone care to join Climatologists Without Boarders?

    Second – there is a rapid drop-off in densities over the Mexican border vs higher densities over the Canadian border. Can one expect the same high-resolution accuracy for San Diego / San Ysidro as for, say, Detroit?

  27. Here are a couple of links back to “Berkeley Earth Finally Makes Peer Review…”, Jan 19, 2013.
    This is for reference back to previous discussions on the same topic of the scalpel and kriging.

    Phil 1/21/13 8:36 pm
    Restates Fourier argument that the scalpel loses long wavelengths.
    How is BEST not effectively a sort of statistical homeopathy? Homeopathy, as I understand it, is basically taking a supposedly active ingredient and diluting it multiple times until almost none of it is left and then marketing it as a cure for various ailments.

    Willis 1/22/13 12:41 am
    Rebuttal of Phil.
    As long as there is some overlap between the fragments, we can reconstruct the original signal exactly, 100% correctly.

    Phil 1/22/13 3:04am, reply to Willis 12:41 am
    First, the problem is that there has to be some credible basis on which to reconstruct the original signal, such as the overlap you mention. The problem is that, as I understand the scalpel (and I may not understand it correctly), by definition there isn’t going to be an overlap between fragments of the record of a given station
    “Is it the data talking or the seemstress?”

    Willis 1/22/13 10:32 am reply to Phil 3:04 am
    However, you seem to think that we are trying to reconstruct an individual station. We’re not. We’re looking for larger averages … and those larger averages perforce contain the overlaps we need. However, the scalpel doesn’t use “neighboring stations” to put them back together. Instead, it uses kriging to reconstruct the original temperature field.

    …Look, Phil, the scalpel method has problems, like every other method you might use. But that doesn’t make it inferior to the others as you seem to [think].

    I post. Willis comments. Then I work on a Band-Pass signal and seismic inversion argument.

    Rasey 1/22/13 12:05 pm
    BEST, through the use of the scalpel, shorter record lengths, homogenization and kriging, is honoring the fitted slope of the segments, the relative changes, more than the actual temperatures. By doing that, BEST is turning Low-Pass Temperature records into Band-Pass relative temperature segments.

    With Band-Pass signals, you necessarily get instrument drift over time without some other data to provide the low frequency control. I suspect BEST has instrument drift as a direct consequence of throwing away the low frequencies and giving low priority to actual temperatures.

    I then gave an illustration of seismic inversion.
    … It is possible to integrate the reflectivity profile to get an Impedance profile, but because the original signal is band-limited, there is great drift [over time], accumulating error, in that integrated profile. The seismic industry gets around that drift problem by superimposing a separate low frequency, low resolution information source, the stacking or migration velocity profile estimated in the course of removing Source-Receiver Offset differences and migrating events into place. …

    In a similar vein, BEST integrates scalpeled band-pass short term temperature difference profiles, to estimate total temperature differences over a time-span. Unless BEST has a separate source to provide low-frequency data to control drift, then BEST’s integrated temperature profile will contain drift indistinguishable from a climate signal.

    If BEST has a separate source to provide low-frequency control, I still don’t know what it would be that they haven’t already minced.
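The drift argument can be illustrated with a toy random-walk model (entirely synthetic; the 0.1 °C join-offset error is a made-up illustrative number): if each scalpel join contributes an independent error in the estimated offset between segments, the re-stitched record acquires a spurious low-frequency drift that grows like the square root of the number of joins, even when the true record is perfectly flat.

```python
import numpy as np

rng = np.random.default_rng(1)
n_seg, offset_noise = 50, 0.1   # 50 segments, 0.1 degC error per join

def stitched_drift():
    """One realization: cut a flat record into segments, discard each
    segment's absolute level (the scalpel analogy), re-stitch with a
    noisy join offset, and return the spurious end-to-end drift."""
    level, levels = 0.0, []
    for _ in range(n_seg):
        level += rng.normal(0, offset_noise)   # error accumulates at each join
        levels.append(level)
    return levels[-1] - levels[0]

drifts = [stitched_drift() for _ in range(2000)]
rms = float(np.sqrt(np.mean(np.square(drifts))))
expected = offset_noise * np.sqrt(n_seg - 1)   # random-walk growth ~ sqrt(#joins)
print(f"RMS spurious drift: {rms:.2f} degC (random-walk prediction: {expected:.2f})")
```

The toy model says nothing about whether BEST's joins actually have independent errors; it only shows why, if they do, the accumulated drift would be indistinguishable from a slow climate signal without a separate low-frequency constraint.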

  28. [snip - enough of this name calling/bashing of people you don't agree with. You don't even have the courage to put your own name to your own words while bashing somebody who does and who has done some actual work. My advice - cool it or I'll do it for you - Anthony]

  29. It does not matter who explains it and how many times it is repeated. Anyone with a modicum of integrity has to know that this is a bullshit methodology that has repeatedly given an inflated result.

    Anyone who persists in arguing otherwise has an agenda, personal or that of an employer, which requires them to do so.

    Stand up and be men. Just say we don’t know because there are not enough data. So your averaged best guess is as good as mine.

  30. @Steven Mosher at 7:42 pm
    1. There are 40000 separate stations in the entire database.
    2. there are 20K stations in this CONUS database

    GHCN Monthly is rarely used. We use GHCN DAILY. daily data is not adjusted.

    GHCN-Daily now contains records from over 75,000 stations in 180 countries and territories. Numerous daily variables are provided, including maximum and minimum temperature, total daily precipitation, snowfall, and snow depth; however, about two thirds of the stations report precipitation only. Both the record length and period of record vary by station and cover intervals ranging from less than a year to more than 175 years. (Source: ncdc.noaa.gov)

    A third of 75,000 stations is a lot less than 40,000.

    GHCN-M Version 3.2.0 contains monthly climate data from weather stations worldwide. Monthly mean temperature data are available for 7,280 stations, with homogeneity-adjusted data available for a subset (5,206 mean temperature stations)

    Mosher, your 40,000 number doesn’t make sense if “about two-thirds” of 75000 are precipitation only. You are also asking me to believe that GHCN Monthly has significantly fewer stations than GHCN Daily temperature stations. I need to see a link on that.

    I would also very much like to see a distribution of the length of station records in the database and a distribution of the length of records after they have been through the scalpel.

  31. Stephen Rasey says :

    I would also very much like to see a distribution of the length of station records in the database and a distribution of the length of records after they have been through the scalpel.

    I would also like to see this. But for now I’ll settle for the more likely scenario of faerie folk farting rainbows, and world peace. (The world peace being a direct result of the atmospheric increase in gaseous rainbows; I was not being greedy by asking for two separate items.)

  32. At around 2 minutes 50 seconds into the video, Zeke says that their results were much more consistent with reanalysis products… This evidently gives them confidence they did it right. What are the reanalysis products? And how are they right?

  33. reposted so I can follow the topic:
    Mario Lento says:
    December 18, 2013 at 10:04 pm
    At around 2 minutes 50 seconds into the video, Zeke says that their results were much more consistent with reanalysis products… This evidently gives them confidence they did it right. What are the reanalysis products? And how are they right?

  34. In addition to my previous comments: I know full well that Anthony has no truck with folk who criticise whilst posting “anonymously”. I’ve made it clear before that I post as I do from habit rather than a wish to hide. My name is Craig Frier, should anyone ever take issue with a comment of mine. I’m more than happy to provide an address should anyone wish to berate me in person.

  35. @Reg Nelson,

    “Yet, this study seems to focus only on the US weather stations. Am I missing something?”

    I noticed this earlier. It looks as though global warming affects only Americans, and the other 95% of us don’t have to worry.

  36. Yes, @RioHa and @Reg Nelson. Whilst the continental US has to deal with the despicable affliction of global warming, the rest of us (the likes of myself in the North West of the United Kingdom, waking up to no power after a windy night, for example) have to deal with weather, because it’s not part of the program to include those of us moving back to Arctic conditions.

  37. Why did they start at 1850? Is it because it was the best year to start the record due to temperature measuring sites, quality of data, or because that was the end of the Little Ice Age? On that note, I have talked with some old timers in my area (east Texas) and they told me that when their ancestors settled our area in the early 1800s, the ecosystem was mainly grass plains and some savanna, but now our entire area is heavily forested (except where cleared by man). These forests grew since the end of the LIA, which means there was a significant climate change between then and now, it being both warmer and wetter now. So, why was the year 1850 chosen when that is precisely when North America started recovering from the LIA?

  38. This is all a waste of time. The 100 most pristine stations in the world (unmoved, no time-of-day changes, totally rural, unhomogenized, no station-type changes) would give a more real picture of the global temperature trend than all of this super-adjusted and manipulated nonsense.

  39. This is yet another demonstration of how WUWT is the new (and far superior) normal for
    “peer review”. This is the open, global, immediate and unlimited way to scrutinize that which needs scrutinizing, resulting in the removal of any potential for the snow jobs and pal review produced by the now obsolete journal “peer review”.

    This is what science in all forms should strive to achieve for the sake of the highest quality outcomes. The web provides a level of participation which cannot be marginalized by those clinging to what they hoped to continue controlling to benefit their interests at the expense of progress.

  40. Tilo,
    I have often wondered the same. It would surely give an accurate record. Looking at some remote Australian stations, i.e. in country areas where growth has been minimal, the record shows either no warming or, in a number of cases, slight cooling. Obviously this is a real problem for a warmist organisation like our BoM, so they adjust until they get warming. When taken to task via a request to the Auditor-General, they scrapped that data series and started another. They must have felt threatened.

  41. Lawrie,

    You can’t get perfection. For example, peeling paint can change readings. But I think what I mentioned above would give us a truer picture. Several thousand readings with multiple layers of adjustments just have no value at all in my mind. And then BEST actually came up with a result that there is no UHI effect. But many other studies have proven that there is, by direct empirical comparison of cities and their immediate countryside. Heck, I can confirm that on my car thermometer. In my mind BEST and GISS are a total waste of time for even more reasons than I have already given. We should move on and let the alarmist faithful cling to their security blankets.

  42. The bottom line is that “BEST” is using surface station data to try to prove a warming trend in climate of less than 1C. However, the data is unfit for purpose. No amount of slicing, dicing or homogenisation can ever fix it.

    The egg is scrambled and no amount of extra time in the blender can unscramble it.

    Anthony’s surface stations project highlighted the extent of the problem. Without individual station metadata, there is no hope of applying the necessary corrections to individual station records. This data would have to record numerous micro and macro site factors for each station and span the full period of the station record. This data does not exist for a sufficient number of stations.

    We are well past the point in the climate debate where the attempted use of surface station data speaks to motivation. It now speaks very, very loudly.

  43. One of Zeke’s comments that struck me as curious was his saying that their new time series agrees with ‘reanalysis’ model output more than it does with other datasets.

    He concludes that this gives them more confidence that they are finding physically real phenomena.

    Now, to my ear, that says he has a basic belief that models are a more accurate reflection of physical climate than the data they are supposed to be modelling, and that if BEST is nearer to model output it is better than existing datasets.

    So now datasets are being created and the criterion of their merit is whether they get close to reflecting what models produce.

    i.e. more of “make the data fit the models”, not “make the models fit the data”.

    Problem !!

  44. Sorry, I do not believe this data. The Historical REAL data used up to the 1990s definitely showed the 1930/40s as being much warmer than they are shown in the current BEST (and GISS) data.
    I can see no justification for cooling history, it is an insult to the people who took those readings.

  45. Ryan Scott Welch says:
    Why did they start at 1850? Is it because it was the best year to start the record due to temperature measuring sites, quality of data, or because that was the end of the Little Ice Age?

    ===

    It is also the part of the climate record which nicely fits a quadratic (or exponential) rise:

    http://climategrog.wordpress.com/?attachment_id=746

    In fact the earlier part could also fit the same quadratic but not one that can be linked to AGW.

    This is why all the “bias corrections” reduce earlier variability and cool the 19th century.

  46. The BEST data is in fairly good agreement with other sources since 1910.

    http://www.woodfortrees.org/plot/hadcrut4gl/mean:180/mean:149/mean:123/from:1910/plot/crutem4vgl/mean:180/mean:149/mean:123/from:1910/plot/best/mean:180/mean:149/mean:123/from:1910/plot/gistemp/mean:180/mean:149/mean:123/from:1910

    Fig 1. HadCrut4, CruTem4, BEST, GisTemp, Gaussian low-pass filtered (since 1910)

    There is very poor agreement between any of the data sets prior to this time however.

    http://www.woodfortrees.org/plot/hadcrut4gl/mean:180/mean:149/mean:123/plot/crutem4vgl/mean:180/mean:149/mean:123/plot/best/mean:180/mean:149/mean:123/plot/gistemp/mean:180/mean:149/mean:123

    Fig 2. HadCrut4, CruTem4, BEST, GisTemp, Gaussian low-pass filtered (whole record).

    Unless there is some good explanation as to why this relationship breaks down at that point in time, the data, and the results derived from that data, must be in question.

    The question as to why the well-observed ~60-year cycle disappears in the early part of the BEST record is, I think, worth answering.
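The mean:180/mean:149/mean:123 strings in the woodfortrees links above are three cascaded running means. Stacking boxcars of slightly different lengths is a common trick (sometimes called a triple running mean) because the cascade approximates a Gaussian low-pass and suppresses the negative side lobes a single boxcar filter has. A minimal sketch of the same chain on synthetic monthly data:

```python
import numpy as np

def running_mean(x, n):
    """Centered boxcar mean of window n (valid samples only)."""
    return np.convolve(x, np.ones(n) / n, mode="valid")

def triple_running_mean(x, windows=(180, 149, 123)):
    """Cascade of three running means, mirroring the
    mean:180/mean:149/mean:123 chain used in the plots above."""
    for n in windows:
        x = running_mean(x, n)
    return x

# A 70-month cycle sits well inside the stop band and is
# almost completely removed by the cascade.
t = np.arange(12 * 100)                  # 100 years of monthly samples
cycle = np.sin(2 * np.pi * t / 70)
smoothed = triple_running_mean(cycle)
print(f"residual amplitude: {np.abs(smoothed).max():.4f}")
```

Each `mode="valid"` pass shortens the series by one window minus one sample, which is why heavily filtered woodfortrees curves lose their endpoints.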

  47. A few days ago Steven Mosher said something along the lines of “…we do not create gridded data; there is raw data that is crap; we slice it and locate breaks and then calculate a field…” in answer to someone (not sure if it was here or on another site) who was critical of the BEST project methodology.
    I had been fooling around with older data from the weather station in a place named Stykkishólmur in Iceland; monthly data back to 1830 are available for that station from the Icelandic Meteorological Office, and I had the portion covering 1830–1948 (incl.) in a file on my desktop, so I went to the BEST website and pulled down their data for that station for a quick comparison. Browsing through both time series side by side showed that the monthly averages were in good agreement most of the time, and in fact had the same value with the same single-decimal precision. However, I also noticed that there were a few divergences, and all of them seemed to be caused by there being a different sign on the BEST and IMO values (absolute values for both agreed) for the months in question, so I filtered those instances out. Here below is a table of the result (I cross my fingers and hope WordPress does not mangle the formatting):

    Comparison of Iceland Met Office (IMO) data for Stykkishólmur
    with the BEST project raw input data for the same station,
    period 1830 to 1948 (all temperature values in °C):

    Year Month  BEST Temp  IMO Temp  Raw diff. (IMO-BEST)  Bias direction  Divergence No.
    1884 3 -1.6 1.6 3.2 Lower BEST 1
    1905 3 -1.0 1.0 2.0 Lower BEST 2
    1908 3 -0.3 0.3 0.6 Lower BEST 3
    1911 3 -0.3 0.3 0.6 Lower BEST 4
    1918 3 -0.3 0.3 0.6 Lower BEST 5
    1922 1 -0.1 0.1 0.2 Lower BEST 6
    1922 3 -0.9 0.9 1.8 Lower BEST 7
    1923 3 -3.7 3.7 7.4 Lower BEST 8
    1924 1 -0.3 0.3 0.6 Lower BEST 9
    1926 1 -0.2 0.2 0.4 Lower BEST 10
    1927 3 -2.0 2.0 4.0 Lower BEST 11
    1928 3 -1.3 1.3 2.6 Lower BEST 12
    1929 1 -1.8 1.8 3.6 Lower BEST 13
    1929 3 -5.4 5.4 10.8 Lower BEST 14
    1932 3 -1.7 1.7 3.4 Lower BEST 15
    1933 1 -0.9 0.9 1.8 Lower BEST 16
    1933 3 -0.4 0.4 0.8 Lower BEST 17
    1935 1 -1.6 1.6 3.2 Lower BEST 18
    1935 3 -1.8 1.8 3.6 Lower BEST 19
    1939 3 -1.9 1.9 3.8 Lower BEST 20
    1940 1 -0.6 0.6 1.2 Lower BEST 21
    1941 3 -0.3 0.3 0.6 Lower BEST 22
    1942 1 -1.5 1.5 3.0 Lower BEST 23
    1942 3 -1.3 1.3 2.6 Lower BEST 24
    1944 3 -0.6 0.6 1.2 Lower BEST 25
    1945 3 -2.9 2.9 5.8 Lower BEST 26
    1946 1 -2.5 2.5 5.0 Lower BEST 27
    1946 3 -1.5 1.5 3.0 Lower BEST 28
    1947 1 -3.0 3.0 6.0 Lower BEST 29
    1948 3 -2.8 2.8 5.6 Lower BEST 30

    So a total of 30 monthly values were different, 29 of them in the period from 1905 to 1948, the differences ranging from 0.2 to 10.8 degrees, all occurring in either the 1st or the 3rd month of the year, and the bias being unidirectional in that BEST always has the negative value. Rather strange, is it not?

    The BEST data table has columns for failed quality checks (QC fail) and continuity breaks, where I believe a non-zero value indicates that something is awry; in all the instances above those columns had zero values, which I take as a sign that the BEST code had accepted its raw input as good and valid. Following those two columns there are columns for adjusted temperatures and the corresponding calculated anomaly, and also something they call regional expectations, again giving both a temperature value and a corresponding anomaly. I did not go much further with this; I just took the sums of the divergences in the temperature columns and verified that the BEST routines carry most of it through to their final anomaly result.

    The sum of the above differences for those 30 months is 89, which, divided out and rounded, shows that BEST’s divergent monthly values run about 3°C cooler than IMO’s. That of course does not have any big effect on the average over the 1,428 months covered by the whole period. But if we discard everything prior to 1905 and look only at that period, we have 528 months of which 29 are each off by ~3°C; adding the fact that the annual average for the place in that period is 3.3 degrees, this gives around a 0.16°C lowering of the average from using the BEST data instead of IMO’s for the first half of the 20th century. That’s surely large enough to affect the local trend at least.

    Now I also did a similar quick check on the latter half of the 20th century, and this time BEST was running hotter than IMO: the sum of the differences was almost twice as great, around 160°C in total, and in all but 4 instances BEST carried the higher value, though this time not because of opposite signs in the data. Starting in 1949 BEST’s monthly data is given to 3-decimal precision, while IMO gives only one decimal, and nearly every month has some difference, but each time a small one: the first decimal is mostly a zero, more rarely a 1, never higher. So I suspect that BEST is simply using the available daily high-low data to calculate daily averages and then monthlies from those, while IMO perhaps uses 3 or 6 readings evenly spaced in time, or perhaps the Swedish-type 6-12-6 plus high and low, for their average calculations. I am not privy to what is in use, but I think the high-low mean has not been their base method for calculating the daily mean for a long way back.

    But anyway, this is but a single station and as such does not affect the whole picture much. It might also be said, though, that if, as we have here for the first half of the 20th century, possibly one out of every 19 monthly values is 3°C out of whack, then in the worst case there could be (using Zeke’s quoted 40,000+ number of stations and, say, a 250-year (1750-2000) continuous history) something like 120 million seriously crappy monthly means in the BEST database’s final values. (I do not really think it is that serious; I am just pointing out that the possibility exists, and could not resist the poke {:-)}.)

    And also: I know something of the history of this particular station, e.g. that it was started by the local merchant, who it seems had become interested in meteorology; when it later became an official weather station in the Danish weather service register, they contracted the job of keeping and maintaining it to the merchant and incorporated his older records into their register. I have been told that he was a very meticulous recorder, and among other things he had not one but three then state-of-the-art thermometers (one with a Réaumur scale, another with a Fahrenheit scale, and the third with a Celsius scale), and his records had a column for each one. I also have the impression that the old books have been thoroughly verified by some of the meteorologists at the IMO who are interested in the local weather history. So while I agree with Mosher that raw data is in many ways crap, I suspect that in this particular case at least the BEST raw data is maybe more of a crap than the offering from IMO, and perhaps the quality-check procedures of his pet project need some revision and extension. That of course is something of a gut feeling rather than a 100% verified fact. And I think that when you have crappy input data, it is of only limited value to use mechanical methods alone to sort it out.
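The sign-divergence filter described in the comment above can be reproduced in a few lines. This is a sketch, not the commenter's actual procedure; it assumes the two monthly series have already been parsed into `{(year, month): temperature}` dictionaries:

```python
def sign_divergences(best, imo, tol=0.05):
    """List months where the two series agree in absolute value (within
    tol) but differ in sign, i.e. the pattern shown in the table above."""
    out = []
    for key in sorted(set(best) & set(imo)):
        b, m = best[key], imo[key]
        if b * m < 0 and abs(abs(b) - abs(m)) <= tol:
            out.append((key[0], key[1], b, m))
    return out

# Toy data using two rows from the table above plus one agreeing month:
best = {(1884, 3): -1.6, (1885, 3): 2.0, (1929, 3): -5.4}
imo = {(1884, 3): 1.6, (1885, 3): 2.0, (1929, 3): 5.4}
# sign_divergences(best, imo) -> [(1884, 3, -1.6, 1.6), (1929, 3, -5.4, 5.4)]
```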

  48. Stephan Rasey says: ” Now, you take the scalpel and cut it somewhere in half with an unspecified breakpoint shift. You want to tell me that you can still estimate an 80 year cycle from two uncorrelated 20 year signals? You’d be lucky to get back to the 40 year signal you left on the cutting room floor.”

    I have the same reservations about the BEST method.

    I don’t think it is a coincidence that it shows less variability than other datasets. For example, in their global dataset the 1998 El Niño is barely noticeable.

    I wanted to investigate the BEST method for this effect when it first came out but they only provided massive files that could not even be loaded on a PC. Despite the claims of total openness, this effectively meant it was not available to be checked.
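The “scalpel” at issue cuts a record wherever a step change relative to the regional expectation is detected, rather than adjusting the record. A toy illustration of empirical step detection (purely illustrative; the actual BEST algorithm fits stations and breakpoints jointly):

```python
def detect_step(station, reference, min_jump=1.0, window=12):
    """Return the index of the largest shift (> min_jump) in the mean of
    the station-minus-reference difference series, comparing the windows
    just before and just after each candidate point; None if no shift."""
    diff = [s - r for s, r in zip(station, reference)]
    best_i, best_jump = None, min_jump
    for i in range(window, len(diff) - window):
        before = sum(diff[i - window:i]) / window
        after = sum(diff[i:i + window]) / window
        if abs(after - before) > best_jump:
            best_i, best_jump = i, abs(after - before)
    return best_i

# A 2-degree step introduced after month 24 of a flat series:
station = [0.0] * 24 + [2.0] * 24
reference = [0.0] * 48
# detect_step(station, reference) -> 24
```

Cutting the record at the returned index, as the scalpel does, leaves two segments whose relative level must then be re-estimated, which is exactly the step the comment worries may erase long-period variability.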

  49. The central panel of the poster (PDF version) is quite informative.

    NCDC map looks blurred, PRISM looks sharp and BEST looks flat.

    It’s interesting that with high temporal and spatial resolution they manage to remove so much detail.

  50. Where does UHI fit into all of this? The UHI effect is gradual and does not create a breakpoint.

    Is there a danger that the relatively few, genuinely rural sites are actually adjusted up to urban trends?

  51. The fact is global temperatures have been flat for the past 17 years. There is no warming even using all the fiddled data from HadCRUT, GISS etc. The BEST project was just an exercise in futility, trying to flog a dead horse, just like all the adjustments done by GISS; just look at Steven Goddard’s endless graphs of manipulated USA data. LOL. RSS and UAH show no warming at all for the tropics and SH, i.e. no global warming since 1979 either. LOL

  52. Ryan Welch: I really don’t know that much about west Texas and the plains there. I’m not arguing that the climate hasn’t changed. However, it seems to me that the time period that you are referring to also coincides with the removal of large herds of large grazing animals. Is it possible that fencing, farming, and reduced numbers of hoofed animals caused a lot of what the farmers and their ancestors saw?

    “The fact is global temperatures have been flat for the past 17 years. There is no warming even using all the fiddled data from HadCRUT, GISS etc. The BEST project was just an exercise in futility, trying to flog a dead horse, just like all the adjustments done by GISS; just look at Steven Goddard’s endless graphs of manipulated USA data. LOL. RSS and UAH show no warming at all for the tropics and SH, i.e. no global warming since 1979 either. LOL”

    1. Yes, the temperatures have been flattish for 17 years. People are relying on the accuracy of our method to make the ‘flat’ claim. Get it? The whole claim that there is a pause DEPENDS on us doing things correctly.

    2. Goddard is wrong. He neglects to mention that

    A) the data in his comparisons are two entirely different datasets
    B) the results he compares use two different algorithms
    C) Hansen’s results can be replicated using entirely different data and different methods

    3. We agree with UAH. That’s the point of this poster. When you trust them, you vindicate us.

  54. “Paul Homewood says:
    December 19, 2013 at 3:17 am
    Where does UHI fit into all of this? The UHI effect is gradual and does not create a breakpoint.

    Is there a danger that the relatively few, genuinely rural sites are actually adjusted up to urban trends?

    ######################################

    1. You assume that UHI is gradual. That has never been established. The biggest ef
    2. In previous work we showed that UHI in the US was something on the order of 0.04°C per decade. That is not the point of this dataset; after minimizing the error in the weather field, this bias is taken toward zero.
    3. Since we agree with UAH, and UAH has no UHI, you can draw a logical conclusion.

  55. Mosher, your 40,000 number doesn’t make sense if “about two-thirds” of 75000 are precipitation only. You are also asking me to believe that GHCN Monthly has significantly fewer stations than GHCN Daily temperature stations. I need to see a link on that.

    ##############################################

    MORON.

    For the world after deduplication there are roughly 40,000 stations.

    1. GHCN Daily has about 75K stations, 2/3 of which are precipitation only.
    Of the 50K stations with temperature data, 20K are in the US. Not all can be used
    because some have only a few months of data.

    2. GHCN Monthly has only 7k stations.

    http://cdiac.ornl.gov/epubs/ndp/ndp041/ndp041.html

    Or you could just use the free software I’ve provided to download both daily and monthly data.

    GHCN Monthly has fewer stations for historical reasons. That dataset will go away over time
    and I suspect a new monthly dataset will be built from daily.

    ##################################################

    I would also very much like to see a distribution of the length of station records in the database, and a distribution of the length of records after they have been through the scalpel.

    Download the datasets and compute. get off your ass

  56. “I wanted to investigate the BEST method for this effect when it first came out but they only provided massive files that could not even be loaded on a PC. Despite the claims of total openness, this effectively meant it was not available to be checked.”

    Before I actually joined the BEST team I wrote software to download and use it on a PC. It takes up about 2GB of memory. Maybe you don’t know what you are doing.

  57. Ryan Scott Welch says:
    Why did they start at 1850? Is it because it was the best year to start the record due to temperature measuring sites, quality of data, or because that was the end of the Little Ice Age?

    ###########################

    These are CONUS sites. We took the record back as far as we could go given the method.

  58. “A C Osborn says:
    December 19, 2013 at 2:04 am
    Sorry, I do not believe this data. The Historical REAL data used up to the 1990s definitely showed the 1930/40s as being much warmer than they are shown in the current BEST (and GISS) data.
    I can see no justification for cooling history, it is an insult to the people who took those readings.

    ##############################

    1. The notion that the 30s-40s were warmer comes from early work done by Hansen.
    2. That work is known to be wrong.

    A. Hansen stitched together stations that were NOT the same station. This is called
    the ‘reference station method’. Stitching together multiple stations simply because they
    are within 20 km of each other is a mistake.

    B. The data use was corrupted by changes in methods of observations.

    Now of course we can go back to what the actual records were. We avoid using the contaminated data that Hansen used. We avoid mashing together stations simply because they are close to each other. We avoid using monthly data and go back to daily.

  59. Hi Bjorn,

    Here is the detailed Berkeley analysis of that station in Iceland: http://berkeleyearth.lbl.gov/stations/155466

    You can see that two breakpoints are detected: one empirically detected around 1860 and a second associated with a documented station move around 1940. The net effect of these two breakpoint adjustments is to slightly decrease the trend relative to the raw data.

    Stephen Rasey,

    The answer is (A), we use 40,747 independent temperature sensors (though not all have long periods of coverage). You can find all individual records used on our website, and download the raw data: http://berkeleyearth.org/source-files

    Paul Homewood,

    UHI is important, and as far as we can tell seems to show up as breakpoints that are mostly removed by homogenization. In our recent JGR paper we looked in depth if it were possible for urban stations to be “spreading” warmth to rural stations during homogenization. We did a test where we only used rural stations (and ignored urban ones) when detecting and correcting for breakpoints. The results were by-and-large identical, indicating that this is not occurring in practice (though, interestingly enough, using only urban stations to homogenize all stations did introduce some spurious warming). You can find our paper here: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/papers/hausfather-etal2013.pdf

    Greg,

    Berkeley has a lot of high-resolution spatial features for absolute temperatures and single-month anomalies. Our trend fields are much smoother, but we argue that they reflect underlying climate changes, which are more regional than local. The idea that two locations within 50 kilometers of each other might have warmed 1.5 degrees and cooled 1.5 degrees, respectively, over a 30-year period is rather hard to believe to be physically plausible. It’s much more likely that biases due to station moves, TOBs changes, and instrument changes over the last 30 years added a large amount of noise relative to the (smaller) trend signal.

    • Zeke/Mosher I think your views of UHI are just as stunted as David Parker.

      From:

      Urban Heat Island Modeling in Conjunction with Satellite-Derived
      Surface/Soil Parameters
      JAN HAFNER AND STANLEY Q. KIDDER
      1998 Journal of Applied Meteorology

      They write:

      Further, the UHI effect is related to the problem of global warming. The majority of weather stations are in cities or nearby; thus, the temperature trend, which is the signature of global warming, can be influenced by urbanization. The effect of urbanization on global temperature trends has been investigated by, for example, Feng and Petzold (1988), Karl et al. (1988), and Karl and Jones (1989), among others. Karl and Jones (1989) found the urban bias in the U.S. climate records (1901–84) to be about 0.06°C for the annual mean and 0.13°C for the daily minima. This negated the U.S. global warming trend.

      “UHI is important, and as far as we can tell seems to show up as breakpoints that are mostly removed by homogenization. ”

      UHI isn’t a step function as Zeke asserts, that’s a siting or equipment change issue.

      UHI is a slow signal, separate from siting and equipment. It doesn’t show up in fixing breakpoints, the temporal resolution of that method is all wrong.

      I still think your paper on homogenization is crap. My findings suggest homogenization is nothing more than a data blender that really doesn’t fix all of the problems it purports to.

      Why try to salvage compromised data by mixing it with good data? In forensics, that would get thrown out of court. In climate science, entire careers are made trying to make use of compromised data. Gavin once said we could monitor 50 stations and get a useful global temperature. Why not simply focus on finding the BEST stations (Yes, pun intended) and using those instead of playing mud pie maker with mixed provenance data?

      BEST really isn’t doing anything truly new here; it’s just another take on the methods and mixed data already in use.

  60. I would like to seize this opportunity and pose some questions to Steven Mosher:

    Here is the record #1 and here is the record #2 from the BE surfacestation pool.

    I would like to ask Steven Mosher:

    Why is the treatment of the records before 1950 different for the two records?

    Why are there only 6 breakpoints identified in record No. 1, while there are 9 for record No. 2?

    What is the justification for the 1 (record No. 1) and 3 (record No. 2) breakpoints identified early in the record, before 1810?

    What stations exactly were used to define the regional expectation for this period (1775-1810) and to identify the purported bias in the two records for the period?

    How does the method used justify the two different results?

    Is it the “raw data” which is (to use your own word) “crap” here, or is it the method?

    Why does BE use the double-merged record from Prague-Klementinum+Ruzyne+Libus (btw. the stations differ in altitude by almost 200 meters!) instead of the original Klementinum record, kept uninterrupted from 1775 until the very present?

    To explain: because I’ve researched this before* and have already found discrepancies with the GISS station pool, I know positively that IN REALITY the temperature time series in the two particular records No. 1 and No. 2 before 1950 SHOULD BE IDENTICAL.

    That’s because in fact there doesn’t exist any record for either the Prague-Ruzyne or the Prague-Libus station before 1950, so the part of record No. 2 before 1950 is and must be the record from the Prague-Klementinum station – one of the rarest and most unique temperature time series in the history of instrumental temperature measurement, which goes back to the 1700s (kept since 1771, uninterrupted since 1775) and is kept until today. (BTW, the warm decade 1790-1799 erased by BE is established not only by the Klementinum temperature record.)

    ——————————–
    *I note just BTW that my 200+ year UHI bias estimation for the Klementinum record linked above is extremely conservative, and in magnitude very similar to that estimated in the related literature just for the period 1922-95.

  61. Paul Homewood says:
    December 19, 2013 at 3:17 am
    Where does UHI fit into all of this? The UHI effect is gradual and does not create a breakpoint.

    Is there a danger that the relatively few, genuinely rural sites are actually adjusted up to urban trends?

    ######################################

    1. You assume that UHI is gradual. That has never been established. The biggest ef
    2. In previous work we showed that UHI in the US was something on the order of 0.04°C per decade. That is not the point of this dataset; after minimizing the error in the weather field, this bias is taken toward zero.
    3. Since we agree with UAH, and UAH has no UHI, you can draw a logical conclusion.

    Steve

    As UAH starts in 1979, it does not automatically follow that we can ignore UHI prior to 1979.

    You make an interesting comment about the 0.04C/decade. According to NCDC, the US warming trend since 1895 is 0.07C/decade, so you are suggesting over half is down to UHI?

    Paul
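The arithmetic behind Paul's closing question, taking both quoted figures at face value:

```python
uhi_trend = 0.04    # °C/decade, the quoted US UHI estimate
total_trend = 0.07  # °C/decade, the quoted NCDC US trend since 1895
fraction = uhi_trend / total_trend
# fraction ≈ 0.57, i.e. "over half", as the comment says
```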

  62. Zeke

    Here is the detailed Berkeley analysis of that station in Iceland: http://berkeleyearth.lbl.gov/stations/155466

    You can see that two breakpoints are detected; one empirically detected one ~1860 and a second associated with a documented station move around 1940. The net effect of these two breakpoint adjustments is to slightly decrease the trend relative to the raw data

    That’s interesting, because the GHCN set adds about half a degree of warming from 1965 on for Stykkisholmur.

    ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/products/stnplots/6/62004013000.gif

    They seem to have been confused by the well documented sharp drop in Icelandic temperatures then, a time known in Iceland as the “sea ice years”. The GHCN algorithm seems to think this was a measurement error.

  63. Anthony Watts says:
    December 19, 2013 at 9:15 am
    “… UHI isn’t a step function as Zeke asserts, that’s a siting or equipment change issue.

    UHI is a slow signal, separate from siting and equipment. It doesn’t show up in fixing breakpoints, the temporal resolution of that method is all wrong.

    I still think your paper on homogenization is crap. My findings suggest homogenization is nothing more than a data blender that really doesn’t fix all of the problems it purports to.”

    Why try to salvage compromised data by mixing it with good data? In forensics, that would get thrown out of court. In climate science, entire careers are made trying to make use of compromised data. Gavin once said we could monitor 50 stations and get a useful global temperature. Why not simply focus on finding the BEST stations (Yes, pun intended) and using those instead of playing mud pie maker with mixed provenance data?

    BEST really isn’t doing anything truly new here; it’s just another take on the methods and mixed data already in use.”
    ++++++++++++++++
    Thank you for posting this. I’d been trying to get Mosher to fess up, but he will not collaborate on seeking truth. The test that should give any rational person pause is this:
    1) Urban areas show more warming than average
    2) Rural areas show less warming than average

  64. Steven Mosher says:
    December 19, 2013 at 8:32 am (Edit)

    “A C Osborn says:
    December 19, 2013 at 2:04 am
    Sorry, I do not believe this data. The Historical REAL data used up to the 1990s definitely showed the 1930/40s as being much warmer than they are shown in the current BEST (and GISS) data.
    I can see no justification for cooling history, it is an insult to the people who took those readings.

    ##############################

    1. The notion that the 30s-40s were warmer comes from early work done by Hansen.
    2. That work is known to be wrong.

    A. Hansen stitched together stations that were NOT the same station. This is called
    the ‘reference station method’. Stitching together multiple stations simply because they
    are within 20 km of each other is a mistake.

    Steve

    Is there any reason, or logic, why all these “errors of Hansen” seem to have gone the same way and significantly overestimated past temperatures? Surely, statistically, they would be likely to cancel out?

  65. @ Mosher et al , Best data;

    “You can have your opinion about the data, but not your own data”: it appears you are making up data, “40,000 station record segments or station fragments from about 7,000 weather stations”?

  66. Paul Homewood says:
    December 19, 2013 at 9:59 am

    Zeke
    Here is the detailed Berkeley analysis of that station in Iceland: http://berkeleyearth.lbl.gov/stations/155466

    You can see that two breakpoints are detected; one empirically detected one ~1860 and a second associated with a documented station move around 1940. The net effect of these two breakpoint adjustments is to slightly decrease the trend relative to the raw data

    That’s interesting, because the GHCN set adds about half a degree of warming from 1965 on for Stykkisholmur.

    ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/products/stnplots/6/62004013000.gif

    They seem to have been confused by the well documented sharp drop in Icelandic temperatures then, a time known in Iceland as the “sea ice years”. The GHCN algorithm seems to think this was a measurement error.
    ++++++++
    History be damned. The past can be changed by creating new and improved literature. Some day, these folks will look back and reflect on their lives. They will have to come to terms with the deceit. If these people have a conscience they won’t be feeling any sense of self worth as their carbon 14 levels begin their half-life journey downwards.

  67. Sorry, Anthony, I can’t help chuckling at the way Zeke keeps referring to you as ‘So’ in his replies. (OK….I’ll go now….)

  68. “Hansen Stitched together stations that were NOT the same station.”

    In the Arctic there are no stations near each other for the stitching to be based on. The Arctic stations show the warming in the 1930s and 1940s was like it has been recently.

  69. Hi Anthony,

    I agree that not all UHI is step changes (there is likely a combination of trend biases and step changes). However, by comparing rural and urban stations we can bound the magnitude of UHI. Our paper found a bias of about 20% of the century scale min temperature trend in the raw/TOBs data that was eliminated post-homogenization. This is true even if you completely toss out urban stations and only use rural stations to homogenize and to construct a temperature record.

    We’ve looked in the past at well-sited stations (no detectable urbanization via MODIS, ISA, Nightlights, etc.) and found trends that are slightly lower than urban stations. Our poster at the AGU two years ago covered some of this work. UHI is certainly real, but its effect is relatively small compared to mean temperature rises, and it should in principle be detectable and removable.

    Modern networks like the CRN will help us determine going forward if actual temperatures are more comparable to homogenized datasets where breakpoints are detected and removed or raw datasets where they are not.

  70. Let’s use some real data from the region of Montreal, Canada. 1953 and 2010 are possibly the warmest on record; 1949 and 2012 are also warm; 2009 and 1954 were colder, with colder winters.

    Stations Distance, Elevation, GHCN
    McGill : ———- , 56.9m, CA007025280
    Airport : 13.8 km , 36.0m, CA007025250
    L’Assomption: 36.4 km , 21.0m, CA007014160

    1949 max, mean, min
    McGill : 12.1, 8.0, 3.8
    Airport : 12.3, 7.7, 3.0
    L’Assomption: 12.1, 6.7, 1.1

    1953 max, mean, min
    McGill : 12.5, 8.7, 4.9
    Airport : 12.9, 8.3, 3.7
    L’Assomption: 12.8, 7.1, 1.4

    1954 max, mean, min
    McGill : 10.4, 6.8, 3.3
    Airport : 10.5, 6.4, 2.2
    L’Assomption: 10.4, 5.1, -0.3

    2009 max, mean, min
    Airport : 11.2, 6.6, 2.1
    L’Assomption: 10.6, 5.3, -0.1

    2010 max, mean, min
    Airport : 12.7, 8.4, 4.1
    L’Assomption: 12.4, 7.3, 2.2

    2012 max, mean, min
    Airport : 13.3, 8.5, 3.7
    L’Assomption: 12.6, 7.2, 1.7

    McGill and L’Assomption have a difference of 1.5C during the 1950s. The airport has possibly warmed from more snow melting during the colder months. L’Assomption has colder winters and much less pavement around. The change associated with snow melting on streets and roofs could easily make a difference on distances greater than the height of the atmosphere, so the 2 sites could still be part of the same UHI.

    So how much time did it take for McGill to become 1.5C warmer than L’Assomption? How much warming does it suggest for the last 60 years? Don’t forget, L’Assomption already had some urban development in 1950 and America was discovered in 1492.
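The 1.5 °C figure can be checked directly from the annual means quoted above (McGill minus L'Assomption, for the three years where both are given):

```python
# Annual mean temperatures (°C) transcribed from the comment above.
means = {
    1949: {"McGill": 8.0, "LAssomption": 6.7},
    1953: {"McGill": 8.7, "LAssomption": 7.1},
    1954: {"McGill": 6.8, "LAssomption": 5.1},
}

# Per-year differences, and their average, rounded to one decimal.
diffs = {yr: round(v["McGill"] - v["LAssomption"], 1) for yr, v in means.items()}
avg = round(sum(diffs.values()) / len(diffs), 1)
# diffs -> {1949: 1.3, 1953: 1.6, 1954: 1.7}; avg -> 1.5
```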

  71. @Zeke Hausfather at 11:11 am
    There are 40,000 stations, not 40,000 fragments of 7,000 stations. You can find them all here: http://berkeleyearth.org/source-files

    Your link is to a page with 14 separate datasets, I guess each with a parallel set of data files such as site_stations.txt. Some records have -999 lat/lons.

    Do you have a link to some intermediate files where there is one table of station sites, with lat/longs keyed on station ID and data source?

  72. @Zeke Hausfather at 11:11 am
    Looking at just the Global Summary of Day TAVG and the Monthly Climatic Data of the World
    (the data_characteristics.txt files):
    There are 27,233 sites from the merge of these datasets.
    But 10,267 have fewer than 60 “Unique Times” (5 years),
    14,286 have fewer than 120 “Unique Times” (10 years),
    and only 805 have more than 600 “Unique Times” (50 years).
    2,885 sites have at least 360 Unique Times (30 years) with fewer than 100 missing values.

    40,000 stations? By some measure maybe, but not all stations are created equal. Fewer than 20% of them are longer than 30 years, and half are shorter than 10 years. These stats aren’t looking at the segments yet.

  73. Here is a map of the stations from the Global Summary of Day TAVG and the Monthly Climatic Data of the World: 25,569 sites, colored by elevation. No distinction yet between stations with 1 year of records or 50 years. That will come tomorrow.

  74. Steven Mosher says:
    December 19, 2013 at 8:23 am

    “I wanted to investigate the BEST method for this effect when it first came out but they only provided massive files that could not even be loaded on a PC. Despite the claims of total openness, this effectively meant it was not available to be checked.”

    Before I actually Joined the best team I wrote software to download and use it on a PC. takes up about 2GB of memory. maybe you dont know what you are doing
    —-

    Care to share the source code so that others with possibly less prior knowledge can also use it?

  75. Zeke Hausfather says:

    December 19, 2013 at 11:11 am

    “There are 40,000 stations, not 40,000 fragments of 7,000 stations. You can find them all here: http://berkeleyearth.org/source-files

    Care to do a plot of the number of unique IDs against data lengths, for those without serious database skills and resources?

  76. Stephen Rasey says:

    December 19, 2013 at 11:20 pm

    “No distinction yet between stations with 1 year of records or 50 years. That will come tomorrow”

    May I suggest a histogram of number of unique IDs against record length (in yearly buckets?)

    Now if we can only get a similar histogram for segment lengths (may have to be monthly!) as well then a much clearer picture may emerge.

  77. There are about 1200 USHCN stations, regarded as a long-record, high-quality dataset.

    What trends would we get if we just used them?

    By adding a whole load of lower quality, and potentially very dodgy, stations, aren’t we risking contaminating the good stuff?

    Is this a case of “more = less”?

  78. @RichardLH at 3:03 am
    May I suggest a histogram of number of unique IDs against record length (in yearly buckets?)
    How about a reverse cumulative distribution of the number of sites that exceed X years of data at each site?

    Data sets used: 3 of 14 TAVG from http://berkeleyearth.org/source-files
    data_characterization.txt files from:
    Global Summary of Day TAVG
    Monthly Climatic Data of the world
    GHCN Monthly Version 3

    There are three lines on this chart. The top line is a simple
    DCOUNT(database, 1, Unique_Time > [X axis Value]) in Excel.
    I have NOT yet validated for duplicate stations that might be in the 3 datasets, so it could be lower. With this line, there are 34,513 stations with at least one month, so Zeke’s 40,000 stations isn’t wrong. But there are only 20,140 stations with more than 120 Unique_Times (10 years) and 10,140 with more than 360 months (30 years).

    In my opinion, if you are going to discard the absolute value of the temperature reading and rely on the trend of temperatures, any station with less than 30 years of data will do more harm than good.
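    For anyone wanting to reproduce the DCOUNT-style reverse-cumulative counts outside Excel, here is a minimal Python sketch. The record lengths below are invented for illustration, not taken from the real data_characterization.txt files:

    ```python
    # Reverse cumulative distribution: how many stations have MORE than X months
    # of "Unique Times"? Mirrors Excel's DCOUNT(database, 1, Unique_Time > X).
    def stations_exceeding(unique_times, threshold_months):
        """Count stations whose month count exceeds threshold_months."""
        return sum(1 for months in unique_times if months > threshold_months)

    # Hypothetical station record lengths in months (NOT the real BEST data).
    unique_times = [6, 14, 130, 250, 361, 480, 720, 60, 370, 12]

    total = stations_exceeding(unique_times, 0)           # any data at all
    ten_years = stations_exceeding(unique_times, 120)     # more than 10 years
    thirty_years = stations_exceeding(unique_times, 360)  # more than 30 years
    ```

    Evaluating the same function at every month threshold gives the full curve plotted above.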

  79. Stephen: Well with a minimum 30 year span at an individual station and expecting >90% data coverage, when using all of the databases listed above and merged together, I get only 31,485 Unique IDs.

    If I extend that to >95% data coverage the number drops to 24,088 Unique IDs.

    That is from everything combined and, as you say, that is somewhat less than Zeke’s 40,000 stations with only what I would consider to be a reasonable data validation criteria.

  80. Continuation of 8:09 am
    As I said, there are three lines on this plot. The upper blue line is raw count based upon how many Unique_Time are greater than X.

    In the data_characterization.txt files, there is a “Missing Values” column, a Beginning date column and an Ending date column. The Missing Values column should be about equal to the months between the beginning and ending dates minus the Unique_Times column. OK. So a 30-year station might have a gap of a month or two; no problem if you make reasonable assumptions that preserve long-term trends. But what is the sense of the number of gaps in the record?

    The lowest red line distribution is the blue line with the added criteria that “# of Missing Values is less than 10% of Unique Values” or no more than 1 in 11 months are missing. Under this measure, only
    9,717 sites have at least ONE year of data,
    7,113 sites have at least 10 years of data,
    4,343 sites have at least 30 years of data with less than 36 months of gaps.

    I think a 10% gap rate is generous, but if you treat the data well, it can still deliver good climate data. (Mind you, slicing it up into bite-size chunks does not qualify as “treating the data well”, but that is my opinion.) But that “less than 10% missing values” criterion only leaves us with 7,113 sites with as few as 10 years of data.

    What if we relax the missing-values criterion to 20%? It didn’t make much difference!
    I had to relax the missing-values criterion to 50% of Unique_Times, i.e. 1 month in 3 missing, to get the green line. With this very loose criterion, we get
    13,807 sites with at least 2 years of data,
    11,339 sites greater than 10 years of data,
    8,394 sites with 30 years of data.

    Compare the green and red lines. 11,339 − 7,113 = 4,226 sites have more than 10 years of data but are missing between 12 and 60 months (1–5 years’ worth) of data.
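    The missing-values screens (10% vs 50% of Unique_Times) can be expressed the same way; again the numbers below are invented examples, not the real site records:

    ```python
    # The gap screen: keep a station only if its "Missing Values" count is below
    # a chosen fraction of its Unique_Times (recorded months) count.
    def passes_gap_screen(unique_times, missing, max_fraction):
        """True if the station's gaps are under max_fraction of its record."""
        return missing < max_fraction * unique_times

    # Hypothetical (recorded_months, missing_months) pairs, not real BEST records.
    stations = [(360, 10), (360, 50), (360, 200), (120, 11), (120, 70)]

    kept_10pct = sum(passes_gap_screen(u, m, 0.10) for u, m in stations)
    kept_50pct = sum(passes_gap_screen(u, m, 0.50) for u, m in stations)
    ```

    Sweeping max_fraction from 0.1 to 0.5 would reproduce the spread between the red and green lines.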

    @Paul Homewood at 3:28 am
    Is this a case of “more = less”?
    Yes. It is a case of sausage making that would turn Upton Sinclair’s stomach.

    I think this WUWT post and comment thread from March 31, 2011 is still on point.

  81. Correction to Rasey:12/19 11:03 pm
    40,000 stations? By some measure maybe, but not all stations are created equal. But are fewer than 20% of them shorter LONGER than 30 years? Half shorter than 10 years. These stats aren’t looking at the segments yet.

    (sorry, it was well after midnight.)

  82. RichardLH at 8:31 am
    Well with a minimum 30 year span at an individual station and expecting >90% data coverage, when using all of the databases listed above and merged together, I get only 31,485 Unique IDs.

    If I extend that to >95% data coverage the number drops to 24,088 Unique IDs.

    I’m still using the three: GHCN Monthly V3, Global Summary of the Day, Monthly Climatic Data of the World. When I do a DCOUNT(database, ID, (Unique_Times greater than X AND Missing Values less than X)), i.e. 50% data coverage from beginning to end of station life, I get a maximum of 15,051 at 2 years, pretty flat from 14,500 down to 13,995 at 17 years, then it gradually merges with the blue.

  83. @Jeff Id 9:24 am.
    Do you know where there is a FIRST look at jackknife confidence intervals?

    What parameter do you want to analyze? There are thousands of degrees of freedom in the BEST process, because each breakpoint changes the jackknife of every other parameter.

    The answer for me is no. That would appear to me to be a monumental calculation if you had all the data in memory. What are your ideas?

  84. Zeke Hausfather says:
    December 19, 2013 at 8:55 am
    Mario Lento,
    Reanalysis products use weather models (not climate ones) fed by observations to estimate changes in temperature, precipitation, etc. at both the surface and various levels of the atmosphere. You can find more about them here: https://climatedataguide.ucar.edu/climate-data/atmospheric-reanalysis-overview-comparison-tables
    +++++++++
    Thank you Zeke: I’m a process control engineer. Typically I try to back up and see the big picture before delving into the process control science and engineering. Before drilling down to the details, I try to sort out what’s being done.

    From what I read, the BEST results are much more consistent with weather “models” than (observations?).

    That the BEST results agree with the models is evidence that the BEST (version of the) temperature data were good? In conclusion, the models prove the BEST results were right?

    My understanding is that the “models” are programmed to show that CO2 is the driver of warming, and that these models show more warming than observations. So a product (the models) that shows more warming than observations agreeing with BEST versions of data means the BEST versions of data also show more warming than observations. Right?

    Can I conclude that if temperature readings from the urban areas were completely removed and only rural areas were observed, the results would show less warming, and no longer fit the CO2-tuned models that show warming that is not observed?

    Pardon all the ways I am trying to state this, but the big picture can sometimes be obscured by the details.

  85. @RichardLH at 8:31 am
    Well with a minimum 30 year span at an individual station and expecting >90% data coverage, when using all of the databases listed above and merged together, I get only 31,485 Unique IDs.

    RE: Stephen Rasey 8:47 am
    The lowest red line distribution is the blue line with the added criteria [AND] that “# of Missing Values is less than 10% of Unique Values” or no more than 1 in 11 months are missing. Under this measure, only
    ….
    4,343 sites have at least 30 years of data with less than 36 months of gaps.

    So it looks like both of us were looking for something similar here: 30 years of station records or 360 Unique_Time value and no more than 36 months missing.

    We get very different results: your 31,485 to my 4,343.
    You say, “using all of the databases listed above and merged together”… Above where? All of the BEST data sources, and not just the three I chose to experiment with? Were you using the data_characterization file or links to the actual dated temperature records? Is the problem that I missed a big database? If so, what are the likely candidate databases that have such full coverage? It is impossible to tell from the page Zeke links to what the size and scope of the datasets are without first downloading them, so some hints about the one or two that might be more complete than the three I used (GHCN Monthly V3, Global Summary of the Day, Monthly Climatic Data of the World) would be helpful.

  86. Stephen: My fault, really. I was quoting from a fully merged database of ALL of the Global data sets: TMAX, TMIN and TAVG. The TAVG-only values are much lower, as are values using only the smaller number of data sets that just cover the USA. I suspect that given the right query criteria we are talking the same numbers.

    I have extracted all of the “data_characterization.txt” and “site_detail.txt” from the various zip files into an in memory data set now, I just need to tease out all of the relevant fields so that the query criteria are easier to manage and quote.

    I think the biggest take-home here is that the number of long-term data sets (say >150 years) of any real quality (i.e. 95% data coverage or better) is very, very low. This poses the question of how valid it is to infill the rest of the Globe/USA at those longer time frames and thus derive an accurate long-term temperature field. Surely the temperature field reconstruction must get less precise as the total number of reference points drops. The question is: at what number does it become just a ‘guess’ rather than a ‘fact’?

    Now to tackle the “data.txt” files to get a wider picture. I think I will start with the longest records and work downwards.

    As a comment from the data archivist that lurks inside me, the referencing of 39 LATEST.zip files without versioning or any other way of distinguishing between them is unlikely to be considered ‘best practice’ in a computing sense! Fortunately a simple URL-parsing procedure produces a more logical set of local zip files (though still without versions :-( ).
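    The URL-to-local-filename step mentioned above might look something like this in Python. The example URLs are invented (the real source-file URLs are not listed in this thread); the idea is just to use the parent path component to disambiguate 39 files that are all named LATEST.zip:

    ```python
    from urllib.parse import urlparse

    def local_name(url):
        """Build a distinguishable local filename from a source URL whose final
        path component is always 'LATEST.zip' (hypothetical URL layout)."""
        parts = urlparse(url).path.strip("/").split("/")
        # Join the parent directory name with the file name to avoid collisions.
        return "_".join(parts[-2:])

    urls = [
        "http://berkeleyearth.org/data/ghcn-monthly-v3/LATEST.zip",
        "http://berkeleyearth.org/data/gsod-tavg/LATEST.zip",
    ]
    names = [local_name(u) for u in urls]
    # names -> ['ghcn-monthly-v3_LATEST.zip', 'gsod-tavg_LATEST.zip']
    ```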

  87. Stephen Rasey
    I believe there is a problem in the way BEST calculates the CI; I have brought it to their attention multiple times without a single comment back. Not even a “we don’t agree”, but the problem is real.

    The Jackknife calculation ‘damages’ the dataset and looks for shifts in the resulting CI. It’s a neat technique, but the rescaling BEST does with the algorithm minimizes the outliers and violates the basic assumption of the jackknife principle. The resulting CI becomes a function of the probability function (or distribution shape if you prefer). I was curious if they had addressed the problem yet.

  88. Stephen Rasey
    I will reword this a little better:

    I believe there is a problem in the way BEST calculates the CI; I have brought it to their attention multiple times without a single comment back. Not even a “we don’t agree”, but the problem is real.

    The Jackknife calculation ‘damages’ the dataset and looks for shifts in the resulting CI. It’s a neat technique, but the rescaling BEST does with the algorithm minimizes the outliers and violates the basic assumption of the jackknife concept. The resulting CI is altered in unpredictable ways by the shape of the probability distribution of the temp series comprising the data. It probably isn’t a big deal, but we don’t know, because it certainly wasn’t accurately defined in the original paper and the authors have not addressed it as yet.

    I was curious if they had addressed the problem yet.
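    For readers unfamiliar with the jackknife concept under discussion, here is a minimal sketch of the textbook delete-one jackknife for the standard error of a simple mean. This is NOT BEST's scaled spatial jackknife, only an illustration of the baseline technique whose assumptions the rescaling is said to violate:

    ```python
    import math

    def jackknife_se(values):
        """Delete-one jackknife standard error of the sample mean: recompute
        the mean with each observation left out, then combine the spread of
        those replicates with the (n-1)/n jackknife variance factor."""
        n = len(values)
        replicates = [sum(values[:i] + values[i + 1:]) / (n - 1) for i in range(n)]
        mean_rep = sum(replicates) / n
        var = (n - 1) / n * sum((r - mean_rep) ** 2 for r in replicates)
        return math.sqrt(var)

    # For the plain mean, the jackknife SE matches the classical s / sqrt(n).
    se = jackknife_se([1, 2, 3, 4])
    ```

    The point being argued above is that this delete-one logic assumes the remaining data are left untouched; any rescaling applied after deletion changes what the replicates measure.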

  89. The BEST description of “outliers” seems to correspond to a description of best data. Making BEST the worst.

  90. @RichardLH at 1:35 pm
    Have you found the files containing breakpoints, yet?
    When BEST inserts breakpoints 2-5 years apart, it is hard to believe anything that follows.

    I looked at DENVER STAPLETON AIRPORT. The raw data runs from 1873 to 2011. Berkeley inserts breakpoints at:
    1941, 1947, 1968, 1980, 1982, 1984, 1986, 1994, 1996, 1999. End of record in 2011.

    Contrast these tidbits of history from Wikipedia:
    Created in 1919.
    Opened in 1929 as Denver Municipal; name changed after an expansion in 1944.
    Runway 17/35 and a new terminal building opened in 1964.
    Concourse D in 1972.
    New North/South runways in the 1980s. Concourse E in 1988.
    Closed in 1995 and operations moved to the new DENVER INTERNATIONAL AIRPORT about 15 miles further ENE on the open prairie, then the 2nd-largest airport in Colorado…. Stapleton had more gates and was closer to downtown. Government money at work!
    Now turned into an industrial park.

    Now, what station was contributing data to the record from 1995 to 2011?

    Berkeley’s breakpoints seem to be inversely correlated with airport changes. Go figure. I grew up there from 1960 to 1981 and I can tell you there was a lot of urban encroachment (UHI) over Stapleton’s history.

  91. @Zeke Hausfather 5:28 pm, 5:32pm
    Thanks, Zeke.

    Not only is the number of stations important, but the length of the usable records is important. I am not the only one who would like to see a distribution of station lengths between breakpoints, at points in time or across the whole dataset.

    Just above, I recounted some personal and documented changes to Stapleton (Denver, CO, USA). It is just one station (at least in theory), yet the station record has temperatures from before the station existed, after the station closed, and ten breakpoints, some as close as 2 years apart, in a record officially 130+ years long, when the station itself probably existed only from after 1919 to 1995. That seems like an excessive number of breakpoints, especially when they don’t correlate well with documented airport expansion.

    So, can you give a more direct link to the breakpoint-related tables instead of somewhere off-page?

  92. I have something of a major bone to pick with only presenting trend maps since 1979. Much of the American Southeast has seen a long-term cooling trend, warming in the last 30 years notwithstanding. Presenting the data the way you do enhances the impression that everywhere warms and cools together; this is not the case.

  93. I have no idea why my comment has been completely ignored. It seems to be the only legitimate problem with the series.

    I feel like Oliver now. Is it rude?

  94. As it turns out the 40,000 station result is probably the answer to an improperly framed question.

    A better and possibly more relevant question to understanding the underlying accuracy of the BEST sampling methodology might be

    “How many of the 1 degree latitude and longitude grid cells have measured as opposed to estimated data in them and what is their Global and temporal distribution?”

    This is because the combined set is a merge from separate data sets, and a simple query on the result shows that what BEST calls ‘Station ID’ is not a unique identifier of a place but of a record from a published source.

    This means that querying the database for unique ‘Station ID’ over represents the number of actual inputs to the later method steps.

    To be fair they have never claimed otherwise but it can easily lead to a misunderstanding of the figures given.

  95. The answer to the first part of my question (Global distribution of grid cells with coverage) at first pass is

    GHCN Monthly version 3 TAVG: 7,429 detail lines
    Global Summary of the Day TAVG – Monthly: 24,612 detail lines
    GSN Monthly Summaries from NOAA TAVG: 1,060 detail lines
    Hadley Centre _ CRU TAVG: 5,261 detail lines
    Monthly Climatic Data of the World TAVG: 2,919 detail lines
    Scientific Committee on Antarctic Research TAVG: 256 detail lines
    US Cooperative Summary of the Day TAVG – Monthly: 3,451 detail lines
    US Cooperative Summary of the Month TAVG: 13,034 detail lines
    US First Order Summary of the Day TAVG – Monthly: 766 detail lines
    US Historical Climatology Network – Monthly TAVG: 1,367 detail lines
    World Monthly Surface Station Climatology TAVG: 4,795 detail lines
    World Weather Records TAVG: 2,009 detail lines

    1 degree grid cells with measured coverage = 9,495 of 129,600

  96. My apologies, that should read

    1 degree grid cells with measured coverage = 9,495 of 38,880

    to fairly reflect that this is Land only coverage. The previous post included Ocean cells which are not part of the BEST study of course.
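    The grid-cell census described in these comments can be reproduced in outline with a few lines of Python. The station coordinates below are hypothetical, not drawn from the BEST files; the point is just the binning of stations into 1-degree cells and counting distinct occupied cells:

    ```python
    import math

    def cell_index(lat, lon):
        """Map a station to its 1-degree lat/lon grid cell (SW corner)."""
        return (math.floor(lat), math.floor(lon))

    def cells_with_coverage(stations):
        """Count distinct 1-degree cells containing at least one station."""
        return len({cell_index(lat, lon) for lat, lon in stations})

    # Hypothetical coordinates; the first and last fall in the same cell.
    stations = [(39.7, -104.9), (39.8, -103.2), (51.5, -0.1), (39.75, -104.95)]
    covered = cells_with_coverage(stations)
    ```

    Running this over all station locations, rather than counting Station IDs, answers the reframed question directly.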

  97. @RichardLH at 2:12 am, 3:16 am
    As it turns out the 40,000 station result is probably the answer to an improperly framed question.
    An interesting point. However, I think it has been clear that our question has always been, “How many stations are there, and how long are the temperature records?” Counting StationID has always been a quick and cheap first cut at an upper bound.

    You do ask a very good question of how many 1×1 grid cells have at least one StationID. 9,495 cells out of 38,880 is a bit pessimistic, though, since most high-latitude cells will be unrepresented, but those are much narrower than tropical cells. Still, it is a good first-pass estimate.

    So next, do the census in bands of 15 degrees latitude.
    And then, how many cells have at least 30 years of data?

    There are a lot of good questions that can be asked.
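    On the point that high-latitude cells are narrower than tropical ones: the exact surface-area fraction of a latitude band on a sphere is (sin(hi) − sin(lo))/2, which is one way to weight the suggested 15-degree band census. A small illustrative Python check:

    ```python
    import math

    def band_area_fraction(lat_lo, lat_hi):
        """Fraction of a sphere's surface between two latitudes (degrees):
        (sin(hi) - sin(lo)) / 2."""
        return (math.sin(math.radians(lat_hi)) -
                math.sin(math.radians(lat_lo))) / 2

    # 15-degree bands from pole to pole tile the whole sphere.
    bands = [(lo, lo + 15) for lo in range(-90, 90, 15)]
    total = sum(band_area_fraction(lo, hi) for lo, hi in bands)

    # A tropical band holds far more area than a polar band of equal width.
    tropical = band_area_fraction(0, 15)
    polar = band_area_fraction(75, 90)
    ```

    Weighting occupied cells by such fractions would temper the raw-count pessimism noted above.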

    @2:53 am
    A very handy crib sheet. This ought to be on BEST’s data source page.

    BTW, have you loaded these files into a relational database? If so, what flavor?

  98. Stephen Rasey says:

    December 21, 2013 at 11:50 pm

    ” However, I think it has been clear that our question has always been, How many stations and how long are the temperature records.”

    Indeed, but I think it does need to be clear in order not to fool yourself, even accidentally. If all of those records are in half the surface area then the claims made need to be less precise or at least more qualified.

    ” 9,495 cells out of 38,880 is a bit pessimistic ”

    and, of course, wrong! It should be 180 * 360 * 0.29 = 18,792, though you could argue about the exact figure and what you count as a land cell. As to pessimistic, they are claiming that this is representative of the Global temperature figure by extrapolation of the data into cells with no measurements, so a handle on how much of that is done is, I think, valid and relevant information.

    These are cells with ANY information at all in them with no regard to quality or length to get to the 9,495 figure. The truly useable cells will be a lot lower, especially when record lengths and any gaps are taken into account. I am trying to work out what would be the best way of describing how the percentage/quality coverage varies with time.

    “BTW, have you loaded these files into an relational Database? If so, what flavor?”

    An in-memory C# set of tables keyed on Station ID. It’s a bit of a mess right now as I have not fully parsed out the records; I have just kept each as a line of text and parsed on retrieval. More work still to do.

  99. @Stephen
    Well this is probably my last day looking at this. Xmas calls :-).

    Database input is now nearly complete. So far we have the following (cumulative counts).
    The validation fails are slightly surprising! The Lat/Long fails are mainly from transposed columns. I assume that this does not leak through to the calculation stages.

    Import database: GCOS Monthly Summaries from DWD TAVG
    Data records 96950
    Data character records 1153
    Data flag records 1
    Flag records 96898
    Site comp details records 1153
    ****** Validation fail ********* ID: 42 ZHGONGSHAN LatUncertainty: 1.85000
    ****** Validation fail ********* ID: 86 RIO DE JANEIRO (GALEAO AE LatUncertainty:22.82500
    ****** Validation fail ********* ID: 92 ARICA (CHACALLUTA AERO) LatUncertainty:18.35833
    ****** Validation fail ********* ID: 93 CHARANA LatUncertainty:17.58333
    ****** Validation fail ********* ID: 95 LUBANGO (SA DA BANDEIRA) LatUncertainty:14.93333
    ****** Validation fail ********* ID: 115 CHACHAPOYAS LatUncertainty: 6.20833
    ****** Validation fail ********* ID: 85284 PUERTO CASADO LatUncertainty:22.28333
    ****** Validation fail ********* ID: 85285 WAGGA WAGGA AIRPORT LatUncertainty:35.16667
    ****** Validation fail ********* ID: 85286 MORUYA HEADS PILOT STATIO LatUncertainty:35.91667
    ****** Validation fail ********* ID: 85287 PUNTA ARENAS (CARLOS IBAN LatUncertainty:53.00833
    ****** Validation fail ********* ID: 85288 CUNDERDIN AIRFIELD LatUncertainty:31.62500
    ****** Validation fail ********* ID: 85289 UNIV. WISC. #8931 (MARILY LatUncertainty:79.95833
    Site Details records 1153
    Site FlagDefs records 8
    Site Flag records 1153
    Site Summary records 1153
    Source FlagDefs records 1
    Source records 96893
    Station change records 394

    Import database: GHCN Daily TAVG – Monthly
    Data records 5261798
    Data character records 16222
    Data flag records 22
    Flag records 5261694
    Site comp details records 16222
    Site Details records 16222
    Site FlagDefs records 9
    Site Flag records 1673
    Site Summary records 16222
    Source FlagDefs records 9
    Source records 5261684
    Station change records 394

    Import database: GHCN Monthly version 2 TAVG
    Data records 12071139
    Data character records 23502
    Data flag records 22
    Flag records 12070983
    Site comp details records 23502
    ****** Validation fail ********* ID: 44543 AMUNDSEN-SCOT LatUncertainty: 5.00000
    ****** Validation fail ********* ID: 51722 SHIP N LatUncertainty: 5.00000
    Site Details records 23502
    Site FlagDefs records 9
    Site Flag records 1698
    Site Summary records 23502
    Source FlagDefs records 10
    Source records 12070968
    Station change records 394

    Import database: GHCN Monthly version 3 TAVG
    Data records 17234198
    Data character records 30782
    Data flag records 28
    Flag records 17233990
    Site comp details records 30782
    ****** Validation fail ********* ID: 51823 AMUNDSEN-SCOT LatUncertainty: 5.00000
    ****** Validation fail ********* ID: 59001 SHIP N LatUncertainty: 5.00000
    Site Details records 30782
    Site FlagDefs records 9
    Site Flag records 1707
    Site Summary records 30782
    Source FlagDefs records 31
    Source records 17233970
    Station change records 394

    Import database: Global Summary of the Day TAVG – Monthly
    Data records 21680106
    Data character records 55245
    Data flag records 36
    Flag records 21679846
    Site comp details records 55245
    ****** Validation fail ********* ID: 6281 MOBILE UA STN ATLANT LatUncertainty: 5.00000
    ****** Validation fail ********* ID: 131830 AMUNDSEN-SCOTT LatUncertainty: 5.00000
    ****** Validation fail ********* ID: 131831 AMUNDSEN-SCOTT LatUncertainty: 5.00000
    ****** Validation fail ********* ID: 131832 CLEAN AIR LatUncertainty: 5.00000
    Site Details records 55245
    Site FlagDefs records 9
    Site Flag records 3381
    Site Summary records 55245
    Source FlagDefs records 32
    Source records 21679821
    Station change records 394

    Import database: GSN Monthly Summaries from NOAA TAVG
    Data records 22374430
    Data character records 56156
    Data flag records 36
    Flag records 22374118
    Site comp details records 56156
    Site Details records 56156
    Site FlagDefs records 9
    Site Flag records 4016
    Site Summary records 56156
    Source FlagDefs records 33
    Source records 22374088
    Station change records 394

    Import database: Hadley Centre _ CRU TAVG
    Data records 26491184
    Data character records 61268
    Data flag records 36
    Flag records 26490820
    Site comp details records 61268
    Site Details records 61268
    Site FlagDefs records 9
    Site Flag records 4156
    Site Summary records 61268
    Source FlagDefs records 34
    Source records 26490785
    Station change records 394

    Import database: Monthly Climatic Data of the World TAVG
    Data records 26890092
    Data character records 64038
    Data flag records 36
    Flag records 26889676
    Site comp details records 64038
    Site Details records 64038
    Site FlagDefs records 9
    Site Flag records 6926
    Site Summary records 64038
    Source FlagDefs records 35
    Source records 26889636
    Station change records 394

    Import database: Scientific Committee on Antarctic Research TAVG
    Data records 26923756
    Data character records 64145
    Data flag records 36
    Flag records 26923288
    Site comp details records 64145
    ****** Validation fail ********* ID: 81315 Amundsen Scott LatUncertainty: 5.00000
    ****** Validation fail ********* ID: 81316 Clean Air LatUncertainty: 5.00000
    ****** Validation fail ********* ID: 81325 Byrd LatUncertainty: 5.00000
    Site Details records 64145
    Site FlagDefs records 9
    Site Flag records 7033
    Site Summary records 64145
    Source FlagDefs records 36
    Source records 26923243
    Station change records 394

    Import database: US Cooperative Summary of the Day TAVG – Monthly
    Data records 27051117
    Data character records 67443
    Data flag records 42
    Flag records 27050597
    Site comp details records 67443
    Site Details records 67443
    Site FlagDefs records 9
    Site Flag records 8072
    Site Summary records 67443
    Source FlagDefs records 38
    Source records 27050547
    Station change records 394

    Import database: US Cooperative Summary of the Month TAVG
    Data records 32156536
    Data character records 80328
    Data flag records 43
    Flag records 32155964
    Site comp details records 80328
    Site Details records 80328
    Site FlagDefs records 9
    Site Flag records 9842
    Site Summary records 80328
    Source FlagDefs records 39
    Source records 32155909
    Station change records 394

    Import database: US First Order Summary of the Day TAVG – Monthly
    Data records 32298284
    Data character records 80945
    Data flag records 49
    Flag records 32297660
    Site comp details records 80945
    Site Details records 80945
    Site FlagDefs records 9
    Site Flag records 9847
    Site Summary records 80945
    Source FlagDefs records 44
    Source records 32297600
    Station change records 394

    Import database: US Historical Climatology Network – Monthly TAVG
    Data records 33824769
    Data character records 82163
    Data flag records 50
    Flag records 33824093
    Site comp details records 82163
    Site Details records 82163
    Site FlagDefs records 11
    Site Flag records 11065
    Site Summary records 82163
    Source FlagDefs records 45
    Source records 33824028
    Station change records 394

    Import database: World Monthly Surface Station Climatology TAVG
    Data records 35332211
    Data character records 86790
    Data flag records 50
    Flag records 35331483
    Site comp details records 86790
    ****** Validation fail ********* ID: 60235 AWS: BYRD (8903) LatUncertainty: 5.00000
    ****** Validation fail ********* ID: 61620 SHIP STATION N OCEAN WEATHER S LatUncertainty: 5.00000
    Site Details records 86790
    Site FlagDefs records 12
    Site Flag records 14856
    Site Summary records 86790
    Source FlagDefs records 46
    Source records 35331413
    Station change records 925

    Import database: World Weather Records TAVG
    Data records 35538706
    Data character records 88650
    Data flag records 50
    Flag records 35537926
    ****** Validation fail ********* ID: 15507 DABO-SINGKEP WMOID :9.600000e+035
    ****** Validation fail ********* ID: 15519 SIMPANGTIGA-PEKANBARU WMOID :960000000
    ****** Validation fail ********* ID: 86671 KIJANG TANJUNG PINANG WMOID :9.600000e+009
    ****** Validation fail ********* ID: 86675 TAREMPA WMOID :9.600000e+036
    Site comp details records 88650
    ****** Validation fail ********* ID: 15507 DABO-SINGKEP WMOID :9.600000e+035
    ****** Validation fail ********* ID: 15516 PADANGKEMILING BENGKULU LatUncertainty: 3.76667
    ****** Validation fail ********* ID: 15519 SIMPANGTIGA-PEKANBARU WMOID :960000000
    ****** Validation fail ********* ID: 86671 KIJANG TANJUNG PINANG WMOID :9.600000e+009
    ****** Validation fail ********* ID: 86675 TAREMPA WMOID :9.600000e+036
    Site Details records 88650
    Site FlagDefs records 12
    Site Flag records 14924
    Site Summary records 88650
    Source FlagDefs records 47
    Source records 35537851
    Station change records 925

    1 degree Lat/Long grid cells with any coverage : 9660

  100. Zeke says in the beginning of the video that stations show patterns that are not ‘thermodynamically’ plausible.

    This is data peeking, unless explicitly shown otherwise.
