An analysis of BEST data for the question: Is Earth Warming or Cooling?

Guest essay by Clyde Spencer

The answer to the question is, “Yes!” Those who believe that Earth is going to Hell in a handbasket because of anthropogenic carbon dioxide go to extraordinary lengths to convince the public that uninterrupted warming is occurring at an unprecedented rate. One commonly reads something to the effect that the most recent year was the xth warmest year in the last n years (use your personal preferences for x and n), or that the last n years have been the warmest in the last m years. It is common for NOAA to claim that current temperatures are higher than those of some previous year by an amount that is of the same order of magnitude as the uncertainty in the temperature of the year being compared to. [For an extended discussion and analysis of the veracity of these kinds of claims, go to this link: http://www.factcheck.org/2015/04/obama-and-the-warmest-year-on-record/ ] I’d like to start by examining the logical fallacy in the common idea that these pronouncements support continued warming. They only provide evidence that it is currently warm!

Let’s conduct a simple thought experiment that most can relate to. Imagine that you have a pot of water on the stove at room temperature. You place a thermometer in the water, take a reading, and turn on the heat. We’ll monitor the increase in temperature by taking frequent readings at fixed intervals. Assume that the thermometer is calibrated in tenths of a degree, and we’ll try to read it to the nearest half of a tenth. Therefore, we can expect some random errors in the reported temperature because of observation errors. If the pot is not well stirred, some stratification may occur that will further obscure the true average temperature. We can expect to see a steady, approximately linear increase in temperature until the water is nearly at the boiling point. The pot is then removed from the heat, and readings are continued as before. We can expect the water in the pot to cool more slowly than it heated, the rate depending on such factors as the surface-to-volume ratio, the room temperature, and the material of which the pot is constructed. In any event, we can expect that the temperature readings will not change much, if at all, for the first couple of readings. Subsequent readings may or may not be lower because of the random errors mentioned above. Eventually, we will get a reading that is obviously lower than when we removed the pot from the heat. A subsequent one could be slightly higher because of a reading error. If we were to stop at that point, we could make such statements as, “The last n readings are higher than the average of all previous temperatures, which proves that the water is still heating.” Or, “The last n readings are the highest ever recorded.” Or another classic, for one of the last readings that had a random error: “The probability that the last reading is higher than all other temperatures is 38%.” We know very well that the pot is no longer heating, and it is just sophistry to try to make it appear that it is.
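For readers who want to play with this, the pot experiment is easy to simulate. Below is a minimal Python sketch (entirely synthetic numbers, chosen only for illustration): the pot heats for ten readings, then cools slowly, yet the most recent readings are still among the highest “ever recorded.”

```python
import random

random.seed(42)

def noisy(true_temp, sigma=0.05):
    """A thermometer reading with a small random observation error."""
    return true_temp + random.gauss(0, sigma)

# True temperatures: heat from 20 C toward boiling, then remove from the heat
# and cool slowly (made-up rates, for illustration only).
heating = [20.0 + 7.5 * t for t in range(11)]    # readings 0..10
cooling = [95.0 - 0.4 * t for t in range(1, 6)]  # readings 11..15
observed = [noisy(T) for T in heating + cooling]

# The pot stopped heating at reading 10, yet the last three readings still
# exceed every reading taken before the final stretch of heating.
print(max(observed[:10]) < min(observed[-3:]))  # → True
```

The point is the one made above: “highest readings on record” is evidence of being warm, not evidence of still warming.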

Something I find peculiar about modern climatology is the use of so-called temperature anomalies. Anomalies are not unheard of in other disciplines, but there are usually good reasons for them, such as simplifying a Fourier analysis of a time series. One issue with using anomalies is that if a published graph is reproduced and separated from the metadata in the text of the article, then one is at a loss to know what the anomalies mean; they lose their context. Another issue is that authors are free to choose whatever base period they want, which may not be the same as others use, making it difficult to compare similar analyses. The psychological impression conveyed is that (recent) data points above the baseline are extraordinary. Lastly, the use of anomalies tends to influence the subjective impression of the magnitude of changes, because very small changes are scaled over the full vertical range of the graph. See Figure 2 below, which shows actual temperatures, for a comparison to the anomalies that you are used to seeing in the literature.
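To see how the choice of base period changes the picture, here is a minimal Python sketch with made-up annual means (not BEST data). The anomaly values differ between the two baselines even though the underlying temperatures are identical; only the shape is shared.

```python
# Hypothetical annual mean temperatures (deg C); invented for illustration.
temps = [14.0, 14.1, 13.9, 14.2, 14.3, 14.1, 14.4, 14.5, 14.3, 14.6]

def anomalies(values, base_indices):
    """Express values relative to the mean over a chosen base period."""
    base = sum(values[i] for i in base_indices) / len(base_indices)
    return [v - base for v in values]

a_early = anomalies(temps, range(0, 5))   # base period: first five years
a_late = anomalies(temps, range(5, 10))   # base period: last five years

# Same data, different baselines: the final-year anomaly shrinks by the
# difference between the two base-period means.
print(round(a_early[-1], 2), round(a_late[-1], 2))

# The shape is unchanged: year-to-year differences are identical.
print(round(a_early[1] - a_early[0], 2) == round(a_late[1] - a_late[0], 2))
```

Separated from its caption, a reader of either anomaly series has no way to recover the base period, which is exactly the loss-of-context problem described above.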

In the recent NOAA paper by Karl et al. (2015), the authors decided to adjust modern ocean-buoy temperatures upward to agree with older, problematic, engine-room water-intake temperatures. The decision to adjust high-quality data to agree with lower-quality data is, at best, unorthodox, and the authors did not give a good reason for it. As one defender remarked, whether one adds temperatures to the anomalies on the right or subtracts them on the left, the slope stays the same. True, but the result is a higher ending temperature than if the more orthodox approach had been taken. Supporters of anthropogenic global warming are then ‘justified’ in claiming an uninterrupted increase in recent temperatures, and any claimed pause in warming becomes an illusion.

I take exception to the practice of conflating Sea Surface Temperatures (SST) with land air temperatures. There are several issues with this practice. The weak excuse offered is that there is a strong correlation between SST and nighttime air temperatures, but that is hardly a justification with modern instrumentation. The biggest problem is that the heat capacity of water is so high that water exhibits strong thermal inertia. That means warm water influenced by contact with the air will always lag colder air temperatures. Thus, even if Earth were to enter a cooling phase, water would be the last to provide evidence for it. Because the theory behind so-called ‘greenhouse warming’ predicts that the air should heat first (or, more properly, cool more slowly), the most sensitive indicator of changes will be found in air temperatures. Using ocean temperatures is analogous to averaging subsurface land temperatures with land air temperatures. At relatively shallow depths in the soil, the diurnal temperature changes are smoothed out and, at greater depths, even the seasonal effects are eliminated. However, we don’t average subsurface ground temperatures with land air temperatures! Why should we average SSTs with land air temperatures? It is a classic example of comparing apples and oranges. SSTs are of interest and provide climate insights, but they should not be averaged with air temperatures!

Lastly, global averages of all temperature readings typically are reported instead of the high and low temperatures. This is important because the highs and lows behave differently, and the lows should be a better indicator of the impact of the so-called ‘greenhouse’ effect.

clyde-spencer-fig1

Fig 1.

Figure 1, above, which shows the differences between the high and low temperatures from the Berkeley Earth Surface Temperature (BEST) data, appears to reflect some abrupt transitions in the behaviors of the two temperatures. My interpretation of Figure 1 is that between about 1870 and 1900, neither the high nor the low temperatures were changing systematically. Then, between about 1900 and 1983, the low temperatures were increasing more rapidly than the high temperatures, causing a decline in the differences. This is what I would expect for a ‘greenhouse’ signal. However, since 1983, it appears that the high temperatures have been increasing more rapidly than the lows, resulting in a steep increase in the difference in the temperatures. I don’t believe that this has been reported before and is begging for an explanation since it isn’t something I would expect from carbon dioxide and water vapor alone.

This brings us to the point of my expanded analysis of the BEST temperature data set. Figure 2, below, shows the high and low temperatures for the period of 1870 to mid-2014. The data set starts earlier than 1870, but the uncertainty is so great in the early data that I didn’t feel it contributed much. [Should the reader be interested, there is a graph of land temperature data starting about 1750 at this link: http://berkeleyearth.lbl.gov/regions/global-land ] The main thing worth noting is that the high temperatures were increasing rapidly in the two decades before my graphs start and the lows were coming down from a high in about 1865. The pastel shading reflects the 95% uncertainty range, which becomes imperceptible by the present day. The green, smooth line is a 6th-order polynomial fit of monthly temperature data that have been smoothed. Rather than attempt any further smoothing of the once-smoothed data, I chose to model the low-frequency response with a polynomial least-squares fit trend-line. This approach to characterizing recent temperature changes is more sophisticated than drawing straight lines through the data, where one is free to choose the start and stop times subjectively; subjective time-periods allow for conscious or unconscious mischief.
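For readers who want to reproduce the fitting step, here is a minimal Python sketch using NumPy on synthetic data (a stand-in for the BEST series, whose data files are linked in the references; the signal shape and noise level here are invented). Centering the time axis keeps the 6th-order fit numerically well-conditioned:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a smoothed monthly series, 1870 to 2014.
years = np.linspace(1870, 2014, 12 * 144)
signal = 8.5 + 0.6 * np.sin((years - 1870) / 25.0) + 0.005 * (years - 1870)
temps = signal + rng.normal(0, 0.2, years.size)

# 6th-order least-squares polynomial fit; centering the x-axis avoids
# the ill-conditioning of raw year values raised to the 6th power.
x = years - years.mean()
coeffs = np.polyfit(x, temps, deg=6)
fitted = np.polyval(coeffs, x)

# Coefficient of determination (R^2): fraction of variance captured.
ss_res = np.sum((temps - fitted) ** 2)
ss_tot = np.sum((temps - temps.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(round(r_squared, 2))
```

The same three steps, fit, evaluate, compute R², applied to the actual TMAX and TMIN files give the variance-captured figures quoted in the next section.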

clyde-spencer-fig2

Fig. 2.

The 6th-order fit captures nearly 80% of the variance in the high-temperature data. It notably doesn’t do an optimal job of capturing the transient warming events around 1878 and 1902, or the broader warming event of the 1940s. Visually, the 6th-order fit does a good job of characterizing the data from about 1950 to the present day, which is important for the question at hand: whether we are still experiencing warming. Similarly, the 6th-order fit captures more than 89% of the variance in the low-temperature data; visually, the fit appears superior to that for the high-temperature data. By comparison with a graph generated from the BEST long-term smoothed data, these regression curves are smoother than the 20-year moving average, but similarly shaped. The point of this exercise, though, isn’t to smooth the data.

It is easy to take the first-derivative of a polynomial function and obtain quantitative values for the slope (tangent) of the temperature-curve versus time. That is, one can obtain annual values of the warming rate for every month for both the high and low-temperature global averages.
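This step is mechanical. Here is a minimal Python sketch of the derivative-and-classify procedure on a toy cubic (not the BEST fit; the polynomial is invented so the turning points can be checked by hand):

```python
import numpy as np

# Toy "temperature curve": T(x) = x^3 - 3x, in arbitrary time units.
# Its derivative, 3x^2 - 3, is zero at x = -1 and x = +1.
coeffs = np.array([1.0, 0.0, -3.0, 0.0])

slope = np.polyder(coeffs)        # first derivative: 3x^2 - 3
zero_slope = np.sort(np.roots(slope))

# The sign of the second derivative classifies each zero-slope point:
# negative curvature is a peak (temperature high), positive is a trough.
curvature = np.polyder(slope)
for x in zero_slope:
    kind = "peak" if np.polyval(curvature, x) < 0 else "trough"
    print(round(float(x.real), 3), kind)
```

Applied to the 6th-order fits, the same few lines yield the zero-slope years and maximum-warming years listed in the tables below.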

In order to pick up the last six months of 2014, which are missing from the 12-month smoothed data, I repeated the above analysis with the un-smoothed monthly data. There were no surprises, other than that the extrapolated slopes for the last six months of 2014 from the smoothed data were nearly identical to the slopes from the un-smoothed monthly data; the differences are trivial. That the results are similar for the last six months is surprising because, all too often, when one tries to extrapolate a polynomial fit beyond the actual data, the curve diverges abruptly! The polynomial coefficients are very similar for both the smoothed and un-smoothed data. The only advantage to showing the un-smoothed monthly data would be to emphasize how much noisier it is than the smoothed data; for brevity, I have omitted the additional graph. Polynomial regressions of lower order gave lower coefficients of determination (R²) and, subjectively, are poorer fits visually.

Let me summarize what the slopes tell us about the temperature records with the tables below. I’ve listed the approximate years when the highs and lows had zero slope (no warming), maximum slope (maximum warming/cooling, point of inflection on the curve), and what has been happening most recently. The slopes are in degrees Celsius change per year. Examine Figure 2 to verify what I’m saying.

High Temperatures

Year    Slope (°C/yr)   Character          Temperature
1870     0.020                             Increasing
1875     0              Peak               High
1883    -0.008          Inflection Point   Changing
1896     0              Trough             Low
1917     0.015          Inflection Point   Changing
1943     0              Peak               High
1956    -0.005          Inflection Point   Changing
1968     0              Trough             Low
1998     0.039          Inflection Point   Changing
2013     0              Peak               High
2014    -0.012                             Decreasing

Low Temperatures

Year    Slope (°C/yr)   Character          Temperature
1870    -0.006                             Decreasing
1876    -0.011          Inflection Point   Changing
1890     0              Trough             Low
1916     0.019          Inflection Point   Changing
1954     0.004          Inflection Point   Changing
1994     0.033          Inflection Point   Changing
2010     0              Peak               High
2014    -0.037                             Decreasing

For the high-temperature series, the slopes start at a rate of about 0.020°C per year in 1870, decline to 0 about 1875 (temperature-high), become negative and reach -0.008°C per year by 1883. The slopes then change direction, become zero about 1896 (temperature-low), increase to 0.015°C per year by 1917, then again decline, reaching zero about 1943 (temperature-high). The slopes continue to decline until about 1956 (point of inflection), reverse direction and reach zero again about 1968 (temperature-low). This now is the beginning of the much heralded ‘modern warming,’ reaching a maximum of about 0.039°C per year in 1998, and then declining to zero (temperature-high) in 2013. That is to say, the rate of warming started to decline about 1998. The time series closes out 2014 at a rate of -0.012°C per year. The average warming rate for the period 1870 through 2014 was 0.9°C per century.

The low-temperature slopes follow a similar pattern: Starting in 1870 with a rate of -0.006°C per year, reaching a minimum of about -0.011°C per year in 1876, reversing direction (point of inflection), reaching a slope of zero in 1890 (temperature-low), and then increasing to a maximum (point of inflection) of over 0.019°C per year in 1916. The slopes then decline to about 0.004°C per year in 1954; they then climb to a maximum of almost 0.033°C per year in 1994 (point of inflection). The slope then decreases to zero (temperature-high) in late-2009, and becomes negative for the remainder of the record, ending 2014 with a slope of -0.037°C per year! That is to say, the rate of warming started to decline about 1994. The average warming rate for the period 1870 through 2014 was 1.1°C per century.

Thus, the low-temperature averages have been increasing slightly more than the highs, which is what I would expect from reduced radiative cooling at night and in the winter. However, there isn’t a big difference between the two, and there is a need to explain the recent anomalous increase in the high temperatures (post-1983) shown in Figure 1. To put this into perspective, the 144-year temperature increases have been less than the 95% uncertainties of the monthly temperatures in 1870! There is an abrupt change in temperature differences around 1950, as shown in Figure 1; a close examination of Figure 2 suggests that there is also an abrupt change in the temperature uncertainties at about the same time. This is something that needs explanation. Something to consider is whether the predictions of future heat waves are reliable, given that it is the low temperatures that have shown the greatest and most consistent increase over the last 144 years. Furthermore, it has been claimed that warming in the Arctic is at least twice the rate of the rest of the globe (Screen et al., 2012), biasing the global averages upward. Is it reasonable then to expect heat waves at mid-latitudes from extrapolating global averages?

In summary, Fig. 2 does not support a claim that 2014 had the highest high or low temperatures in modern times, and the analysis suggests we are currently in a cooling phase, not just a plateau.



References

Estimated Global Land-Surface TMAX based on the Complete Berkeley Dataset: http://berkeleyearth.lbl.gov/auto/Global/Complete_TMAX_complete.txt

Estimated Global Land-Surface TMIN based on the Complete Berkeley Dataset: http://berkeleyearth.lbl.gov/auto/Global/Complete_TMIN_complete.txt

Karl, Thomas R., Anthony Arguez, Boyin Huang, Jay H. Lawrimore, James R. McMahon, Matthew J. Menne, Thomas C. Peterson, Russell S. Vose, and Huai-Min Zhang, (2015); Possible artifacts of data biases in the recent global surface warming hiatus; Science, 26 June 2015, Vol. 348, no. 6242, pp. 1469-1472: https://www.ncdc.noaa.gov/news/recent-global-surface-warming-hiatus

Screen, J. A., C. Deser, and I. Simmonds, (2012); Local and remote controls on observed Arctic warming; Geophys. Res. Lett., Vol. 39, L10709, doi:10.1029/2012GL051598: http://onlinelibrary.wiley.com/doi/10.1029/2012GL051598/pdf


143 thoughts on “An analysis of BEST data for the question: Is Earth Warming or Cooling?”

  1. “This is what I would expect for a ‘greenhouse’ signal. However, since 1983, it appears that the high temperatures have been increasing more rapidly than the lows, resulting in a steep increase in the difference in the temperatures. I don’t believe that this has been reported before and is begging for an explanation since it isn’t something I would expect from carbon dioxide and water vapor alone.”

    read the literature. AR5
    section 2.4.1.2 Diurnal Temperature Range

    No dedicated global analysis of DTR has been undertaken subsequent to Vose et al. (2005a) although global behaviour has been discussed in two broader ranging analyses. Rohde et al. (2012) note an apparent reversal since the mid-1980s; with DTR subsequently increasing. This decline and subsequent increase in DTR over global land surfaces is qualitatively consistent with the dimming and subsequent brightening noted in Section 2.3.3.1. Donat et al. (2013c) using HadEX2 (Section 2.6) find significant decreasing DTR trends in over half of the land areas assessed but less than 10% of land with significant increases since 1951. Available trend estimates (–0.04 ± 0.01°C per decade over 1950–2011 (Rohde et al., 2012), –0.066°C per decade over 1950–2004 (Vose et al., 2005a)) are much smaller than global mean LSAT average temperature trends over 1950–2012 (Table 2.4). It therefore logically follows that globally averaged maximum and minimum temperatures over land have both increased by in excess of 0.1°C per decade since 1950.

    Regionally, Makowski et al. (2008) found that DTR behaviour in Europe over 1950 to 2005 changed from a decrease to an increase in the 1970s in Western Europe and in the 1980s in Eastern Europe. Sen Roy and Balling (2005) found significant increases in both maximum and minimum temperatures for India, but little change in DTR over 1931–2002. Christy et al. (2009) reported that for East Africa there has been no pause in the narrowing of DTR in recent decades. Zhou and Ren (2011) reported a significant decrease in DTR over mainland China of –0.15°C per decade during 1961–2008.

    Various investigators (e.g., Christy et al. (2009), Pielke and Matsui (2005), Zhou and Ren (2011)) have raised doubts about the physical interpretation of minimum temperature trends, hypothesizing that microclimate and local atmospheric composition impacts are more apparent because the dynamical mixing at night is much reduced. Parker (2006) investigated this issue arguing that if data were affected in this way, then a trend difference would be expected between calm and windy nights. However, he found no such minimum temperature differences on a global average basis. Using more complex boundary layer modelling techniques Steeneveld et al. (2011) and McNider et al. (2012) showed much lower sensitivity to windspeed variations than posited by Pielke and Matsui but both concluded that boundary layer understanding was key to understanding the minimum temperature changes. Data analysis and long-term side-by-side instrumentation field studies show that real non-climatic data artefacts certainly affect maximum and minimum differently in the raw records for both recent (Fall et al., 2011; Williams et al., 2012) and older (Bohm et al., 2010; Brunet et al., 2011) records. Hence there could be issues over interpretation of apparent DTR trends and variability in many regions (Christy et al., 2006; Christy et al., 2009; Fall et al., 2011; Zhou and Ren, 2011; Williams et al., 2012), particularly when accompanied by regional-scale Land Use / Land Cover (LULC) changes (Christy et al., 2006).

    In summary, confidence is medium in reported decreases in observed global Diurnal Temperature Range (DTR), noted as a key uncertainty in AR4. Several recent analyses of the raw data on which many previous analyses were based point to the potential for biases that differently affect maximum and minimum average temperatures. However, apparent changes in DTR are much smaller than reported changes in average temperatures and therefore it is virtually certain that maximum and minimum temperatures have increased since 1950.

    ######################################
    The various temperature series disagree.

    It’s active research

      • AR5 is a bother for that hidden end of line problem.
        I get it all the time when I quote it at the Guardian.
        That place also has no re-edit function.

      • In order to avoid outright plagiarism, one needs to ensure one’s “original text” passes a test for plagiarism from one of the many tools available. (Here are links to a few: http://elearningindustry.com/top-10-free-plagiarism-detection-tools-for-teachers)
        If you copy a Word document into a text editor you will remove the meta-code which identifies the original (or last) author.
        In the word-smith industry this process is called “spinning”.

        If you want to simply cut and paste verbatim, it is normally not legally sufficient to mention the original author in a footnote – better to ask permission.

      • Palmer

        “In fairness, it was not his own. Also, he was too lazy to properly reformat the quote he pasted.”

        The author of the dang post is too lazy to read THE SCIENCE and You want to play comment nanny?

        precious.

      • Sorry, Mosh, 50% of science is about presentation and communication.

        If you cannot be bothered to communicate effectively, then don’t blame others if we simply skip your meaningless drivel and regard you as the equivalent of a common-or-garden troll.

        R

    • .

      Rohde et al. (2012) note an apparent reversal since the mid-1980s; with DTR subsequently increasing.

      Which once again coincides with the cooling of the lower stratosphere caused by El Chichon and Mt Pinatubo and the complementary warming of the lower atmosphere.

      … discussed here:
      https://climategrog.wordpress.com/?attachment_id=902

      Did the natural processes that flushed volcanic dust and aerosols also clean up some of the accumulated anthropogenic pollution? (NB: this is not referring to CO2 “pollution”.) Is that the “brightening” in question?

      I suspect it is as much to do with the aerosols destroying ozone, leading to UV “brightening”.

      • That is one thing that sticks out like a sore thumb to me. After the sulfuric acid dissipated following the El Chichon and Pinatubo eruptions, the stratosphere cooled to a lower level than before each eruption, which would indicate that those two eruptions changed the radiative properties of the stratosphere and allowed more solar energy to reach the troposphere than before El Chichon, even to this day. NOAA research indicates that there were changes in stratospheric water vapor content in the 80’s and 90’s; it would seem that this had something to do with the large SO2 injections in each decade. I do not believe anyone has made this connection yet, but there sure is much circumstantial evidence that those eruptions have caused a NET warming effect. And I find it interesting that the phase change in high-low temperature differences (Figure 1 in this article) that occurred at about the same time as the El Chichon eruption is one more correlation implicating these two eruptions as the main source of Global Warming.

        http://www.noaanews.noaa.gov/stories2010/20100128_watervapor.html

        Thanks LT. Here is the last graph from the article I linked to, showing the complementary warming of SH SST: a similar general pattern, but with the rise damped by oceanic thermal inertia.

        The main reason for “the Pause” may be the lack of volcanoes.

      • Mike,

        Yes, it is as if the surface temperature adjusted to the new radiation budget a few years after Pinatubo and leveled off, because there has not been any significant volcanic activity since Pinatubo. The radiosonde data shows that Agung may have had a similar effect as well, but I do not think Agung had as much SO2 as the later eruptions.

    • Steven – I do not believe either that the step increase he mentions has anything to do with either DTR or the greenhouse effect. I date the step increase to 1999, immediately following the super El Nino, and nowhere near 1983. In only three years it raised global temperature by a third of a degree Celsius. It should be clear even to a true believer that there is no way this can be tied to the greenhouse effect. Its importance lies in the fact that it was actually the only warming during the entire satellite era. The temperature rise it created became permanent, and ten years after it happened Hansen woke up to it and declared it to be greenhouse warming. Complete BS, that. But now I am interested in what you make of this DTR collection you listed in your comment. I count authorship from 16 groups. As you point out, “… apparent changes in DTR are much smaller than reported changes in average temperatures and therefore it is virtually certain that maximum and minimum temperatures have increased since 1950.” Why are you so sure the maximum and minimum have increased despite the lack of data? Unless you have a good reason not to, just let the DTR be what it is and stay with the average. I notice also that half a dozen times or so your references confine their observations to 1950 and later. This parallels several statements from the IPCC that human influence on climate becomes visible from 1950 on. Did someone pass the word there about avoiding earlier periods because they cannot be explained by the greenhouse effect? Here I am assuming that you know that the early-century warming from 1910 to 1940 is not greenhouse warming and cannot be explained by the greenhouse theory of Arrhenius. According to much-repeated global warming propaganda, the human contribution to warming began with the start of the industrial age, sometime in the 1850s. If you are only going to claim anthropogenic global warming from 1950 on, you are abandoning the first one hundred years of your claimed anthropogenic global warming.

    • S.M.: It may be active research, but nothing I see supports CO2 as the driving force, even more so when the troposphere data sets are added. As there is nothing wrong with the author’s graphics or interpretation, how are you adding to the discussion by clearly illustrating the lack of consensus?

      By the way, the author’s message regarding “anomalies” is well received, especially when one considers that the past keeps being revised, and therefore the baselines keep changing. Since the warming until the mid-1940s was 70 percent erased, as was the subsequent cooling ending in the ice-age scare, the surface record is FUBAR, and becoming more so as time goes on.

      To further illustrate the author’s cogent message regarding “anomalies,” let us ask a simple question. What was the atmosphere’s (you know, the troposphere, where the vast majority of CO2 is required to have an effect) warmest year? The two satellite data sets are the only tool we have for that, and, despite all S.M.’s wailing about their adjustments, those adjustments are based on strong physics AND observations of and calibration with weather balloons with the best instruments we have, unaffected by UHI, station moves, TOBs, etc. So what was the atmosphere’s warmest year?

      It turns out that 1998 was, and not by a little, but by a LARGE margin, at least ten times larger than the margins behind the alarmists’ “warmest year ever” shouts.

      Whatever is happening with the questionable GMT surface record, it is not CO2 related. The innocent and saintly increase in Mr. CO2 is only feeding about 18 percent of the planet, with no additional water or land required.

  2. On my screen (Firefox/windows) the summary table has the right data columns obscured by right hand column of WUWT twitter. The last fully visible column is 1956. Perhaps some formatting adjustments are needed? It is easy enough to see from the graph the whole point of the exercise, good analysis. Thanks.

    • I get the same with Firefox. However I have now opened it with Edge, the hottest browser on the block (maybe) and it hasn’t made any difference.

  3. “Something I find peculiar about modern climatology is the use of so-called temperature anomalies. While not unheard of in other disciplines, there are usually good reasons, such as for simplifying a Fourier analysis of a time series. ”

    The good reason that CRU and GISS use anomalies is that there methodology requires it to get a better estimate.

    Other approaches don’t need to use anomalies.

    • really….one would think by using anomalies, instead of real temperature numbers, it would just make it easier to change the real temperature numbers….without anyone noticing

      Who would have guessed……

    • Steven Mosher: “The good reason that CRU and GISS use anomalies is that there methodology requires it to get a better estimate.”

      Ummmm…really…

      CRU and GISS eh?

      I would be interested to know by what criteria CRU and GISS determine that their methodology gives a “better” estimate, what their methodology gives a “better” estimate of, and what for purposes this “better” estimate is utilised.

      • Their method is not better. They made up their methods and didn’t test them,

        but they still work ok and anomalies have NOTHING WHATSOEVER to do with the matter.

    • “The good reason that CRU and GISS use anomalies is that there [sic] methodology requires it to get a better estimate.”

      As do the GCMs.

      • “The good reason that CRU and GISS use anomalies is that there methodology requires it to get a better estimate”.

        Did Steven Mosher mean:-

        The good reason that CRU and GISS use anomalies is that their methodology requires it to get a better estimate

        or:-

        The good reason that CRU and GISS use anomalies is that there, methodology requires it to get a better estimate.

        The latter may actually be true!

      • Solomon Green writes “The good reason that CRU and GISS use anomalies is that their methodology requires it to get a better estimate”

        Yes. Principal Component Analysis, for instance but not exclusively, measures the sum of the squares of deviation from a baseline as part of its process. So you first calculate the baseline and then subtract it from every point, giving half the points above the baseline and half below (more or less). Do this for every candidate proxy. Now that everything is normalized, you can extract the correlated proxies. The complaint about the process is that if your baseline is calculated on only a portion of the data, the remaining portion might be all on one side of the baseline, producing a very high “signal” when the squares of the deviations are all added up.

        Now then you could put the baseline back in but it would be a nearly flat line obviously and as others have pointed out, the average Earth temperature, as an absolute, is essentially meaningless.
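        That partial-baseline complaint is easy to verify numerically. A minimal Python sketch (made-up series, not a real proxy), assuming a baseline computed from only the first half of the record:

```python
import random

random.seed(1)

# A made-up "proxy": noise around 0 in the first half of the record,
# noise around 0.5 (a mild shift) in the second half.
first = [random.gauss(0.0, 1.0) for _ in range(50)]
second = [random.gauss(0.5, 1.0) for _ in range(50)]
series = first + second

full_base = sum(series) / len(series)  # baseline from the whole record
half_base = sum(first) / len(first)    # baseline from the first half only

def sum_sq(values, base):
    """Sum of squared deviations from a baseline."""
    return sum((v - base) ** 2 for v in values)

# The sum of squares is minimized at the full-record mean, so centring on
# the partial baseline leaves the second half one-sided and inflates the
# squared-deviation "signal".
print(sum_sq(series, half_base) > sum_sq(series, full_base))  # → True
```

Any baseline other than the full-record mean inflates the sum of squares, which is why the choice of calibration period matters so much.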

    • Interesting. Since the Stefan-Boltzmann law is a fourth-power relation, it would seem that absolute values are required throughout for anything that deals with modeling temperatures.

  4. This looks like an interesting article. I’d like to read it. Sadly the auto-start, auto-scroll ad makes it too annoying. I’m typing this blindly as the ad continues to drag the text insertion box out of view.

  5. Is the BEST data adjusted to eliminate warming biases in the surface temperature readings, eg – UHI, airport rounding, migration of the thermometers, other siting issues?

      • Check for a proxy, or a proxy virus. Using Internet Explorer, turn off “automatically detect proxy”.

    • So perhaps you smarter guys could suggest what would be an appropriate order of polynomial to fit, with proper justification for your choice. Anyone can try to sound clever with vague refs to what they think “Willis” once wrote.

      • That question is answered with an ANOVA test. (Analysis Of Variance). The test determines which terms of a fit are significant, and which are not justified. You can use any level of significance you like. (Wee p-values, lookout). Overfit situations are actually worse than underfit situations. So you actually pay a penalty for overfitting your data.
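The order-selection test described above can be sketched with a partial F-test between nested polynomial fits. Everything below is synthetic — a truly quadratic signal plus noise — so the series, noise level, and any cutoff are illustrative assumptions, not anything from the BEST record:

```python
import numpy as np
from scipy import stats

# Synthetic data: a truly quadratic signal plus noise, so order 2 should be
# the last term the F-test finds significant.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
y = 1.5 + 0.8 * x - 0.05 * x**2 + rng.normal(0.0, 0.5, x.size)

def rss(order):
    """Residual sum of squares of a least-squares polynomial fit."""
    resid = y - np.polyval(np.polyfit(x, y, order), x)
    return float(resid @ resid)

# Partial F-test: does adding one more term significantly reduce the RSS?
for order in range(1, 6):
    rss_lo, rss_hi = rss(order), rss(order + 1)
    dof = x.size - (order + 2)            # residual dof of the larger model
    f = (rss_lo - rss_hi) / (rss_hi / dof)
    p = stats.f.sf(f, 1, dof)
    print(f"order {order} -> {order + 1}: F = {f:8.2f}, p = {p:.3g}")
```

With this seed the step from order 1 to 2 should come out overwhelmingly significant while the higher steps generally do not — the "penalty for overfitting" in practice.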

    • @ Jit + Harold:
      You two are just being picky. The higher order fits are great for making predictions. Using the current BEST dataset, a fifth order fit produces a 7 degree rise by 2040, with a slope of almost 4 deg./decade. This puts the IPCC to shame with their paltry 3 deg/century “Global Warming”.
      And after the next La Nina, the forecast will plummet down to a “Snowball Earth”.
      Well, OK, maybe these things are not good for making predictions, after all.
      But they are great for making *scary* predictions. And that is where the money is.

      No cookie for smoothing the data before the fit. Bad. Fitting does not require smoothing. Do not do that.

      The big trend in the data could be “Global Warming”, or it could be UHI contaminating the ground based dataset to the point of making it worthless. Not to mention the huge changes in the thermometer network in the later part of the data record.

    • 6th order seems reasonable. You have daily, monthly, seasonal/annual and multi-year phenomena. You have doubtless noticed some of the coefficients are small, e.g. 0.000000000016(x^6).

    • Isn’t the choice of an even or odd order polynomial problematic? In simplistic terms an even order polynomial will be either M or W shaped. An odd order polynomial will be either N shaped or the opposite. Sorry in advance if I misunderstand the term “6th order fit”.

      • “Isn’t the choice of an even or odd order polynomial problematic?”

        It is problematic if you run beyond either end of the data. At larger values of X (around + or – 5000) the -(x^6) is going to dominate and at x=0 y is -1.1×10^9 (negative billion or so) giving you a steeply drooped upside-down U shape.

        http://www.statsdirect.com/help/default.htm#regression_and_correlation/polynomial.htm

        It has no predictive skill but can be used to reveal trend at a moment in time.

        I would like to see the entire parameter list in text so it can be pasted into Wolfram Alpha (or Maxima) to replicate the graph.

    • Matthew

      Three good reasons. Firstly, its data is reasonably transparent; secondly, it is recent; and lastly, Mosh generally turns up to answer questions or pose his own.

      tonyb

      • Matthew: It could be. If he can control his inner snark and post more than five words, he can make good contributions, as in this thread.

      • Matthew

        Mosh generally has two modes. Cryptic mosh and expansive mosh. If the latter turns up he generally has a good contribution to make. Whether we agree with it or not is another thing of course but at least expansive mosh lays out the data.

        Tonyb

      • It pretty EFFIN simple

        I post cryptically when I am in traffic or have two seconds to spare.
        I post expansively when I am waiting for my code to compile.

        There is so much wrong with this article that i don’t know where to start.

        he didnt even read the literature on Diurnal Range.

        Come on guys.

      • Hmm, in that case, Mosh, please do not post when you don’t have time to do it right. I have a section of climate books on my shelves and you are a co-author of the one titled “Climategate.”

        I like it when you look good.

      • ladylifegrows:

        Hmm, in that case, Mosh, please do not post when you don’t have time to do it right. I have a section of climate books on my shelves and you are a co-author of the one titled “Climategate.”

        I like it when you look good.

        Have you actually read that book? The quality of it is often barely above that of the comments from Mosher you dislike. I’d prefer cryptic comments to lengthy paragraphs with numerous factual errors, spelling and grammatical mistakes. And that’s without touching on the fact the book has a number of misquotations.

        Other than liking what he had to say in it, I can’t see why people would like that book much more than Mosher’s normal comments. He gets so many things wrong in that book I’d worry a person reading it would come away worse informed than they were before they read it.

        (If you have no idea what I’m talking about, you can see some of my discussion of these problems here.)

      • I’ve learned a bunch from Steve M. over time. We are fortunate indeed for all of the expertise he and other specialists bring to wuwt threads.

      • S.M. says…
        “It pretty EFFIN simple I post cryptically when I am in traffic or have two seconds to spare.
        I post expansively when I am waiting for my code to compile.
        There is so much wrong with this article that i don’t know where to start.
        he didnt even read the literature on Diurnal Range.
        Come on guys.
        ———————————————————————————————————————————–
        All S.M.’s post showed was that the literature cannot explain the diurnal range via CO2. Nothing in it contradicted the author’s message, or if it did, S.M. failed to articulate that. At any rate, the childish method of criticism was not productive.
        https://wattsupwiththat.com/2015/08/11/an-analysis-of-best-data-for-the-question-is-earth-warming-or-cooling/#comment-2006414

  6. An analysis of BEST data for the question: Is Earth Warming or Cooling?

    by Guest Blogger Clyde Spencer, August 11, 2015

    I think the essential question is whether the total energy of the Earth and Atmosphere System (EAS) has increased, decreased, or stayed the same within our ability to measure it.

    It is the energy change and not temperature change that is the physics question.

    John

    • that’s true, but misleading. In a gas parcel, one could measure pressure and volume changes to arrive at the thermal energy state of that parcel. But measuring temperature seems a far better way to minimize measurement errors and uncertainty in determining how fast the molecules in that gas parcel are moving as an indication of thermal energy.
      Thus the practical answer to the physics question is Temperature.

      • As in PV=nRT?

        The energy content of the system can remain the same even as the Pressure, Volume and Temperature variables change. Hence the = sign.

        Good luck with measuring the Earth system. The premise is that n is constant? (What, with the ratios of the constituents of the atmosphere constantly changing?) Ever-changing diurnal, seasonal, and solar input sends perturbations cascading through the system. Accurate prediction?

        No problem, because Computers.

    • joelobryan on August 11, 2015 at 10:50 am said,

      “[. . .]

      Thus the practical answer to the physics question is Temperature.”

      joelobryan,

      If the total energy calculation/estimation of the EAS reasonably requires some temp data, so what?

      If the EAS total energy is not observed to have NOTICEABLY increased by any mechanism (including CO2) then . . . . phfffft!

      John

  7. Interesting article.

    One methodological point: if you are fitting a polynomial, there is no reason to fit it to smoothed data; you should just fit it to the unsmoothed data. (This was done as an afterthought but should have been the original approach.)

    Secondly, since everyone is so interested in rate of change, why is no one studying rate of change? Fit a P5 to dT/dt instead of P6 to T(t). This would reduce the number of smart comments about elephants’ tails.

    Some discussion of how the order of polynomial to be fitted was chosen would be in order.

    Obviously, linear or cubic would result in an upward fit at the end of the data. Maybe P9 would show a final upturn too. It seems that the order matters in what is concluded and should be discussed.

    • To tell the truth, in spite of my earlier comments, the technique does have some merit. A high order fit can be considered as a kind of smoothing, and gives you a mathematical equation you can work with, as well.
      You are right, never smooth before fitting. Your fit is the smoothing.
      “Fit a P5 to dT/dt”
      You are right, would have been nice to see the derivative plots. Polynomials are great, even I can handle them.
      Y = a + bX + cX^2 + dX^3 + eX^4 +++
      dY = b + 2cX + 3dX^2 + 4eX^3 +++

      Grab the polynomial, take the derivative, and plot it up.
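The "grab the polynomial, take the derivative" recipe above can be sketched with numpy; the series below is entirely made up (a trend plus a cycle plus noise), standing in only for the shape of the problem:

```python
import numpy as np

# Fit a 6th-order polynomial to a synthetic temperature-like series, then
# differentiate the fitted polynomial analytically with np.polyder.
rng = np.random.default_rng(1)
years = np.arange(1850, 2015, dtype=float)
y = 0.005 * (years - 1850) + 0.3 * np.sin((years - 1850) / 10.0) \
    + rng.normal(0, 0.1, years.size)

t = years - years.mean()                 # center x to keep the fit well conditioned
coeffs = np.polyfit(t, y, 6)             # Y = a + bX + cX^2 + ... (highest power first)
dcoeffs = np.polyder(coeffs)             # dY = b + 2cX + 3dX^2 + ...
end_slope = np.polyval(dcoeffs, t[-1])   # fitted rate of change at the last year
print(f"fitted slope at {int(years[-1])}: {end_slope:+.4f} deg/yr")
```

Plotting `np.polyval(dcoeffs, t)` over the whole range gives the derivative plot the comment asks for.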

  8. I’ve always wondered about the use of the word anomaly myself – but admit to not being very knowledgeable about how it is used in various fields. Isn’t it basically a deviation from an expected result or trend? How can you even have an anomaly with respect to an average or a mean? An anomaly would be a lack of any deviations at all, not the deviations themselves, wouldn’t it?

    Hoping to be educated…

    • TC, in climate science it means the difference from some period average, usually some 30 years. There are legitimate reasons, such as washing out the temperature lapse rate with altitude to look at regional change in mountainous terrain. Another example would be washing out the effect of Lake Michigan between Wisconsin and Michigan given prevailing westerlies. There are also illegitimate reasons, such as hiding the fact that climate models vary in ‘real’ (absolute) temperature by over 3C, so cannot properly agree on key real-temperature processes like evaporation, ice formation…

      • “legitimate reasons such as washing out the temperature lapse rate with altitude to look at regional change in mountainous terrain”
        Ahh, but are there any corrections for when cold air sinks leaving warm air at altitude (temperature inversion)?

      • It also eliminates “heat” (noun) flows, which require a Temperature difference between places.

        g

    • Re use of anomalies in mineral exploration: In grassroots mining exploration, geochemical surveys are done in which mainly stream sediment samples and even water samples are taken at intervals covering every significant stream and tributary in the area of interest, and these are analyzed for a suite of metals (mainly) that you believe may occur as hidden ore deposits. Because you are interested only in anomalous results, you calculate, from a frequency distribution diagram of the data, an upper limit to be considered “background” copper, zinc, gold, etc., so you know what magnitude you should be getting excited about. Ranges of the anomalous data are given different colors on the map. ‘Anomalous’ tributaries of streams are then followed up and sampled in detail to narrow down the area, etc. Then geophysical surveys are done on the ground (electromagnetic, magnetic, resistivity, radiometric, gravity, etc. – some of these can also be airborne). Once again geophysical ‘anomalies’ are identified. If they coincide with geochemical, we get excited.

      I find the color contouring of temperature anomalies on the globe to be familiar, although a geologist would choose more differentiated coloring schemes to make sense of the anomalies. Twenty shades of red used in climate science is a giveaway of the intention to give the globe a look of being on fire. The anomaly in climate science can also be very misleading, especially when you are showing a red hot spot in Antarctica that is still -20C.
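The background/anomaly cutoff workflow described above can be sketched as a simple percentile threshold on the assay values; the copper concentrations and the 97.5% cutoff below are invented for illustration:

```python
import numpy as np

# Synthetic stream-sediment assays: lognormal background copper with a few
# enriched samples mixed in (all values made up, in ppm).
rng = np.random.default_rng(5)
background = rng.lognormal(mean=3.0, sigma=0.4, size=500)
enriched = rng.lognormal(mean=5.0, sigma=0.3, size=10)
cu = np.concatenate([background, enriched])

# One common choice of "background" cutoff from the value distribution;
# anything above it is flagged for follow-up sampling.
threshold = np.percentile(cu, 97.5)
anomalous = cu[cu > threshold]
print(f"threshold: {threshold:.1f} ppm, {anomalous.size} anomalous samples")
```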

      • Yes, that is the definition and usage I have always understood. My problem was with “anomalies over time”. For instance we all know that fashions in clothing change with time. I guess you could average out skirt lengths over the last 50 years and call the regular fluctuations “anomalies”, but by doing so I think you’d be introducing a judgemental element into the mix.

      • On the other hand, if you had an average of skirt lengths in different countries over time, then the “anomalies” in different regions might tell you something about local cultural differences. I can understand the adoption of the term in climate science, though I might have used a different one myself.

    • I have to agree.
      It’s the changes in the weather patterns that are the best clues for cooling or warming.
      Try to understand what they were doing during the ice age and then see if these patterns are turning up more often or not.

  9. Furthermore, it has been claimed that warming in the Arctic is at least twice the rate of the rest of the globe …

    My reply – which is evidence against GHG effects for the temperature increase last century.

    GHG effects would affect the temperatures in the lower latitudes more.

      • ?
        I thought CO2 theory postulates that the polar regions would be affected more due to a drier atmosphere?

    • Well, I don’t think it is evidence of anything.

      It’s simply common sense.

      Consider a TOA TSI of say 1366 W/m^2

      For all practical purposes, the disk at Earth’s orbit that is the same diameter as the Earth is uniformly illuminated by the sun.

      For a presumed spherical earth, and a clear sky (simplification) the spherical surface irradiance varies as cosine of the zenith angle.

      This results in the polar regions having an irradiance that is a fixed fraction of the TSI, for any zenith angle.

      So if TSI varies 10%, a given polar location at some non zero zenith angle also will see a 10% change in solar irradiance.

      BUT! The polar regions are much colder than the zenith point, and therefore their rate of EM radiation, assuming near-BB conditions, is much lower. The total radiant emittance varies as T^4, so the radiant emittance variation is quite non-linear, and the poles radiate very little.

      A fixed percentage change in their LSI (local solar insolation) will change the polar temperature much more than the same percentage changes the tropical temperature.

      On top of that, a lot of the energy that lands in the tropics is pumped to the polar regions by the ocean and atmospheric currents, and that gives the polar regions an even more difficult task of trying to cool.

      Try imagining that the poles were at 1 K and the equator was at 300 K, and their solar input powers changed by x%. The poles would heat up much faster for a 1% increase in TSI than the tropics would.

      Of course clouds and other factors complicate the problem; but it still doesn’t change the fact that it takes a lot more energy to change the temperature of a hot body than to change the temperature of a cold body by the same delta temperature (anomaly).

      g
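The closing point — that it takes far more power to hold a warm body 1 K warmer than a cold one — follows from differentiating the Stefan-Boltzmann law: dP = 4·σ·T³·ΔT. A quick check with illustrative temperatures (the labels are loose stand-ins, not measured values):

```python
# Extra radiated flux needed to sustain a +1 K warmer black-body surface,
# from dP = 4 * sigma * T^3 * dT. Temperatures are illustrative only.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def extra_flux_per_kelvin(t_kelvin):
    return 4.0 * SIGMA * t_kelvin**3

for label, t in [("polar (230 K)", 230.0),
                 ("temperate (288 K)", 288.0),
                 ("tropical (303 K)", 303.0)]:
    print(f"{label}: {extra_flux_per_kelvin(t):.2f} W/m^2 per +1 K")
```

The T³ factor means the same 1 K increment costs roughly twice as much flux in the tropics as at the poles.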

  10. The following is from: http://www.cru.uea.ac.uk/cru/data/temperature/#faq5

    Which helps in understanding why global temperature averages should not be used…..

    “Why are the temperatures expressed as anomalies from 1961-90?

    Stations on land are at different elevations, and different countries measure average monthly temperatures using different methods and formulae. To avoid biases that could result from these problems, monthly average temperatures are reduced to anomalies from the period with best coverage (1961-90). For stations to be used, an estimate of the base period average must be calculated. Because many stations do not have complete records for the 1961-90 period several methods have been developed to estimate 1961-90 averages from neighbouring records or using other sources of data (see more discussion on this and other points in Jones et al., 2012). Over the oceans, where observations are generally made from mobile platforms, it is impossible to assemble long series of actual temperatures for fixed points. However it is possible to interpolate historical data to create spatially complete reference climatologies (averages for 1961-90) so that individual observations can be compared with a local normal for the given day of the year (more discussion in Kennedy et al., 2011).”
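The reduction to anomalies the FAQ describes amounts to subtracting a per-calendar-month 1961–90 average from each station series. A minimal sketch on a synthetic station record (the seasonal cycle, trend, and noise below are invented):

```python
import numpy as np

# Synthetic monthly station temperatures, 1950-2014: a seasonal cycle,
# a small trend, and noise. All values are made up.
rng = np.random.default_rng(4)
years = np.arange(1950, 2015)
monthly = 10 + 8 * np.sin(2 * np.pi * np.arange(12) / 12)   # seasonal cycle, C
temps = monthly + 0.01 * (years[:, None] - 1950) + rng.normal(0, 0.5, (years.size, 12))

base = (years >= 1961) & (years <= 1990)
climatology = temps[base].mean(axis=0)     # 1961-90 average for each calendar month
anomalies = temps - climatology            # seasonal cycle and station offset removed
print("mean anomaly over 1961-90 (should be ~0):", anomalies[base].mean())
```

Because each station is measured against its own base-period climatology, elevation and instrumentation offsets drop out, which is the bias-avoidance point the FAQ is making.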

  11. The author may not have fully appreciated Karl’s sleight of hand in adjusting buoy temperatures upwards. The fact that the proportion of data which comes from buoys rather than ship engine intakes increases over time creates a positive slope out of just this upward adjustment. This is even though the individual series are pretty well flat over the past 15 odd years. Simple, but effective.

    • How can you adjust the Argo floats since 2003 based on ship intakes?

      The Argo floats should be more numerous and accurate.

      Any alteration to the post 2003 ocean temperatures smacks of dishonesty.

      You can use one trend (intakes), you can use the Argos trend, you can use the average of the trends. Using intakes to adjust Argos is just crazy. The two series have about the same trend but the intakes have more variability and in theory should be less complete and accurate.

      • PA: “Using intakes to adjust Argos is just crazy.”

        I think the word you’re looking for is “desperate”.

  12. The remarks about DTR intrigue me. In ordinary daily US Midwest experience, temperatures run from overnight lows to afternoon highs in a range roughly ten times the range of decade-to-decade change in the global annual average. That is, within 24 hours immediate temperatures vary from 60 to 100 degrees, but over 24 years the gridded, averaged, infilled, homogenized global temperatures vary from something like 57 to maybe as high as 61 degrees.

  13. Dear Mr. Spencer,

    I hope you get to reading this.

    The assumption by the CRU/UEA that all SSTs taken after c. 1940 were from engine-room (ER) intakes is quite wrong.

    I spent 17 years as a Met Office VO doing the weather reports the CRU relies upon and I can assure you they were 99.9% taken with canvas buckets or the specialised bucket with a thermometer inside supplied by the Met Office.

    Recently, I conducted a survey on a MN web site and 100% of the replies confirmed this to be the usual case.

    If you wish to contact me further, please indicate.

    Anybody reading this who can contact Mr. Spencer , please pass this on.

  14. “Another issue is that the authors are free to choose whatever base period they want, which may not be the same as others, and it makes it difficult to compare similar analyses. The psychological impression conveyed is that (recent) data points above the baseline are extraordinary. Lastly, the use of anomalies tends to influence the subjective impression of the magnitude of changes because very small changes are scaled over the full vertical range of the graph. See Figure 2 below, which shows actual temperatures, for a comparison to the anomalies that you are used to seeing in the literature.”

    1. You are not FREE to choose any base period you want if you are doing CRU or GISS type methodologies. The anomaly period is selected, generally speaking, to MAXIMIZE the number of reporting stations. Since both use GHCN, and since that data falls off after 1990 or so, selecting a period after 1990 will create some noise issues. So GISS looks at roughly the period where global stations are at a max, and CRU picks a period in which SH stations are near their max.
    If you use a different methodology then you can avoid anomalies altogether or pick different periods.
    This is a NON ISSUE in the large picture.

    2. I’ve never had the psychological impression that the anomalies are extraordinary. ESP fail.

    3. You can scale temperatures just as “deceptively”, but only the innumerate are “fooled”. We care more about trend, which doesn’t change if you change scales. Being a chart nanny is not very becoming.

    • 2. I’ve never had the psychological impression that the anomalies are extraordinary. ESP fail.

      Has nothing to do with psychology or ESP; it has everything to do with physics. Anomalies are of use when they vary linearly with the metric of interest. That is not true in the climate debate. The metric of interest is the effect of a doubling of CO2, computed at 3.7 W/m2, and what effect that has on global temperatures.

      The problem being that neither temperature nor anomalies derived from temperature vary linearly with W/m2. The Stefan-Boltzmann law is expressed as:

      P = 5.67×10^-8 × T^4
      P in W/m2
      T in kelvin

      Given that P varies with the 4th power of T, averaging an anomaly of 1 degree from a temperature (in, say, the Arctic) of -40 C with a 1 degree anomaly from an equatorial desert temperature of +40 C is meaningless. The determination of the temperature data sets such as BEST, and of the modeling community itself, to simply ignore this fundamental and widely known aspect of the physics is as disturbing as the exaggeration of the results themselves.

      You’ve all got great big computer systems. Would it really be all that hard to raise the temperature data to the 4th power and then average it? The result would then have some use, unlike either averaged temperatures or averaged anomalies derived from temperatures.
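The point that equal anomalies imply very different flux changes is easy to check numerically; the -40 C and +40 C base temperatures are the commenter's own examples, and the black-body assumption is the same simplification used above:

```python
# Flux change implied by the same +1 K anomaly at two very different base
# temperatures, via the Stefan-Boltzmann law for an ideal black body.
SIGMA = 5.67e-8  # W m^-2 K^-4

def flux(t):
    return SIGMA * t**4

arctic, desert = 233.0, 313.0   # -40 C and +40 C in kelvin
d_arctic = flux(arctic + 1) - flux(arctic)
d_desert = flux(desert + 1) - flux(desert)
print(f"+1 K at -40 C: {d_arctic:.2f} W/m^2")
print(f"+1 K at +40 C: {d_desert:.2f} W/m^2")
print(f"ratio: {d_desert / d_arctic:.2f}")   # same anomaly, very different energy
```

The desert's 1 K anomaly corresponds to roughly 2.4 times the flux change of the Arctic's, which is the sense in which a simple average of the two anomalies loses physical meaning.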

      • His CLAIM :”The psychological impression conveyed is that (recent) data points above the baseline are extraordinary.”

        My rebuttal : ” I’ve never had the psychological impression that the anomalies are extraordinary. ESP fail.”

        you have ZERO clue why anomalies are used.

        Hint. We don’t use them.

  15. “I take exception to the practice of conflating Sea Surface Temperatures (SST) with land air temperatures. There are several issues with this practice. While a weak excuse is, there is a strong correlation between SST and nighttime air temperatures, that is hardly justified with modern instrumentation. ”

    1. There is no “issue” with the practice. Everyone who works with the data understands that it
    is an INDEX and not a temperature.
    2. The Historical reasons for doing it ARE NOT the correlation. MAT data (Marine Air Temperature) is a nightmare. SST is only a bad dream. The only way to solve the MAT problem would be to use nighttime MAT, but then you are tossing out half your data and you’d be combining SAT (min and max) with MAT nighttime! So you’d have a similar problem.
    3. Modern instruments DO NOT SOLVE the historical problem. The problem is ‘solved’ by making tough choices and explaining them.

    4. We use “indices” in many areas, especially those where the trend is important.

    I encourage folks to look at all the data in isolation first. Start with ICOADS

    • So a difficult choice leads to larger error bars. The satellite records stand alone. 1998 was, far and away, the warmest year of the modern record. Whatever is happening to the surface GMT, it is not due to CO2.

      ——————————————–
      SM says… “Everyone who works with the data understands that it
      is an INDEX and not a temperature.”
      ==================================
      Really, so all those reporters quoting scientists about the warmest year ever understand this?

  16. “However, we don’t average subsurface ground-temperatures with land air-temperatures! Why should we average SSTs with land air-temperatures? It is a classic example of comparing apples and oranges. SSTs are of interest and provide climate insights, but they should not be averaged with air temperatures!

    1. Why should we average the two? If you gave me a blank sheet of paper I wouldn’t. Historically, Hansen and Jones wanted a “global” answer, and the processing of SST was easier than MAT. They called it an index. So for historical reasons people continue to compute this. At BE we considered doing a MAT and SAT metric. The historical precedent weighed heavily against that decision. That is, people will ask “How does MAT & SAT compare to SST and SAT?” Well, they are not very different, so doing MAT & SAT isn’t very interesting. So nobody is telling you NOT to use MAT and SAT; you are free to do so and get published.

    2. The insight is limited but it exists. The index is increasing. Interested people will then dive down into details and look at all the components.

    • So for historical reasons people continue to compute this.
      The historical precedent weighed heavily…

      Is it more important to be consistent with historical precedent? Or to use the most informative approach?

      I’m not feeling well. After consulting with historical precedent, I’m off to get myself a blood letting.

      • False choice.

        To compare to other indexes, something you will always be asked to do, you construct an index.

        So it’s not do one or the other. Like I said, do BOTH.

        And you will see that there is nothing much to write home about.

        It’s getting warmer whether you

        a) just look at SAT
        b) just look at SST
        c) just look at MAT
        d) look at any combination

    • SM, yet your post completely misses the author’s reasons for not using SST, which, by the way, I do not entirely agree with.

      His point is that SST is just as likely to be a function of stirring the pot of water in his illustration, and the long residence time of energy emanating from the ocean’s massive thermal capacity can have an effect not relevant to CO2 warming. In this, a few hundredths of a degree change here or there is meaningless in an otherwise flat trend. (He is correct about this.)

      Indeed, the current very high SST is, in conjunction with the satellites, evidence against CAGW.

  17. The error of the derivative of a high order polynomial fit is huge, and the result for end slope is a matter of chance. I did the same analyses using polynomials of order 4,5,6. The end slopes for T_max were 0.070, 0.049 and -0.012 °C/year, respectively.

    The results are meaningless.
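The sensitivity of the end slope to the chosen order is easy to reproduce on synthetic data; the series below is invented (trend plus multidecadal cycle plus noise), so only the qualitative behavior — not the particular slopes — matters:

```python
import numpy as np

# Synthetic temperature-like series standing in for a long annual record.
rng = np.random.default_rng(2)
t = np.arange(1850, 2015, dtype=float)
t -= t.mean()                            # center for numerical conditioning
y = 0.004 * t + 0.2 * np.sin(2 * np.pi * t / 65.0) + rng.normal(0, 0.15, t.size)

# End slope (derivative at the last point) for fits of order 4, 5, and 6.
end_slopes = {}
for order in (4, 5, 6):
    d = np.polyder(np.polyfit(t, y, order))
    end_slopes[order] = np.polyval(d, t[-1])
    print(f"order {order}: end slope = {end_slopes[order]:+.4f} deg/yr")
```

The three end slopes typically disagree, sometimes in sign, which is the commenter's point about the result being a matter of chance.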

    • Thanks for doing this exercise. It does not surprise me that the end slopes are different depending on the order of the polynomial. 40 years ago in my Applied Mathematics 2 lectures I was told to be careful when using polynomial fits but I can’t remember why. Your checks now seem to explain why.

    • Maybe your results look meaningless because 4th and 5th degree polynomials aren’t adequate. What sort of correlations did you get for those? But yes, it’s true…endpoint treatment raises interesting issues. We’ve seen that before, such as here https://wattsupwiththat.com/2013/03/30/dr-michael-mann-smooth-operator/.

      Ditto for errors that are so huge that they render the results meaningless…not that they are necessarily treated as such, however (e.g., the “Hockey Stick”).

      • I used unsmoothed data, so my R2’s are lower. For orders 4,5,6 I get 0.462, 0.466 and 0.479, calculated as fraction of explained variance (Tmax).

        But even with order 6, the end (2014) slope jumps around with the start point of data. Starting 1850, I get .010. This goes to -0.01 if I start 1860, -0.012 for 1870, and -0.060 °C if I start 1890. There’s no consistency.

      • Smoothing vs unsmoothed seems likely to alter the results. As I mentioned, the treatment of endpoints has been an issue before.

        I have little doubt that a 6th degree polynomial is just an exercise in curve-(over)fitting. But I think plenty of published reconstructions of past temperatures have issues with endpoints, start-dates (the beginning of curve-fitting here, the beginning of the calibration period in reconstructions), etc, as well. It’s easy and obvious to take shots here…less obvious when you’ve got a Mann-o-matic that has turned everything into a fine mess. http://climateaudit.org/2007/06/09/mannomatic-smoothing-and-pinned-end-points/

      • Polynomial fits can be very good in the range of the fit, but they are absolutely useless beyond it. The derivative will be useless close to the end. Trigonometric functions are approximated by a Taylor series. The more terms, the better the fit. But computer functions don’t use the Taylor series, because it converges too slowly. They use hyper-optimised polynomials that work rapidly and accurately within a certain range. I once tested one of these algorithms outside its range. Immediately beyond the final point within the range, the result careened wildly into vastly wrong values. I suspect that a fit to existing data will do much the same.
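The blow-up beyond the fitted range is easy to demonstrate: here a 6th-order least-squares fit to sin(x) over one period is evaluated half a period past the data.

```python
import numpy as np

# Fit sin(x) on [0, 2*pi] with a 6th-order polynomial: excellent inside the
# range, wildly wrong immediately outside it.
x = np.linspace(0.0, 2.0 * np.pi, 100)
coeffs = np.polyfit(x, np.sin(x), 6)

inside = float(np.max(np.abs(np.polyval(coeffs, x) - np.sin(x))))
x_out = 3.0 * np.pi                       # half a period beyond the data
outside = float(abs(np.polyval(coeffs, x_out) - np.sin(x_out)))
print(f"max error inside fit range: {inside:.4f}")
print(f"error at x = 3*pi:          {outside:.2f}")
```

The error outside the range is orders of magnitude larger than inside it, which is why the end slope (and any extrapolation) of such fits deserves no trust.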

      • If I’m not mistaken (have been in the past) there is an inherent problem in doing polynomial fits, unless the actual physics is of that form.

        The problem arises if the signal (data stream) contains any cyclic behavior.

        The simplest cyclic behavior would be of the form: y = Asin(x) +Bcos(x)

        If you put this in the form of a 6th order polynomial, I suspect that you create frequency terms up to (6x).

        This signal then has a bandwidth of at least (6x), so you have to sample the function at six times the cyclic frequency in order to not generate aliasing noise. (The actual sample rate would be 12x minimum.)

        I prefer to use only the actual data samples themselves. They contain all of the information you will ever have.

        g

  18. As regards the Karl paper.

    He uses two data sets — 1) from ships (data points decreasing over time) — 2) from buoys (data points increasing over time). Individually neither set shows a slope.

    An adjustment is made to one data set to supposedly enable the combining of the two sets. When combined an upward slope suddenly appears! WOW!!!

    A logical mind would suspect that it was the adjustment that created the slope.

    You don’t have to do the math, this can be mechanically demonstrated. Just repeat Karl’s work exactly except for changing the size of the adjustment up or down. The slope of the combined data will vary with the change in the size of the adjustment.

    As in so many things in “climate science” the increase in temperature is due to the adjustment used.

    Eugene WR Gallun
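The mechanical demonstration proposed above is a few lines of code: take two individually flat series with a fixed offset, let the mix shift over time, and watch the blended trend track the size of the adjustment. The 0.12 C offset is the widely cited ship-buoy difference; the series lengths and mix fractions here are invented:

```python
import numpy as np

n = 180                                 # months (arbitrary)
ships = np.zeros(n)                     # flat series, no trend
buoys = np.zeros(n) - 0.12              # buoys read cooler; also flat

# Buoy share of observations grows over time (made-up fractions).
frac_buoy = np.linspace(0.1, 0.9, n)

def blended_trend(adjustment):
    """Adjust buoys upward by `adjustment` C, blend by the changing mix,
    and return the linear trend of the blend in C/decade."""
    blend = (1 - frac_buoy) * ships + frac_buoy * (buoys + adjustment)
    slope_per_month = np.polyfit(np.arange(n), blend, 1)[0]
    return slope_per_month * 120

for adj in (0.0, 0.12, 0.24):
    print(f"adjustment {adj:+.2f} C -> blended trend {blended_trend(adj):+.4f} C/decade")
```

Neither input series has any trend, yet the blended trend moves one-for-one with the chosen adjustment size — exactly the point being made.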

  19. There are an enormous number of claims in this article, but your summary has only two main points. Without relating to the first point, the second is:
    “the analysis suggests we are currently in a cooling phase, not just a plateau”
    I cannot see that your analysis suggests that.

    And if it is based on trends alone, whether by first-order or 6th-order curves, I would like to object that trends can be deceiving. What is the mechanism behind the trend? What is your hypothesis? Which predictions have you deduced from your hypothesis? Which tests has your hypothesis been exposed to? May I remind you about the scientific method:

    1 A hypothesis is proposed. This is not justified and is tentative.
    2 Testable predictions are deduced from the hypothesis and previously accepted statements.
    3 We observe whether the predictions are true.
    4 If the predictions are false, we conclude the theory is false.
    5 If the predictions are true, that doesn’t show the theory is true, or even probably true. All we can say is that the theory has so far passed the tests of it.
    (Courtesy to Patric Maher)

    I cannot immediately see that your summary is supported by the information you provide.

  20. I’m a proud denier (that the world will end, it will not, climate always changes and people will adapt (some easier than others)) based on all the fiddling with data that is occurring, but I’d like to see the sample size for each year for this data set. Really how many accurate temperature sources (even calibrated) were there in the 1860s? Also you are using two and three significant digits. Were the stations recording temps in the late 1800s able to record to 1/1000s of C accurately? Based on the below, the answer is no.

    “The 20th century also saw the refinement of the temperature scale. Temperatures can now be measured to within about 0.001°C over a wide range, although it is not a simple task.”

    http://www.capgo.com/Resources/InterestStories/TempHistory/TempHistory.html

  21. The science and math are interesting to a great many, and I enjoy reading the opinions. But in the end, we will adapt or die.

    It seems like counting angels on the head of a pin.

    But thanks to everyone for the philosophy.

  22. There is an inherent assumption made in the land temperature data (GISS, CRUTEM4, BEST). The assumption is that any global warming signal can be quantified by first subtracting out seasonal normal temperatures individually for each station. Weather stations are located at very different altitudes, ranging from sea level to 4000 m above sea level. Consequently absolute temperatures are wildly different, whereas anomalies are assumed to be similar. Is this always true? Well, no, it probably isn’t, since for example the temperature response to the same forcing is temperature-dependent:

    DT1 = (T2/T1)^3 · DT2   (for equal forcings, from DF = 4·sigma·T^3·DT)

    This is one reason why the arctic warms faster than the tropics.
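The Stefan-Boltzmann scaling behind this point is easy to check numerically. A minimal sketch, assuming illustrative absolute temperatures of 255 K (Arctic) and 300 K (tropics) that are not taken from the comment:

```python
# From dF = 4*sigma*T^3*dT, equal radiative forcings at two different
# absolute temperatures T1 and T2 imply dT1 = (T2/T1)^3 * dT2, so the
# colder region warms more for the same forcing.

def response_ratio(t_cold, t_warm):
    """Warming at t_cold divided by warming at t_warm, equal forcing."""
    return (t_warm / t_cold) ** 3

# Illustrative (assumed) values: Arctic ~255 K, tropics ~300 K.
ratio = response_ratio(255.0, 300.0)
print(round(ratio, 2))  # -> 1.63, i.e. ~63% more warming in the Arctic
```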

    In order to make a global (anomaly) average, stations are divided according to a geographic grid, typically 5×5 degrees (CRUTEM4). Even within a single grid cell stations can differ in altitude by 2000 m. Yet their anomalies are simply averaged together. Likewise the global average is a simple weighted average of the grid anomalies.
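The gridding just described can be sketched in a few lines of Python. This is an illustrative reconstruction in the style of CRUTEM (the station tuples are invented and the cosine-latitude cell weighting is an assumption of the sketch), not anyone's production code:

```python
import math
from collections import defaultdict

def grid_average(stations, cell_deg=5.0):
    """Average station anomalies within lat/lon cells, then form a
    cosine-latitude-weighted global mean of the cell averages.
    `stations` is a list of (lat, lon, anomaly) tuples."""
    cells = defaultdict(list)
    for lat, lon, anom in stations:
        key = (math.floor(lat / cell_deg), math.floor(lon / cell_deg))
        cells[key].append(anom)

    num = den = 0.0
    for (ilat, _), anoms in cells.items():
        # Weight each cell by the cosine of its centre latitude, since
        # cells cover less area towards the poles.
        w = math.cos(math.radians((ilat + 0.5) * cell_deg))
        num += w * sum(anoms) / len(anoms)
        den += w
    return num / den

# Two stations in one cell (any altitude difference between them is
# invisible once anomalies are taken) plus one high-latitude station.
data = [(51.2, 0.4, 0.3), (52.9, 1.1, 0.5), (71.0, 25.0, 1.2)]
print(round(grid_average(data), 3))
```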

    The normalisation period (e.g. 1961-1990) pivots together the zero values for all stations. This assumes that all stations warm and cool in unison, like synchronised swimming. Is this true? Do some locations warm earlier than others? Just one reason why they might is the UHI.

    Most people simply assume that UHI makes cities warmer than the surrounding countryside. This is true, but once a city has fully developed its temperature anomaly tracks that of the countryside. It is only during a period of rapid growth that the anomaly changes. The choice of normalisation period necessarily makes most cities appear cooler in the past. The overall UHI effect in CRUTEM4 is mostly to cool the past. The BEST result showed that the UHI apparently had no effect on global warming. This is only true after 1940. Many of the early stations were situated in towns which saw rapid urban growth during the early 20th century.

    A good example of this effect is Sao Paulo, which grew from a small village into one of the world’s largest cities in under 100 years. By 1961 it was already ‘warm’. The normalisation subtracts this warmth off the anomaly, thereby making Sao Paulo appear colder than its surroundings in the early 20th century. Sao Paulo doesn’t measure global warming at all; it measures the effect of rapid urbanisation. There are many other examples: Shanghai, Beijing, Moscow, etc.

    The red curves are NCDC after their automatic corrections. Most of these corrections, due to siting changes and instrument changes, have been generated automatically by detecting temperature ‘shifts’. In general these also have the effect of cooling the past, where the uncertainties are anyway largest. I estimate that 0.2-0.3C of the observed warming from 1850-1940 is probably spurious and unrelated to CO2.

    • “There is an inherent assumption made in the land temperature data (GISS, CRUTEM4, BEST). The assumption is that any global warming signal can be quantified by first subtracting out seasonal normal temperatures individually for each station.”

      Except we don’t do that. We don’t work with anomalies.

      We don’t average temperatures.

      We don’t subtract out seasonal norms.

      We actually did what many skeptics here and at CA suggested.

      Go figure.

      On UHI, here is the clue:

      Use only rural stations; you get the same answer.

      Here is the next clue: the land is 30% of the total.

      • Steve,

        Of course you work with anomalies. It’s just that you derive them in a different (more sophisticated?) way. You minimise the weighted sum of squares of each station’s offset to nearby stations, for the years that the station is present, minus a global ‘anomaly’. The fit will move the global anomaly result to be relative to the time period which statistically has the most stations. The station anomalies are derived relative to the mean regional temperature, and the global anomaly relative to the time period which has the maximum coverage and number of stations. This period is always somewhere between 1951-1990. That is why the BEST results can be compared directly to Hadcrut4 and GISS, offset a bit, and why they agree so well with each other. Most recent stations overlap and are based on GHCN V3.

        Of course you average temperatures – that is exactly what your fit ends up doing with the regional mean.

        You also subtract out seasonal norms because the fitted regional average is essentially subtracted out each month. The yearly average then of course removes any remaining monthly ‘deltas’.

        Regarding UHI: I may be wrong, but I think you used MODIS to determine rural stations and then got no change in the fitted global anomaly. Well, you could also do a similar exercise and drop all stations at least 500 m higher in altitude than surrounding stations; you would also get no change. What matters is not the altitude or the size of a city but the rate of urbanisation. Only this changes the global (land) temperature anomaly. Most urbanisation occurred before 1990.

        Yes, it is true that land is only 30% of the surface, so in that sense the UHI effect is small at the global level. However, the majority of the population lives near cities.

    • I agree with Clive Best that at least 0.2C to 0.3C of the warming trend is caused by spurious warming adjustments.

      But that was before Tom Karl’s latest manipulation which added 0.12C to the sea surface temperatures (or about 0.1C to the global average including land).

      So, now the spurious/unjustified adjustments are 0.3C to 0.4C, and this number is increasing every day that the NCDC runs its adjustment algorithms behind closed doors, with Tom Karl leaning over the shoulder of some analyst running the code and making individual station fudges.

      Best and Mosher just take the station records around the world and remove ALL cooling periods, thereby creating an artificial set of numbers that cannot be described as a temperature record but is more of a straight-line-going-up simulation: 20 warming adjustments/station breaks for every 1 cooling adjustment/station break.

      • “So, now the spurious/unjustified adjustments are 0.3C to 0.4C …”

        Back up your assertions with data. You are lacking units and time periods as well as code and data sources.

        Here’s a result of the changes in trend (degC/decade) from every GISS release I and others have been able to obtain back to 2005. What’s shown is degC/decade from 1880 to 2005.

        So it’s a 0.62 degC rise over 1880-2005 in the 2005 release, and a 0.75 degC rise over 1880-2005 in the 2015 release: a change of 0.13 degC over those 12.5 decades. That is far less than your number, but still significant.

        If you look at the change since 1950, the supposed* start of the ACO2 signal, it looks like this:

        So that’s a 0.55 degC rise over 1950-2005 in the 2005 release and a 0.71 degC rise over the same period in the 2015.6 release: a 0.16 degC difference. If you are projecting trends, however, that is a pretty significant increase of 0.3 degC/century.
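The arithmetic in the two paragraphs above can be made explicit in a few lines (all degC figures are the ones quoted in this comment):

```python
# Differences between the 2005 and 2015 GISS releases, as quoted above.
rise_1880_2005_old = 0.62   # degC over 1880-2005, 2005 release
rise_1880_2005_new = 0.75   # degC over 1880-2005, 2015 release
diff_full = round(rise_1880_2005_new - rise_1880_2005_old, 2)

rise_1950_2005_old = 0.55   # degC over 1950-2005, 2005 release
rise_1950_2005_new = 0.71   # degC over 1950-2005, 2015.6 release
diff_1950 = round(rise_1950_2005_new - rise_1950_2005_old, 2)

# The 0.16 degC shift sits in a 55-year window; per century that is:
per_century = round(diff_1950 / 55.0 * 100.0, 1)

print(diff_full, diff_1950, per_century)  # -> 0.13 0.16 0.3
```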

        So yes, they are manipulating the data. But the change from 2005 to 2015, which I can find original data for, is less than what you are saying, though I’m just guessing, as you didn’t provide units or time periods.

        I keep looking for the source of the hockey-stick data-manipulation trends, but nobody will provide it in a consumable manner. In the age of hyperlinks that’s just silly. Please provide a link if you have it.

        Peter

        Source: https://www.dropbox.com/sh/qi9h70otb2p9j9h/AABPE2Uf-s8xe8iGGr1BhQULa?dl=0

        * Hard to find a definitive source for the start of the signal. Then again, how much of the signal is anthropogenic has no definitive source either; it’s likely not accurately measurable…

      • Peter,

        Here is the data which shows how the various corrections made to GHCN stations over the years have moved temperature anomalies to show ever steeper warming. The green points are GHCN V1 from 1990. The blue points are GHCN V3U (raw values). The red points are GHCN V3C (corrected values). All station data from each set have been processed in exactly the same way, using a normalisation period of 1961-1990. GISS used to apply its own corrections but now simply uses the V3C corrections.

        As you can clearly see, the effect of corrections has been to increase net warming since the 19th century by at least 0.3C.

      • Here is the data which shows how the various corrections made to GHCN stations over the years have moved temperature anomalies to show ever steeper warming. The green points are GHCN V1 from 1990

        I see a graph. Where’s the data? Where’s the source code for generating the graph?

      • Data is here https://www.ncdc.noaa.gov/ghcnm/v3.php

        Source code is in Perl and is a modified version of the CRU station analysis. In the case of V1/V3 it first calculates the monthly averages between 1961-1990 for each station. Then it uses these to calculate the monthly ‘anomalies’ since 1850 within a 5×5 grid. Finally it makes a global weighted average and a yearly global average. If you want it I can send it to you. The same software applied to the 6000 stations used by CRU exactly reproduces CRUTEM4.
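The first step of the pipeline described above (per-station 1961-1990 monthly normals, then monthly anomalies) can be sketched in Python. The data layout here, a dict of (year, month) -> degC per station, is a simplification assumed for illustration, not the GHCN file format:

```python
from collections import defaultdict

def monthly_normals(records, base=(1961, 1990)):
    """Per-station monthly mean temperatures over the base period.
    `records` maps (year, month) -> temperature in degC."""
    sums, counts = defaultdict(float), defaultdict(int)
    for (year, month), t in records.items():
        if base[0] <= year <= base[1]:
            sums[month] += t
            counts[month] += 1
    return {m: sums[m] / counts[m] for m in sums}

def anomalies(records, base=(1961, 1990)):
    """Each reading minus its calendar month's base-period normal."""
    normals = monthly_normals(records, base)
    return {(y, m): t - normals[m]
            for (y, m), t in records.items() if m in normals}

# Toy station (invented values), January readings only: the 1961-1990
# January normal is 5.0 degC, so the anomalies are -1.0, +1.0, +2.5.
station = {(1961, 1): 4.0, (1990, 1): 6.0, (2014, 1): 7.5}
print(anomalies(station))
```

Gridding the resulting anomalies into 5×5 cells and taking a weighted global and yearly average, as the comment describes, would then follow as separate steps.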

  23. “Is Earth Warming or Cooling?”

    You certainly won’t find out by averaging temperatures. No physical basis, junk science.

  24. I’ve mentioned casinos and technical stock-market analysis before, but it’s a good time to bring them up again. The issue with all this trend obsession is that no one trying to assess the trends has any reason to expect that minor variances have any meaning whatsoever. Like a roulette player looking back over past runs of red or black, all your trend lines only have significance in the past. They mean nothing; they tell you nothing. The more complex the analysis, the bigger the BS artist conducting it. It’s just another fraud, just another way to artificially create an “expert” who hopes to get financial or sociological reward for his efforts.
    There is no Greenhouse Effect. This has been empirically and definitively shown many times. So anyone speculating over “trends” is just wasting people’s time with trivia or light entertainment.

  25. What Karl et al. (2015) showed, beyond a shadow of a doubt, is that “climate scientists” can torture the data to make it say almost anything.

  26. Even in the English-speaking world, where max/min thermometry is most prevalent, monthly average max/min data are rarely obtained by identical algorithms. Thus the global average diurnal range based upon a hodge-podge of non-uniform records, such as shown in Figure 1, needs to be interpreted with great caution. What has been very widely ignored in “climate science” is that urban growth usually narrows the diurnal range by increasing night-time temperatures much more strongly (especially in winter) than any credible attribution to GHGs would allow, while the effect upon daytime temperatures (especially in summer) is much weaker.

  27. I would imagine that anyone who has written multiple-regression software and used it in a commercial or scientific environment will be aware of the dangers of extrapolation. Any extrapolation is presumably intended for some sort of forecasting purpose. The very sound conventional advice, if your model is a polynomial, is “don’t extrapolate”. The higher the order of the polynomial, the more dangerous it is to use the regression for future projections; 6th order is virtually unheard of, however wonderful the fit over the data range. It happens that with the TMax data, whose observations end at the close of 2014, the short-term slope is very slight, so for a few years little harm will come. But after a few more, a precipice-like decrease in the projections occurs.
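The point about extrapolation is easy to demonstrate on synthetic data. The series below is invented (a gentle linear trend plus noise, not the TMax record); the sketch shows a 6th-order polynomial fitting well in-sample while drifting once evaluated past the data range:

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)
x = np.linspace(1850, 2014, 165)
# Invented stand-in series: gentle linear trend plus noise (sd 0.2).
y = 0.005 * (x - 1850) + rng.normal(0.0, 0.2, x.size)

# 6th-order least-squares fit.  Polynomial.fit rescales x internally,
# avoiding the conditioning problems of raw powers of the year.
p = Polynomial.fit(x, y, 6)
rmse_in = float(np.sqrt(np.mean((y - p(x)) ** 2)))

# In-sample the fit tracks the noise level, but 40 years beyond the
# data the high-order terms are unconstrained and the curve drifts.
drift = abs(float(p(2054)) - float(p(2014)))
print(round(rmse_in, 3), round(drift, 2))
```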

    It is interesting to examine the residuals from the 6th-order fit. Superficially “normal” in appearance, calculation of their skewness and kurtosis reveals that although they are very symmetrical, they are much more peaked than the expected, or hoped-for, normal distribution, k being 3.06. Something is odd about this fit. Again, I think, a warning about high-order polynomials.
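For anyone wanting to repeat the residual check, here is a minimal sketch on synthetic heavy-tailed data (the actual TMax residuals are not reproduced here). One caution when reading a figure like 3.06: raw kurtosis is 3 for a normal distribution while excess kurtosis is 0, so it matters which convention “k” refers to:

```python
import numpy as np
from numpy.polynomial import Polynomial

def skewness(a):
    """Third standardised moment; 0 for a symmetric distribution."""
    z = (a - a.mean()) / a.std()
    return float(np.mean(z ** 3))

def excess_kurtosis(a):
    """Fourth standardised moment minus 3; 0 for a normal distribution
    (add 3 back for the raw-kurtosis convention in which normal = 3)."""
    z = (a - a.mean()) / a.std()
    return float(np.mean(z ** 4) - 3.0)

rng = np.random.default_rng(1)
x = np.linspace(1850, 2014, 165)
# Invented stand-in series: trend plus heavy-tailed (Student-t) noise,
# chosen so the residuals come out more peaked than a normal sample.
y = 0.005 * (x - 1850) + 0.15 * rng.standard_t(df=4, size=x.size)

residuals = y - Polynomial.fit(x, y, 6)(x)
print(round(skewness(residuals), 2), round(excess_kurtosis(residuals), 2))
```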

    There is an alternative way of looking at “noisy” time series which does not rely on any sort of preliminary smoothing, uses all the available data, and does not impose a model on the series. It lets the data speak for themselves. It is useful in the identification of possible discontinuities, which helps in the development of piecewise fits to the data, normally using linear models over restricted ranges. These could be second-order if the primary analysis indicates that a steadily increasing or decreasing trend may be present in the original observations. My research seems to suggest that much of climate change occurs through very rapid or step changes, and indeed some can be found in the TMax series, for example at 1937.

Comments are closed.