Does NASA’s Latest Study Confirm Global Warming?

Some heated claims were made in a recently published scientific paper, “Recent Global Warming as Confirmed by AIRS,” authored by Susskind et al. One of the co-authors is NASA’s Dr. Gavin Schmidt, keeper of the world’s most widely used dataset on global warming: NASA GISTEMP.

Press coverage for the paper was strong. ScienceDaily said that the study “verified global warming trends.” U.S. News and World Report’s headline read, “NASA Study Confirms Global Warming Trends.” A Washington Post headline read, “Satellite confirms key NASA temperature data: The planet is warming — and fast,” with the author of the article adding, “New evidence suggests one of the most important climate change data sets is getting the right answer.”

The new paper uses data from the AIRS remote sensing instrument on NASA’s Aqua satellite, describing a 15-year record of global surface temperatures derived from that sensor. The temperature trend derived from the data is +0.24 degrees Celsius per decade, the warmest among the major climate analyses.

Oddly, the study didn’t compare its results with two other long-standing satellite datasets, from Remote Sensing Systems (RSS) and the University of Alabama in Huntsville (UAH). That’s an indication of the personal bias of co-author Schmidt, who in the past has repeatedly maligned the UAH dataset and its authors because their findings didn’t agree with his own GISTEMP dataset. In fact, Schmidt’s bias was so strong that when invited to appear on national television to discuss warming trends, in a fit of spite, he refused to appear at the same time as the co-author of the UAH dataset, Dr. Roy Spencer.

A breakdown of several climate datasets, listed below in degrees Celsius per decade, shows significant discrepancies among the estimated trends:

  • AIRS: +0.24 (from the 2019 Susskind et al. study)
  • GISTEMP: +0.22
  • ECMWF: +0.20
  • RSS LT: +0.20
  • Cowtan & Way: +0.19
  • UAH LT: +0.18
  • HadCRUT4: +0.17
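For reference, the spread among these published trend estimates can be tallied directly; the following is a minimal sketch where the numbers are simply the values listed above:

```python
# Published decadal trends (°C/decade) from the list above.
trends = {
    "AIRS": 0.24, "GISTEMP": 0.22, "ECMWF": 0.20, "RSS LT": 0.20,
    "Cowtan & Way": 0.19, "UAH LT": 0.18, "HadCRUT4": 0.17,
}

mean = sum(trends.values()) / len(trends)
spread = max(trends.values()) - min(trends.values())
print(f"mean = {mean:.2f} °C/decade, spread = {spread:.2f} °C/decade")
# → mean = 0.20 °C/decade, spread = 0.07 °C/decade
```

By this tally the datasets differ by at most 0.07 °C/decade around a 0.20 °C/decade mean; whether that spread is significant depends on the trend uncertainties.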

Which climate dataset is the right one? Interestingly, the HadCRUT4 dataset, which is managed by a team in the United Kingdom, uses most of the same data GISTEMP uses from the National Oceanic and Atmospheric Administration’s Global Historical Climate Network. Among the major datasets, HadCRUT4 shows the lowest temperature increase, one that’s nearly identical to UAH.

Critics of NASA’s GISTEMP have long said its higher temperature trend is due to scientists applying their own “special sauce” at the NASA Goddard Institute for Space Studies (GISS), where Schmidt is the director. But what is even more suspect is the fact that while this is the first time Schmidt has dared to compare his overheated GISTEMP dataset to a satellite dataset, he chose the AIRS data, which has only 15 years’ worth of data, whereas RSS and UAH have 30 years of data. Furthermore, Schmidt’s use of a 15-year dataset conflicts with the standard practice of the World Meteorological Organization, which defines climate “as the statistical description in terms of the mean and variability of relevant quantities over a period of time… The classical period is 30 years…”

Why would Schmidt, who bills himself as a professional climatologist, break with the standard 30-year period? It appears he did it because he knew he could get an answer he liked, one that’s close to his own dataset, thus “confirming” it.

The 15-year period in this new study is too short to say much of anything of value about global warming trends, especially since a record-setting warm El Niño occurred near the end of that period, in 2015 and 2016. During the El Niño, sun-warmed water that had pooled in the Pacific dispersed its heat into the atmosphere, warming the planet. Greenhouse-gas-induced “climate change” had nothing to do with it; it was a natural heating process that has been going on for millennia.

Figure 1: Panel A (left): NOAA sea surface temperature data showing the peak of the 2015–16 El Niño event in the equatorial Pacific Ocean. Panel B: Figure 1 from Susskind et al. 2019, with annotations added to illustrate the correlation with the peak of the 2015–16 El Niño event in the AIRS data.

As you can see in Figure 1 above, there has been rapid cooling from that El Niño-induced peak in 2016, and the global temperature is now approaching what it was before the event. Had there not been an El Niño event in 2015 and 2016, creating a spike in global temperature, it is likely Schmidt wouldn’t get a “confirming” answer for a 15-year temperature trend. As you can see in the figure above on Panel B, the peak occurred in early 2016, and the data trend before that was essentially flat.
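The effect described above, a late-period warm spike pulling up a short trend, can be illustrated with synthetic data. This is purely a sketch with made-up numbers, not the actual AIRS record:

```python
import numpy as np

# Illustrative only: a synthetic 15-year monthly anomaly series that is
# flat except for a warm spike late in the record (mimicking the 2015-16
# El Nino).  All numbers here are invented for the demonstration.
rng = np.random.default_rng(0)
t = np.arange(180) / 120.0            # time in decades (180 months)
anom = rng.normal(0.0, 0.1, 180)      # flat baseline plus weather noise
anom[144:168] += 0.4                  # two-year warm spike near the end

full_trend = np.polyfit(t, anom, 1)[0]             # trend with the spike
pre_spike = np.polyfit(t[:144], anom[:144], 1)[0]  # trend excluding it
print(f"with spike:    {full_trend:+.3f} °C/decade")
print(f"without spike: {pre_spike:+.3f} °C/decade")
```

The fitted trend over the full window comes out strongly positive even though the underlying series is flat everywhere except the spike.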

It appears that the authors of the Susskind et al. paper were motivated by timing and opportunity. It was crafted to advance an agenda, not climate science.


Anthony Watts is a senior fellow for environment and climate at The Heartland Institute.


140 thoughts on “Does NASA’s Latest Study Confirm Global Warming?”

  1. It’s my understanding that the IPCC acknowledges El Niño and La Niña as weather events, unrelated to climate.

    I stand to be corrected of course by knowledgeable contributors as I’m not a scientist.

    • HotScot,
      You are correct. The Bureau of Meteorology in Australia acknowledges El Niño as part of ENSO, El Niño Southern Oscillation, to be a naturally occurring climate phenomenon.
      The claim by mainstream climate scientists is that global warming “exacerbates” ENSO.

    • If weather events are unrelated to climate, could someone give us the new IPCC definition of ‘climate’, please?

      It used to be that weather and climate were intrinsically linked, climate being an average of weather.

      • You sure won’t be detecting any genuine climate-changing trends for periods less than those within the sedimentary palaeo-climate records.

        Sorry Mr. Gavin Schmidt, the planet does not have a fast-forwards climate change CO2-button just because we have multispectral satellites overhead, and you want a slightly bigger citation list.

        The planet’s climate-change processes do not operate at the piffling, insignificant timescales of your minor career and life span. Deal with it: corruptly relying on biased ‘adjusted’ past surface records to try to argue via satellite that you’re ‘right’ is wearing really thin with anyone familiar with your paper touting and cli-sci trolling.

        Satellites CANNOT detect an actual planetary climate-change trend; they can only detect overprinted, inconsistent weather-cycle changes, which for recent decades means slow warming (the same as in earlier centuries, actually), or rather, nearly static conditions for two decades (no change) once the hysteria of ENSO cycles is filtered from the flat-lining ‘trend’.

        Whoopdee-do!

        Almost but not quite nothing, and just what the actual palaeoclimate data trend (i.e. the actual climate record) indicates is ‘situation normal’ for this planet. There’s no CO2 fast-forward button, so quit pretending we’re in fast-forward ‘change’. Satellites see only overprinted weather-cycle noise, even over 30-year observational records, because the timescale of real climate change is a minimum of about 10 to 20 times longer than the satellite data records have existed.

        That should be a bit of a hint to anyone, including yourself, that you’re a crank and a time-and-resource waster barking up the wrong tree.

        The modern surface warming period is mostly a product of urban-heat-island data accumulation corruption, human advocates’ data ‘adjustment’ corruption, and the resulting (and also corrupt) faked interpolations. Meanwhile, the satellite data is simply showing the ongoing re-emergence from a “Little Ice Age” (LIA), with overprinted oceanic circulation cycle phases, weather variability, and ENSO on top of this ongoing LIA amelioration. In fact these factors alone are enough to explain the observed slight net modern warming (without even invoking the Sun or geomagnetism as another potential control on weather or even climate-timescale variability).

        Again, it’s a big NET whoopdee-do!

        • ”the time scale of real climate-change is a minimum of about 10 to 20 times longer than the Satellite data records exist for.”

          Well put and I agree 100%! Like ants contemplating Mt Everest.

          • Not to mention the ludicrous “accuracy” to a few hundredths of a degree. 7/100ths between the fastest and slowest temperature increase. Angels and pins.

          • WXcycles and Mike wrote: ”the time scale of real climate-change is a minimum of about 10 to 20 times longer than the Satellite data records exist for.”

            Great! “Real climate-change” can’t be detected for 400-800 years. A wonderful way to make a problem disappear. Define it as something that can’t be detected for 20-40 generations. That way, you won’t need to deal with it.

            Boeing can’t be sure its MCAS units caused the Lion Air crash in Indonesia. At least 100,000 MAX flights have taken off and landed safely.

            Many cars catch fire in accidents, including some Ford Pintos. That didn’t mean that the Ford engineer was right when he warned that a re-filling gas tank would be more hazardous.

    • It’s flat all the way up to the El Niño… then the usual slow come-down

      …take out the El Niño and it’s flat again… and that blows their trend line

    • I just wish they’d make up their minds about it. I think all the talk about it – all the argumentation – is producing large volumes of hot air, throwing off the weather cycle and sending us back into the cold.

      The middle of April, we had a snowstorm of many inches, and more shoveling. That finally melted and turned my tiny yard into a bog. Then the end of April, another snowstorm of about three inches, which finally melted and sank into the bog. It’s May and should be in the upper 60s, but instead, it’s in the 40s with cold, cold rain that would like to turn to snow if it had the chance. This is confusing all the birds, never mind me.

      I don’t want to shovel any more of that white stuff. (Yes, I have photos of it, always.)

    • Unrelated to climate, yes. But extremely handy if you need to show a warming trend to prove your theory.

    • There are a number of reasons why I am on the sceptical side of most of these arguments, HotScot, but I have never understood why warming provoked in the short term by El Nino events can be taken out of the equation when AGW is being discussed.

      The position that says, “No – it’s an El Niño, not AGW,” is at the least questionable, and at the worst, nonsense. The best that could be argued is that Niños are returning to the atmosphere heat that was sequestered centuries, or millennia, before, and is therefore separate from the current warming we discuss. If we aren’t claiming this as ‘ancient’ heat, then we still have to treat it as part of the current atmospheric processes which are the subject of the debate.

  2. The world is ending in less than 12 years now, so we will only see 0.29C warming by then and this is not enough to make any difference. So either they are wrong and the world is not ending, or it is and this study is worthless.

    I really wish they would make up their minds.

    I wonder what they do if the world actually cools down a bit?

    • I wonder what they do if the world actually cools down a bit?

      They would say that it was expected that there would be cooling, models had already shown that the Earth would cool and it was due to human emissions of CO2. The Earth will enter an ice age in 12 years unless people pay more tax.

      • Remember: the solution touted for the New Ice Age in the 70s was… more government power and less fossil fuels.

        It’s nothing to do with climate and everything to do with unearned power and wealth.

        • No matter what the problem, the solution is always more government. Even for problems that were caused by government in the first place.

    • That’s why they changed the name from global warming to climate change. That way they have it covered no matter what happens.

  3. Cherry picking data and timeframes to suit your pre-ordained conclusion has NOTHING whatsoever to do with Science. It’s nothing more than propaganda with charts and graphs.

  4. There are some very big question marks connected with this study.

    First it measures the skin temperature of the Earth, not the air temperature. These are usually close over ocean, but over land large differences occur.

    Second, measuring skin temperature is only possible in the absence of clouds, and cloudiness is very uneven in both time and space. Data quality for the Sahara is probably excellent; for much of the Southern Ocean it is probably extremely spotty. Note that it is by no means certain that temperature changes in cloudy and cloudless conditions must match over time. Cloudiness also varies strongly with season, so the temperatures will be seasonally biased.

    As a matter of fact since the AIRS result is a rather extreme outlier it seems very likely that the results are affected by the cloud problem.

    Results near the Poles are also unreliable, since AQUA is in a 98-degree-inclination orbit and its cross-track range is insufficient to extend to the poles.

    • “As a matter of fact since the AIRS result is a rather extreme outlier it seems very likely that the results are affected by the cloud problem.”

      Except AIRS is designed to minimize/negate cloud obscuration issues and improve the resulting temperature data at multiple levels. But this can only be seen as a questionable reconnaissance indication (especially given the known bias), as it does not meet even half the time-scale needed for such an analysis to produce a significant conclusion. So, any conclusion = not even climate science.

      “Not even climate science!”

  5. “That’s an indication of the personal bias of co-author Schmidt”
    No, it’s an indication of what his topic is, which is surface warming. The abstract starts:
    “This paper presents Atmospheric Infra-Red Sounder (AIRS) surface skin temperature anomalies for the period 2003 through 2017, and compares them to station-based analyses of surface air temperature anomalies (principally the Goddard Institute for Space Studies Surface Temperature Analysis (GISTEMP)). “
    But anyway, on the figures presented here, with GISTEMP at 0.22°C/decade, RSS LT at 0.2, and UAH at 0.18, there isn’t even much of a discrepancy.

    “Critics of NASA’s GISTEMP have long said its higher temperature trend is due to scientists applying their own “special sauce” “
    There is no special sauce. The GISS code has been available for years. It is a simple calculation; I have been doing a similar calculation monthly for years (April here). And I get very good agreement with GISS, using unadjusted GHCN data.

    • Nick I stand by my comment about Schmidt’s bias. He’s shown it publicly on many occasions, just as you have. Both you and Gavin are of a particular bias when it comes to this GISS data. You both suffer from confirmation bias.

      As for your agreement with GISS data: so what? It means nothing in the grand scheme of things that you get agreement with GISS by running their code.

      As for the “special sauce”, I stand by that comment too. GISS does their own special set of calculations in the GISS code, different than any of the other data sets, and that’s why it is always warmer than NOAA and HadCRUT4, not to mention UAH and RSS.

      That GISS special sauce could be called “hot sauce”.

      In other news, it looks like Dr. Roy Spencer has found an error in the AIRS data, so it looks like yet another “GISS MISS” for agreement in a long series of hot messes.

      • Anthony, it’s rather unreasonable comparing Nick to our Gav. Nick comes here and is prepared to have sensible scientific discussions with skeptics. He does not refuse to debate and walk off in a childish sulk.
        If we had more like him on the warmist side, things might get further.
        BTW

        whereas RSS and UAH have 30 years of data

          2018 − 1978 = 40, not 30!

        • That is very interesting,
          Maybe this is a simple visual indication of temperature, that is less prone to error and bias than homogenising hundreds of incomplete thermometer readings ?

        • This is an extremely useful approach. Its value lies in the fact that it provides a variable that clearly relates to temperature over seasonal timescales with fluctuations large enough to be easily measurable.

          By contrast, monitoring the average global temperature over long timescales is a fool’s errand, because the large temperature variability in temperate latitudes is lost by lumping it together with equatorial temperatures, which vary very little. The result is an overall variable (metric) that varies so little that measurement errors are significant enough to render the value of the end result questionable at best. This would be especially problematic when making comparisons over long timescales reaching back to the invention of the thermometer.

          While looking at Great Lakes ice cover may or may not be of limited value, depending on how far back and how well records were kept, different ways of determining temperature variability based on this approach should be possible by identifying robust long-term temperature records, such as the Central England Temperature. (Although post-war data would need to be checked for ‘improvements’.)

          I would be surprised to learn that I’m suggesting anything new.

          It’s just my way of pointing out that the average global temperature, as currently defined and determined, is ideal if one wanted to argue that one or two degrees is really important, while also being able to argue convincingly that it is one or two degrees higher or lower than it really is, because of the statistical methods employed in determining its value.

      • But you didn’t answer the question: Does NASA’s Latest Study Confirm Global Warming?

        As if we needed *further* confirmation.

        This post is not about a “study” or about “confirming” anything; it’s just personal about Schmidt.

      • “In other news, it looks like Dr. Roy Spencer has found an error in the AIRS data”

        For years the acolytes have been using the fact that an error was found in Dr. Spencer’s calculations years ago, as an excuse to ignore his results.

        I wonder if they will apply the same standard to their own idols?

      • “Hot sauce”; In the days of dear old Pachauri, it was called “GISS Vindaloo” /sarc

  6. When I look at the graph, I see ‘noisy’ data. Of course, the scientists involved will protest that their analysis shows the data is accurate to within one percent. The trouble is that if you want to calculate a long-term trend, you have to be able to explain the ‘noise’. Is the ‘noise’ truly random? The chances of that are minuscule. Does their analysis implicitly assume that the ‘noise’ is random? That’s almost guaranteed.

    Mother Nature tends to throw red noise at us. Red noise has high low-frequency content. If you examine a low-frequency signal over a sufficiently short period, it looks like a trend (also called drift).

    The error bars that should be applied to the calculated trends for the various data sets mean that those calculated trends are essentially the same. To say otherwise, you have to be able to explain the nature of the ‘noise’ which is evident in the graph above. For sure, you can’t just assume that it is random.
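The drift effect mentioned above can be demonstrated numerically: trendless AR(1) red noise fitted over a 15-year window still yields a wide spread of apparent trends. This is an illustrative sketch; the noise parameters (phi, innovation size) are assumptions, not fits to any real dataset:

```python
import numpy as np

# Generate many realizations of trendless AR(1) "red" noise and fit a
# linear trend to each 15-year (180-month) window.
rng = np.random.default_rng(1)
n_months, phi = 180, 0.9
t = np.arange(n_months) / 120.0       # time in decades

slopes = []
for _ in range(500):
    x = np.zeros(n_months)
    for i in range(1, n_months):      # AR(1): x_i = phi*x_{i-1} + noise
        x[i] = phi * x[i - 1] + rng.normal(0.0, 0.05)
    slopes.append(np.polyfit(t, x, 1)[0])

print(f"std of spurious 15-year trends: {np.std(slopes):.3f} °C/decade")
```

Even with no true trend in the generating process, the fitted 15-year slopes scatter over a range comparable to the trends being debated here.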

  7. As with the Climategate exposure at the UK’s CRU via email communications, it is going to take an insider’s knowledge to convincingly expose the intentional and continuous alterations of the adjusted temperature data.

    The problem is that any financial incentive for an individual to be a whistle-blower inside NASA/GISS is insignificant compared to the subsequent loss of income and of the ability to work in the field ever again. It is going to take someone with their own “F-U money” working at GISS finally getting tired of the compromised integrity in the continuous alterations of historical adjustments.

    • I have been hoping they will catch the eye of the President, since the administrator is worthless.

  8. Is GISS still peddling their global warming garbage? NASA, stop wasting money; you should be doing spaceflight.

  9. The methods used to sell a man-made climate crisis continue to be the same methods used by con artists, and not at all similar to the methods used by scientists.

  10. Without obtaining insider info, the only way for an outsider to show what is going on in an objective manner is to do the Tony Heller-style plot of the residual (adjusted − raw data) versus the annual-averaged MLO CO2 record.

    If you can independently verify an r^2 > 0.95, that is a pretty powerful, independent argument of what they are doing.
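As a sketch of that check, the regression itself is short. Both arrays below are made-up placeholders: real use would take annual (adjusted minus raw) residuals and the annual-mean Mauna Loa CO2 record.

```python
import numpy as np

# Hypothetical residual-vs-CO2 check: correlate (adjusted - raw) annual
# residuals against annual-mean CO2 and report r^2.  The values here are
# invented purely to show the computation.
co2 = np.array([315.0, 330.0, 345.0, 360.0, 375.0, 390.0, 405.0])
residual = 0.002 * (co2 - co2[0]) + np.array(
    [0.00, 0.01, -0.01, 0.02, 0.00, -0.01, 0.01])

r = np.corrcoef(co2, residual)[0, 1]   # Pearson correlation
print(f"r^2 = {r * r:.3f}")
```

A high r² between adjustments and CO2 would be the “powerful, independent argument” described above; a low one would undercut it.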

  11. The global average temperature is a statistic, not a measurement of a temperature.

    There must be hundreds of ways to compile a global average.

    There’s no way to know which compilation is “the best”, or even if any of them are accurate, and a good representation of the actual climate.

    If one compilation keeps adjusting data, without good explanations, that could be a problem.

    Especially “adjustments” made years, or decades, later.

    If a compilation requires wild-guess infilling for a majority of its surface grids, that’s a potentially large problem.

    There’s no way to know what a “normal” global average is.

    Using 1750 as “normal” makes no sense — that was not long after the coldest decade of the Little Ice Age (Central England real-time temperature measurements reflect +3 degrees C. of warming since the coldest year in the 1690s).

    Why start with a cool 1750 climate, that was not liked by people at the time?

    Using a cool weather starting period is useful for climate change propaganda ?!

    All averages obscure details.

    A global average obscures a lot of details.

    Even more important, is that no one lives in an average temperature.

    People live in local temperatures — if they are ever going to be harmed by climate change, then it will be from changes in local temperatures.

    But local temperature changes, that people did not like, might not even be visible in a global average.

    Or the global average might change in a way that leads to climate scaremongering, while people are very happy with their local temperatures.

    The single number global average temperature is a propaganda tool.

    Consider that 99.999% of the past 4.5 billion years have no real-time data.

    So people are looking at less than 0.0001% of this planet’s temperature history, with very questionable numbers before World War II, and declaring that 1750 represents a “normal”, or good, temperature… and anything warmer (which people love) is bad?

    What people really need to know is WHERE the warming is happening
    In the upper half of the Northern Hemisphere, warming is good news!

    What people really need to know is WHEN the warming is happening
    In the coldest six months of the year, warming is most likely to be good news.

    What people really need to know is during WHAT HOURS the warming is happening.
    Nighttime warming, while most people are sleeping, is much less noticeable than daytime warming, when people are much more likely to be outdoors.

    If the warming was mainly in the northern half of the Northern Hemisphere, mainly in the coldest six months of the year, and mainly at night, then what we have is very pleasant global warming — the few people who live in those high latitudes would want more of that !

    How much of the above details would anyone know from a global average temperature?

    None.

    The length of the growing season is important.

    Farming productivity is important.

    Sea level relative to ocean side homes, and businesses, could be important.

    The global average temperature, especially changes of less than one or two degrees C., over a century or two, is NOT important.

    Except as a propaganda tool for climate scaremongering.

    If we knew the exact global average temperature, that everyone agreed was accurate (if that was possible), that would provide no useful information about the future climate — we would still have NO IDEA if the future would have global warming, or global cooling.

    The global average temperature serves one purpose — it keeps skeptics busy arguing about meaningless tenth of a degree C. changes !

    Meanwhile, the evil climate alarmists are finally revealing their “Climate Plans”, which was the goal of decades of climate scaremongering — leftist central planning “to save the planet for the children” (nonsense, of course, the planet does not need saving — the current climate is wonderful).

    The leftists can’t sell socialism by claiming lower unemployment, or faster economic growth — so they have created a fake climate crisis, and they claim only they can prevent it.

    Sounds stupid, but it works on gullible people.

    The climate on our planet has been warming, and improving, for over 300 years since the coldest decade of the Little Ice Age (1690s) — only a fool would want that mild global warming to stop.

    Meanwhile, climate alarmists ignore the past 78 years of adding lots of man made CO2 to the atmosphere.

    They keep predicting a FUTURE global warming rate that will be QUADRUPLE the actual global warming rate from 1940 through 2018 (their +3 degrees C. per century FANTASY, versus +0.77 degrees C. per century REALITY, from 1940 through 2018).

    Their predictions have made no sense for 30 years — over 60 years if you start with Roger Revelle in 1957 — so why should government policies be based on consistently wrong climate predictions?

    What a bizarro world we live in — with people afraid of the staff of life — CO2.

    My climate science blog,
    if anyone is interested:
    http://www.elOnionBloggle.Blogspot.com

    • The single number global average temperature is a propaganda tool

      Boy, and how! And let me add that The Drought Monitor is guilty of similar statistical nonsense that uses questionable “averages” to declare CA just had the “WORST DROUGHT IN HISTORY”. The last CA drought was OVER nearly two years before The Drought Monitor said it was. Rubbish. It’s all colorful charts, graphs and FEAR … based on LIES. It took TWO years of HISTORIC snowfall and rainfall to extricate CA from DROUGHT terror?! Just imagine if our last two years were just … “normal”, “average” years of precipitation? … then The Drought Monitor would STILL have us in a “drought”.

      In my own limited 63-year lifetime in CA, I’ve experienced several droughts … which are NORMAL, “average” occurrences in our ocean-adjacent Mediterranean climate. Selling FEAR with statistical manipulation and colorful charts is … frankly … EVIL. Just as EVIL as Bernie Madoff “investing” your total net worth in a FAKE scheme promising “guaranteed” 15% return. Sadly, the general public are gullible rubes who want something for nothing, with all their might. And they love being frightened about their own existence. Love being told that the “End is nigh”. “Repent!!!” And “save” yourself! “Bow to the force of Gaia”. It’s all a giant scam for CONTROL of wealth. CONTROL over every aspect of your life.

      Oh! But it’s “science” don’t you know? I am just a “simpleton denier”. Ad hominem attacks on the straw men the Warmists construct are as worthless as their cherry picked data.

    • “The global average temperature, especially changes of less than one or two degrees C., over a century or two, is NOT important.”

      Phew, I thought a 2C increase would see hippos in the Thames, like last time. You’ve reassured me that is NOT going to happen.

      • It was a lot more than two degrees in England, more like 4 C.

        2 C is a global average which includes the tropics where there was little change, often no more than 1 C warmer than now. On the other hand it was 10+ C warmer in Eastern Siberia, and 5-8 C warmer in Greenland.

    • “… If the warming was mainly in the northern half of the Northern Hemisphere, mainly in the coldest six months of the year, and mainly at night, then what we have is very pleasant global warming — the few people who live in those high latitudes would want more of that ! … ”

      There are also warmer and more humid minimums at night in the (local) tropics, which are quite unpleasant until a land breeze kicks in to reduce the humidity after midnight, into the early hours. It does not make for good sleeping conditions without air conditioning. And electrons for aircon use no longer come cheaply, as they did before the wind and solar subsidies and network infrastructure overbuild incentives.

      But I’ve also seen periods where this was not the case, where there were not as many hot and humid nights (more consistent trade-wind flow); yet in the decades before this there were hotter conditions recorded, during the 1930s and ’40s, and also in the 1890s. So this is just multi-decade-scale weather cycling that has been overprinting a general slow warming occurring since the low, or ‘end’, of the Little Ice Age.

      Tropical nights have gotten warmer since about 1985, and they have got more humid also, and that combination is quite significantly unpleasant. But that is not actual climate change. Yet coming out of the Little Ice Age is an actual climate-change process.

      But that climate-change process is natural, as are these cyclic multi-decade warmings and coolings overprinting it. So yes, climate and weather are both changing, but no one with any understanding of either would have expected anything different from what’s been occurring in the past 100 years.

      The only issue in question is, IF ANY PART OF IT was caused by humans? And the LOCAL answer is YES!

      UHI is a persistent human effect on localized weather and temperature.

      But UHI is NOT an effect on the GLOBAL climate trajectory, even if local UHI affects thermometers that are distributed globally and are (for some reason) averaged to show a human effect on local warming, which occurs in all cities.

      But for >99% of the planet’s surface area, UHI is NOT occurring at all. So what’s the point of a global surface data ‘average’? There’s none!

      Or in claiming there’s a human warming of the planet, which really means for <1% of it, and only in the lowermost troposphere layer.

      UHI is NOT planetary climate-changing, it’s only local weather changing.

      A satellite global average, however, might at least be globally applicable (sort of), but can it then show a gradual, slow, lingering warming trajectory out of the Little Ice Age’s actual climate change once ENSO and other broad weather-cycle peaks and troughs are winnowed out?

      Over the past 20 years the answer is NO, the satellites could not show such.

      So why is anyone still pretending that satellites are global gauges of climate change, when they’re obviously incapable of unambiguously resolving slow-scale global change, even the emergence from the LIA, at present? And even if they did detect a slow NET rise, it would still be just the amelioration of the Little Ice Age!

      Give it up climate worriers and citation list touters, you’re simply delusional and chasing your own tail if you think you're seeing global climate-change within a NET 20-year flat-line of a global T trend.

      Plus, Gavin Schmidt seems to be implying he has the Holy Grail of satellites in his little mitt, and thus all prior satellite data is now defunct and can be ignored. Who’s going to accept that?

  12. ” indicates there are significant discrepancies in estimated climate trends”

    No, they all look pretty much to be in agreement.

    The average is 0.20 per decade. A significant discrepancy would be if one of the datasets showed a negative trend over a decade.

    • It depends on the standard error of those means. If it’s ±0.01°C, it’s very significant; if it’s ±0.1°C, it’s not significant at all.

      But we’re not told, so we can’t tell.

      • “But we’re not told, so we can’t tell.”
        You’re not told here. But you could read the paper. Table 1: AIRS 0.24±0.12°C/decade. GISS 0.22±0.13. But those errors mainly represent variability of the random component of temperature (weather) rather than measurement error, so they aren’t independent. AIRS and GISS were measuring the same weather.
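        [A quick back-of-envelope check of those Table 1 numbers. Note the caveat in the comment above: treating the two quoted errors as independent, as this sketch does, overstates the uncertainty of their difference, since both records sampled the same weather.]

```python
import math

# Trends and quoted uncertainties from the paper's Table 1 (deg C/decade)
airs_trend, airs_err = 0.24, 0.12
giss_trend, giss_err = 0.22, 0.13

diff = airs_trend - giss_trend
# Quadrature sum: valid only for independent errors, so this OVERSTATES
# the uncertainty of the difference between the two correlated records
diff_err = math.sqrt(airs_err**2 + giss_err**2)

print(f"difference: {diff:.2f} +/- {diff_err:.2f} deg C/decade")
```

        Even with that overstated error bar, a 0.02 difference is nowhere near significant; with the shared weather component removed it would be tighter still.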

        • Nick, this is right for big areas of the globe. However, the zonal trends have some remarkable differences:
          https://s16.directupload.net/images/190417/kahxqd9n.png
          In the polar/subpolar areas the AIRS trends are steeper. AFAIK the AIRS record suffers from stable cloud cover (CC), and it’s well known that in the areas in question the CC is well above the global average. Therefore the result of AIRS needs some more attention there. One would wish for some more description of this issue in the AIRS paper.

  13. Schmidt is merely acting like a ‘good climate scientist’, reflecting the fact he was hand-picked to carry on the ‘good work’ of Hansen. What this means in practice is poor scientific practice, and treating headlines rather than truth as the most important factor in ‘research’ is just normal for the area.
    Let us be clear: climate ‘science’ is nothing without climate ‘doom’. If that goes, so does the funding, so does their influence and power, and so do all those freebies, and for most, like Schmidt, so does their career.
    They have no choice but to double down, for otherwise it’s total bust.
    The real shame in this is not their behaviour, but that the gatekeepers who should be stopping this BS have chosen either to say nothing or to spend their time working out how they too can get on the gravy train of ‘climate doom research funding’.

  14. Who decided that the new definition of “climate” is 15 years, NOT 30, and when figuring anomalies, use the previous 15-years as baseline?

    Let’s just use 15 days, so that “climate” becomes equal to “weather”. Who’s gonna notice?

    Special sauce goes best with re-fried definitions.

    • When a pause is seen over 15 years, then 30 years must be chosen. But when a pause is not seen over a 15-year period, then 15 years must be chosen. In short: heads they win, tails you lose.

    • The 30-year period was established by the meteorological society for purposes of developing weather almanacs, but for purposes of measuring quantifiable changes in “climate” no one ever bothered with any kind of standard limiting what kind of statistics count as “climate” and over what period of time such statistics must be averaged. That would be too dangerous. If too short a time is established, then historical temperature plots won’t show any kind of stable statistics to claim that we’re disrupting. If too long a period is used, then researchers would have to wait too long to trumpet their alarmism.

      So basically they just wing it, picking statistics on the fly to suit their narrative. Five-year running means on the same graph that has to use a 30-year average to measure the baseline climate to plot anomalies might not make logical sense, but as long as it paints the picture the researchers want, that’s what counts. Linear trend lines through what climate researchers admit is just noise may not make sense to any person who knows what a derivative is, but then again the audience usually isn’t technically versed enough to ask why, if a graph uses a 30-year average as a baseline of “climate” from which to measure anomalies, the graph doesn’t just chart the slope of that 30-year average over time, to visually display how “climate” has changed through the years.

      • This is a big if, but if the major ocean indexes change in approximate 60 year cycles, then the climate period really needs to be something greater than that to get the whole cycle in the picture. Of course the fact the various ocean cycles are never in sync means that to measure climate could be a thousand year process and no one has instrumental data at pristine, unmoving sites going back a tenth of that (maybe one or two.) Proxy data can’t wiggle match like instruments as they are all too low frequency to see many of the short term changes we are worried about today (ok, I’m not worried but some are!)

    • I suspect the 15 years was used since that is how long the satellites have been operational for.
      Would you prefer no data for 30 years or updates on a more regular basis?

      • I’d prefer to wait not only 30 years, but actually to wait at least 150 years because that’s about the minimum length of time I would expect to be required to gain any meaningful data. A temperature graph of only 15 years is just noise. The right hand side of FIG. 1 proves that, since it’s the single event of the El-Nino at the end that accounts for virtually all of the trends in both data sets.

  15. Okay, so now we seem to need to make a distinction between “skin temperature” and “air temperature”.

    Is this paper even comparing the same sorts of data?

    Is this just adding to the confusion, instead of clarifying anything?

    But more importantly, are hundredths of a degree really deserving of so much hype?

    My confusion was stirred by this paper from 2003, after I went internet hunting for some explanation of Earth’s … “skin temperature” [thanks to Nick S]:

    http://www.geo.utexas.edu/courses/387H/Lectures/1-Land%20Surface%20skin%20temperatures%20from%20a%20combined%20analysis%20of%20microwave%20and%20infrared%20satellite%20observations.pdf

  16. When there are multidecadal and longer-scale cycles of various bandwidths in play, mere comparison of 15-year “trends” of intrinsically different, far-from-strongly coherent temperature metrics is a foolhardy exercise. It provides stark indication of how tendentious the analytically primitive conclusions of “climate science” are.

    BTW, satellite sensing of atmospheric temperatures is now in its 40th year, which is still far from adequate for detection of truly secular trend.

    • “mere comparison of 15-year “trends” of intrinsically different, far-from-strongly coherent temperature metrics is a foolhardy exercise”
      The point of their paper is that they are coherent, and they don’t rely on trends to show that. They aren’t intrinsically different, in that both methods are trying to measure the same quantity, so the fact that they get consistent answers over 15 years is relevant.

      • I’m addressing not only the paper, but also Anthony’s trend comparison with UAH LT temps, which are not strongly coherent with near-surface temps. Moreover, skin temperature is intrinsically a different metric than air temperature, with different physics in play. Consistency over 15 years is hardly decisive.

  17. Climate and temperature need to be determined over thousands of years, not 15 or 30, and only well after the fact.

  18. I’m sure everyone has been waiting with bated breath for the results from my method, hereafter to be known as SIMPTEMP.

    SIMPTEMP: +0.07
    AIRS: +0.24 (from the 2019 Susskind et al. study)
    GISTEMP: +0.22
    ECMWF: +0.20
    RSS LT: +0.20
    Cowtan & Way: +0.19
    UAH LT: +0.18
    HadCRUT4: +0.17

  19. Well of course it shows an uptrend. The data set starts with a trough and ends with a crest. Data conforming to a pure sine wave with zero trend would do the same. What kind of imbeciles are doing this childish rubbish?
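    [The sine-wave point is easy to verify numerically; a minimal sketch with synthetic data, not any real temperature series:]

```python
import numpy as np

# Half a cycle of a pure sine: starts at a trough, ends at a crest.
# By construction the long-term trend is zero.
t = np.linspace(-0.25, 0.25, 200)   # time, in cycles
y = np.sin(2 * np.pi * t)

# An ordinary least-squares fit still reports a strongly positive slope
slope, _ = np.polyfit(t, y, 1)
print(f"fitted slope: {slope:.2f}")
```

    The fitted slope is large and positive even though the underlying series has zero trend, which is exactly the trough-to-crest artifact described above.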

  20. From DEC 1978 to 2019 UAH V6 is 0.13 C a decade. Fully 40 years, or am I missing something?
    So where does the 0.18 C/decade come from? Just asking.

  21. Has anyone ever quantified the amount of heat from below? There are thousands of submarine volcanoes constantly spewing lava and superheated water along the mid-ocean ridges. Do they include those in their climate models?

  22. I have yet to see a single study that claims to have measured the rate of temperature change that complies with the “ISO Guide to the expression of uncertainty in measurement”, which requires any reported numerical measurement result to be accompanied by the measurement uncertainty and the applicable coverage factor (confidence limits). If the MU of these various reported trends is on the order of +/- 0.05 C there’s no clear difference.

    From another angle, I look at this fixation on 15, 30, or 100 year trends as similar to gamblers who look at roulette wheel results and see trends in “red/black” or “even/odd” outcomes and think they can beat the odds. Of course some think the trend will continue and others think it will surely reverse. Casinos love them both.

  23. “Oddly, the study didn’t compare two other long-standing satellite datasets”

    Those measure the troposphere, not the surface.

    “That’s an indication of the personal bias of co-author Schmidt”

    No, but this and other sentences reflect the common bashing of scientists that is central to online climate science criticism.

    “who in the past has repeatedly maligned the UAH dataset and its authors because their findings didn’t agree with his own GISTEMP dataset”

    The result of this, if you’ll recall, was UAH being adjusted to look more like GISS, similar to today, though it was a long process. At one point in time UAH suggested the troposphere was cooling, and supporters were certain it was the surface record that was wrong. (Remember the days when it was all about proving that the world was not actually warming?)

    “HadCRUT4 shows the lowest temperature increase, one that’s nearly identical to UAH”

    “GISTEMP has greater polar coverage than MLOST or HadCRUT4, in part due to the inclusion of Antarctic ‘READER’ stations and in part due to the interpolation method”
    https://climatedataguide.ucar.edu/climate-data

    “Critics of NASA’s GISTEMP have long said its higher temperature trend is due to…”

    (the same thing internet critics say is behind every scientific observation or result they don’t like.)

    “there has been rapid cooling from that El Niño-induced peak in 2016”

    Indeed. Has “No warming since 2016!” been officially endorsed as a talking point? How soon after 1998 was it before “No warming since 1998!” started up across the main channels? That had a pretty effective run for a talking point so seems worth a similar playbook. I’d think Heartland has the most expertise on making these calls…

  24. “Why would Schmidt, who bills himself as a professional climatologist, break with the standard 30-year period? It appears he did it because he knew he could get an answer he liked, one that’s close to his own dataset, thus “confirming” it.”

    Bingo!

    Excellent article Anthony!

  25. According to UAH6.0, there hasn’t been a discernible global warming trend since mid-1996, if the 2015/16 Super El Niño event is removed.

    Since UAH6.0 started 40 years ago, the actual global warming trend is at 0.13C/decade, which is almost half of AIRS’ trend of 0.24C/decade.

    The PDO, AMO, AOO are all about to start their 30-year cool cycles, the next La Niña cycle should be a strong one, and a 50-year Grand Solar Minimum just started.

    All these global cooling phenomena will very likely cause significant global cooling, which will finally put an end to one of the biggest and most expensive Leftist Hoaxes in human history…

    “Truth is the daughter of time.”

    • “According to UAH6.0, there hasn’t been a discernible global warming trend since mid-1996, if the 2015/16 Super El Niño event is removed.”

      Why is it so difficult to realise that EN’s push the GMST trend (though UAH is not the surface) up and LN’s push it down?
      This on top of a general warming trend.
      There is no law that says which came first, an EN or a LN.
      And saying which one to start/end with, does not answer any questions about the long-term trend.
      The long term trend is chosen precisely because it eliminates short-term NV – and yes that also covers the period of the ’98 EN which was followed by a long period of -ve PDO that suppressed EN’s.

    • “According to UAH6.0, there hasn’t been a discernible global warming trend since mid-1996, if the 2015/16 Super El Niño event is removed”

      (a) This isn’t true, (b) you can’t just adjust the warmth out and say that shows there wasn’t warming – that’s circular reasoning. For example, why if the 1998 and 2016 El Ninos were similarly sized (according to ENSO metrics) did 2016 end up ~0.2°C warmer?

      Yes, UAH is among the lowest warming trends, but it shows warming (as you appear to know). It also shows progressive cool bias relative to RATPAC (radiosondes) since 2000 or so, so skepticism about which troposphere record is correct is reasonable.

      A lot of PDO cooling in the past 30 years hasn’t slowed things up any. The problem is physical – shifting of wind and ocean currents can’t powerfully affect global temperatures because it is mainly just sloshing heat around. Can’t, for example, heat the global oceans to 2,000m or reverse the (radiative) heating that is happening there.

      “Truth is the daughter of time.”

      Indeed. And what truths have kept getting reinforced over time so far? For how many years or decades now have those who aren’t happy with what’s happening predicted imminent global cooling?

  26. “One of the co-authors is NASA’s Dr. Gavin Schmidt,”

    No need to read any further. Wasted spending on Alarmism Politics.

  27. Andy wrote: “As you can see in Figure 1 above, there has been rapid cooling from that El Niño-induced peak in 2016, and the global temperature is now approaching what it was before the event. Had there not been an El Niño event in 2015 and 2016, creating a spike in global temperature, it is likely Schmidt wouldn’t get a “confirming” answer for a 15-year temperature trend. As you can see in the figure above on Panel B, the peak occurred in early 2016, and the data trend before that was essentially flat.”

    Andy, blaming the trend on the 2015/6 El Nino is nonsense. If you start with the 2001 to 2012 period, there is a negligible warming trend. As soon as you start adding years after 2012, you get a rising trend. The problem is that the average temperature since 2014 has been about 0.2 K higher, even if you ignore the peak warming associated with the El Nino. If you replaced the temperature for six months before and after 1/2016 with the average temperature since 2014 outside this period, you will still have a large warming trend. There is a rapid increase of almost 0.2 K being obscured by the dramatic El Nino. Almost every month since the El Nino has been warmer than the warmest months of 2001-2012 (the Pause) and on the average outside the El Nino they have been about 0.2 K higher.

      • Richard: Yes, it does make a difference which temperature record you use and precisely when you start and stop. However, my comment was directed towards Andy’s statement that cooling since the 2015/6 El Nino has returned temperature to the temperature of the Pause period. That is grossly incorrect. Since the El Nino ended in late 2016, the average temperature has been about 0.2 K higher than during the Pause.

  28. “Does NASA’s Latest Study Confirm Global Warming?”
    Does a bear sh** in the woods?
    Of course it does, and if the numbers fail to support the claimed warming, they will be adjusted until they do.
    Policy based evidence manufacturing is a serious business, in the modern bureaucracy.

    Until this Agency is reset, they will support the narrative regardless of the actual evidence.

    The CAGW meme is clear evidence our bureaus have declared war on the tax paying citizen.
    Serving a higher purpose?
    Mass firings are the only cure, the only way to explain to these fools that government is not a place for religious zealots.

    By using their positions to force change upon the citizen, they have destroyed the institutions.
    Institutions they corrupted to serve the “cause”, demonstrating their institute has no useful function.

    The corruption this mass hysteria has revealed proves once again how dangerous government is.
    Big government will consume until all resources are extinct.
    Taxing Air.

    • “The corruption this mass hysteria has revealed”

      meaning

      “The (secret global collusion and) corruption that we REPEATEDLY insist exists reveals that we do not have to accept observational science if we do not wish to.”

  29. As many have indicated this is just plain nonsense. It is measuring noise. What is key is the very same people who used to complain about the pause being due to having a super El Nino at the beginning of the trend now are perfectly happy to use a super El Nino at the end of the trend.

    This alone shows how dishonest the entire climate cult has become.

    • Do you have an example where “these people” (scientists) do what you claim (dishonestly count the El Nino at end and not the one at the beginning?)

      The actual claim that I hear them state is that you should *either* (a) leave ENSO effects in the data consistently, or (b) if you want to remove ENSO statistically (as a way to analyze the data) do it consistently to both El Nino and La Nina.

      In my experience, it is only critics (on sites like this) which argue that you should remove only the 2016 El Nino and nothing else, in order to argue the pause has continued. Which (if true) “alone shows how dishonest the entire climate cult has become”, does it not?

      In general, your comments sound like you think the online hypothesis from science critics, that the “pause implies global warming has stopped”, has been unfairly falsified by the real world data because you shouldn’t accept the El Nino at the end. We’ve had La Nina since then though, and temps are still higher than ‘pause’ levels a decade earlier.

      What you allege certainly isn’t what is happening here. They are just comparing the AIRS record to the GISS record.

      • Geoff, you are living in a fantasy world. The La Nina events were minimal and likely had very little impact on global temperature. Even NOAA admitted the warm water from the super El Nino hung around until 2018.

        That means we never really had a chance to completely return to the baseline before the current El Nino started last September.

        I agree that all ENSO effects should be removed from the data. When that is done (along with volcanoes), there’s been no warming in 20 years and only 0.25 C since 1980.

  30. It’s all ENSO and noise. None of the models show stepwise GAT’s. They show steady trends from a forcing plus assumed but disproven amplification.

  31. Calculating a decadal trend using 15 years of data makes no sense. And why do we use these trends over arbitrary periods anyway? The data may show warming, but if they do they show that it is in no way smooth or even or predictable. If anything the data shows step changes followed by declines or plateaus. That doesn’t fit very well with the CO2 claim though, so instead we see these claims about trend warming that are wholly subject to cherry-picking, arbitrary smoothing and arguments about noise. In other words, a total lack of both clarity and rigour. And all because most of those studying the data are determined to fit it to their preconceived view of how the data should look.

  32. Everyone concedes the planet is warming. That isn’t the question.

    The questions begin with, how much effect does man have? How much do we contribute to the current rise?

    The more important question is what role does CO2 play? If any? Considering that man’s contribution to atmospheric CO2 is less than 100 PPM, is that enough to have any effect on the climate? Does CO2 really have any feedbacks? Are they positive or negative or insignificant?

    Finally there is the question of what should man do about it. Take drastic action now without knowing the answers to the above questions? Or wait and remediate?

  33. Earth’s average temp has never been constant — it’s either rising or falling. OK, assume right now it’s rising. So?

  34. Slyentist: Noun
    A scientific expert who makes data conform to the support of their hypothesis.

    Data used is often unavailable and results are only replicated by other slyentists.

  35. could someone please define “climate” by some sort of empirical measure ? more “bad” weather is not measurable unless “bad” is defined … (same with “severe” …) … the only 2 climate extremes I see are temperature based … 1) snowball earth … and 2) not snowball earth with a thriving biosphere … we seem to have had severe weather going from one to the other in either direction …

    If the warmists are claiming that the “climate change” we will see is more hurricanes, droughts, (or floods) or higher temperatures … then the record of the 20th century says we are only seeing somewhat higher temperatures (up to 1938 … lower since then) … so by any measure the “climate” has gotten milder not more severe …

  36. The satellite temps aren’t better than ±0.3 C.

    The surface station data aren’t better than ±0.5 C (that’s a well-sited well-maintained unaspirated USHCN station).

    And Gavin & co. are yodeling about a 0.24 C change across 15 years.

    Let’s see: they’re claiming to resolve an average 0.016 C annual change against ±0.3 C resolution.

    The uncertainty is 18.8 times larger than the signal.

    What_a_crock.

    I’ve talked with Roy Spencer about the satellite uncertainty, by the way. He agrees with that ±0.3 C number (it comes from his work with John Christy). But like everyone else in the field, Roy thinks that taking anomalies subtracts away all the error and uncertainty.

    It’s too funny.

    The whole field lives on false precision. And then they have solemn discussions about the oracular meaning of it all.
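    [Taking the comment’s own figures at face value, the arithmetic checks out; whether ±0.3 C is the right uncertainty for anomaly trends is exactly what the replies dispute:]

```python
# Figures from the comment above: a 0.24 C change over 15 years,
# against the claimed +/- 0.3 C satellite uncertainty
trend_total = 0.24   # deg C over the record
years = 15
uncertainty = 0.3    # deg C

annual = trend_total / years     # 0.016 deg C per year
ratio = uncertainty / annual     # 18.75, i.e. the comment's "18.8 times"

print(f"annual change: {annual:.3f} C/yr, uncertainty/signal: {ratio:.2f}x")
```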

    • Frank, what is “too funny” is how you confuse an individual measurement of a physical temperature, with a statistical estimator of GAST. Classic apples versus oranges. Not only that, but Roy is correct in that anomalies erase your often touted “systemic error.”

      • “an individual measurement of a physical temperature, with a statistical estimator of GAST”

        And there lies the rub. Can a statistical estimator have smaller standard error than error on individual measurements? If some conditions are met, yes, it can. But it’s wrong to assume it always does.

        My first job was in an ISO17025 calibration laboratory, and that environment is rather fastidious when it comes to determination of uncertainties. The lesson I took home is that you should never assume the best case scenario unless you can prove it.

        Again, using anomalies will remove some of the errors, but measurement error never stops propagating.

        • Flavio, there is no possible way to make or have an individual measurement measure GAST (Global Average Surface Temperature.) The only possible way to make such a measurement is with a statistical estimator.

      • Let’s see your demonstration that systematic error is removed from satellite temperatures by differencing, Mike.

        Let’s also see you disprove the standard statistical propagated error of a difference, namely that for a-b=c, with error e_a & e_b, the uncertainty in ‘c’ = sqrt[(e_a)^2 + (e_b)^2] .

        Judging from your post, you don’t know what you’re talking about.

        Systematic error is only known to subtract away when the error magnitude itself is known and is known to be constant.

        That’s not the case with satellite temperatures, and is not the case with unaspirated USHCN stations.
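        [Both sides of this exchange can be illustrated with a short Monte Carlo sketch using made-up numbers: independent random errors add in quadrature under differencing, while a bias that is constant and shared does cancel. Which case describes the satellite record is the actual point in dispute.]

```python
import numpy as np

rng = np.random.default_rng(42)
n, sigma = 200_000, 0.3

a = rng.normal(0.0, sigma, n)   # random error in measurement a
b = rng.normal(0.0, sigma, n)   # independent random error in measurement b

# Independent errors: spread of (a - b) is sqrt(0.3^2 + 0.3^2) ~ 0.424
print(f"independent errors: {np.std(a - b):.3f}")

# A constant shared bias cancels exactly on differencing...
bias = 0.5
print(f"with shared bias:   {np.std((a + bias) - (b + bias)):.3f}")
# ...but the random components are untouched: the same ~0.424 spread remains
```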

          • So you abandon your claim of knowledge, Mike. You’re just making an argument from authority.

            I have spoken with Roy about the systematic error problem. He just shrugged it off.

            They just assume the error is constant and disappears on differencing. The assumption is methodologically unjustified.

        • Frank…..satellites don’t measure temperature, so your arguments about propagating error(s) are invalid. Apples and oranges sir.

          • Satellites measure radiance, Mike, which is converted into temperature. Errors in radiance convert into errors in temperature. Those errors get propagated.

            The logic is coherent throughout, Mike.

            If the errors could not propagate through the calculation, the calculation would be logically discontinuous; a fatal problem to any branch of science.

          • The conversion is based on a model Frank, not on anything else. Satellite radiance-temperature models are just another example of GIGO.

            Why don’t you explain to all of us the relationship between radiance error and temperature error? Is it even linear, or non-linear?

            I’ll trust Roy Spencer on this more than you.

          • Mike, Roy’s satellite method uses radiance to derive temperature.

            You call that GIGO. Fine.

          • I’ll repeat my question to you Frank, you didn’t answer it…
            .
            Why don’t you explain to all of us the relationship between radiance error and temperature error? Is it even linear, or non-linear?

          • Mike Borgelt claims: “satellites don’t measure temperature … The conversion is based on a model Frank, not on anything else. Satellite radiance-temperature models are just another example of GIGO.”

            Well Mike, what is temperature? There are at least two technical definitions: one from thermodynamics based on entropy and one from the kinetic theory of gases (temperature is proportional to the mean kinetic energy of a large group of colliding molecules).

            What does a traditional mercury thermometer measure? The thermal expansion of mercury – not temperature. There is a linear model and error in converting expansion to temperature.

            What does a thermocouple measure? A voltage, not temperature. There is a model and error in converting voltage to temperature.

            What does an infrared thermometer (used in the human ear canal) measure? The RADIANCE of thermal infrared photons emitted by the skin arriving at the surface of a detector. There is a model (Planck’s Law) and error in converting radiance to temperature.

            You can measure the temperature of an enclosed gas by measuring its pressure. There is a model and error in converting pressure to temperature.

            You can, as Galileo did, measure temperature by density; there is a model and error in converting density to temperature. You can buy one of these (glass balls with colored liquid floating or sinking in oil) in curiosity shops. There is a model and error here too.

            The microwave sounding units on satellites are another valid way of measuring the average temperature in a section of the atmosphere. There is a model (Planck’s Law) and error in converting these radiances to temperature. The results agree with the thermocouples carried aloft by radiosondes. There are massive technical problems dealing with drifting satellites and challenges dealing with aging MSUs.

            The real GIGO problem is with the ignorant garbage we put in our minds, and the garbage that comes out when we communicate. If the only things we retain are information from unreliable sources that agrees with our biases and deeply held beliefs, then what comes out has nothing to do with science. Scientists are required to confront the problem of confirmation bias.

          • Frank (not Pat Frank) says: ”The microwave sounding units on satellites are another valid way of measuring the average temperature in a section of the atmosphere.”

            Your problem Frank is that microwave sounding units do not measure temperature. Planck’s law does not deal with microwave brightness, it deals with spectral density. You know full well that the emissions from oxygen molecules are not those of a “black body”, which is what Planck’s law applies to. Planck’s law is not linear, and therefore the error in microwave brightness measurements is not linearly related to the error in the temperature calculated from this measurement.

            You say: ”There are massive technical problems dealing with drifting satellites and challenges dealing with aging MSUs.” and I wholeheartedly agree with you. Mr. Pat Frank seems to think that he has a handle on the error bounds of these satellite measurements, when in fact he hasn’t a clue. I trust what Roy Spencer says about this, much more than what Pat Frank says.
            ..
            The GIGO comes from the multitude of parametric fudge factors the UAH and RSS “models” use to convert the microwave readings into “temperature.” These fudge factors have been determined by curve fitting the radiosonde readings with the data from the satellites.
            ..
            Lastly, all of this satellite measurement is then used as an estimator for GAST, which brings in the two factors of the variance of the individual microwave reading versus the standard error term of the estimator.

            So, not only do we have a “model” with non-linear error propagation for generating temps, we have the fact that said model was constructed by curve fitting to generate a statistical estimator for GAST. Then Mr. Pat Frank has the audacity to claim he knows more about the error in this than Roy Spencer.

          • Your question is irrelevant, Mike. The conversation is about measurement error.

            But FYI, here‘s Roy Spencer’s explanation of method. It is he you should have asked concerning method, since it’s his view you tried to defend.

          • Pat Frank says: “The conversation is about measurement error.” ……..Yes it is.

            My question is: “Is it even linear, or non-linear?”

            You say: “Your question is irrelevant”………No, it’s absolutely relevant.
            ….
            You claim: “Errors in radiance convert into errors in temperature.”……….good, so you know about the relationship.
            ….
            So answer the question, since you know all about it………is the conversion linear, or non-linear?

          • Mr. Pat Frank, the chemist that thinks he knows all about measurement error cannot answer the simple question: “What is the relationship between the measured error by satellites of radiance and the derived temperature error? Is it linear, or non-linear?” Got a formula?

          • Mike Borgelt complained about this statement: ” the true error in any given measurement of the field instrument is unknown.”

            “The precision and the accuracy of the temperature sensors is known. In fact you can obtain a calibration trail for each.”

            Sure, but these are laboratory assessments. The reading of AIR temperature by a sensor can be changed by conditions in the field: Direct sunlight, wind, shadows, height above the ground, nearby sources of heat or cooling, the nature of the ground: grass, dirt, blacktop. On a sunny, calm day, air adjacent to the ground can be tens of degC higher than the air around your head. Colder air sinks into local hollows on still nights. Until we learned to put thermometers in adequately ventilated enclosures a specified height above ground with low vegetation, shielded from direct sunlight and far away from buildings with a significant heat capacity, measurement of air temperature was an irreproducible process. Even changing from reading a min/max thermometer in the morning to the evening has a significant effect.

          • Yes Frank, that is why climatologists use anomalies instead of absolute readings from any given site to measure changes in climate. By using anomalies, it eradicates most of the issues you bring up.

          • Mike Borgelt wrote: Yes Frank, [the effect of station siting on temperature readings] is why climatologists use anomalies instead of absolute readings from any given site to measure changes in climate. By using anomalies, it eradicates most of the issues you bring up.

            Nevertheless, when the data from nearby stations are compared, essentially all station records show mostly undocumented discontinuities/breakpoints that are judged far too unlikely to have occurred by chance. Many records show breakpoints averaging once every decade! These breakpoints are hypothesized to represent step-function changes in systematic error and are corrected. While correction sometimes warms and sometimes cools the present relative to the past by large amounts (0.5 degC is not unusual), the net result is to add 0.2 degC of warming to the overall 20th-century land record. So systematic errors are a non-trivial issue.

        • Pat: Your formula for standard statistical propagated error is derived based on the ASSUMPTION that e_a and e_b are random noise. When talking about systematic error, there is a constant relationship between the error terms! That is why the word SYSTEMATIC is used. You cannot use this formula to analyze systematic error.

          For simple random error in a linear process, we say:

          y_i = m*x_i + b + e_i

          But for systematic errors, every measurement of y is off by the same amount e_s, the systematic error:

          y_i = m*x_i + b + e_s + e_i

          Mathematically, when you do a least-squares fit to this data, the systematic error only biases the y-intercept (b), not the slope/trend (m):

          y_i = m*x_i + (b + e_s) + e_i

          That systematic bias doesn’t need to be present in every measurement. Suppose we are talking about rising temperature (y) vs time (x). If 1/4 of days are calm and there is a CONSTANT UHI bias on calm days of (e_s) AND the fraction of calm days doesn’t change with time, the end result will be:

          y_i = m*x_i + (b + e_s/4) + e_i

          The trend will be unchanged by e_s.

          You can certainly construct an artificial dataset with these properties and prove this for yourself.
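
          Taking up that suggestion, here is a minimal sketch with invented numbers (trend, intercept, noise level, and bias are all arbitrary), showing that a constant offset added to every reading moves the fitted intercept by exactly that offset and leaves the fitted slope untouched:

```python
import random

random.seed(42)

def ols(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

m_true, b_true, e_s = 0.02, 14.0, 1.5   # true trend, intercept, and constant systematic error
xs = list(range(200))
noise = [random.gauss(0, 0.3) for _ in xs]

clean  = [m_true * x + b_true + e for x, e in zip(xs, noise)]
biased = [y + e_s for y in clean]        # every reading too high by e_s

(m1, b1), (m2, b2) = ols(xs, clean), ols(xs, biased)
print(f"clean : slope {m1:.4f}, intercept {b1:.3f}")
print(f"biased: slope {m2:.4f}, intercept {b2:.3f}")  # same slope, intercept shifted by e_s
```

          The slope depends only on deviations from the means, and a constant offset cancels out of those deviations, so the trend is algebraically immune to it.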

          • Systematic means deterministic, Frank. It doesn’t mean constant.

            When systematic error is due to uncontrolled variables, as is true for all the USHCN earth stations, and very likely so for the satellite temperatures, then the error is both not constant and of unknown magnitude.

            The propagation formula is appropriate.

            You’re making the standard folk-tale argument of climate modelers. They have no concept.

          • Pat: Trend assessment by OLS is based on the ASSUMPTION that the error term (e_i above) is randomly distributed and has a mean of zero. Right?

            When you have a constant systematic error as mathematically formulated above, I’ve provided the correct math. Right?

            Now, I believe you are saying that my systematic term s is not truly a constant, that perhaps it should be written as a noisy systematic error (s + s_i), or a partially time-dependent noisy systematic error (s + s(t) + s_i). The s_i terms add to the e_i terms and become the typical noise that we already know how to deal with. Right?

            The constant systematic error term s affects the y-intercept, but not the trend. Right?

            If so, I’ll enthusiastically agree with you that a time-dependent systematic error s(t) is a real problem. I don’t know whether the term “systematic error” implies a constant s or a time-dependent s(t); math is a more precise language than English. And I’ll gladly agree that we don’t know the relative sizes of s and s(t). I’ve repeatedly said that a constant bias doesn’t change the trend; only a changing bias matters. Andy’s surface station project is innately flawed because a changing bias can’t be detected from the quality of today’s station site.

            The guys doing breakpoint correction are assuming that s(t) is constant except for discrete discontinuities at certain points in time. However, these discontinuities could be biases that grow with time and are suddenly corrected by station maintenance.
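
            That distinction is easy to check with synthetic data (all numbers here are invented): a bias that grows steadily with time adds directly to the fitted trend, while the same drift reset to zero at regular “maintenance” intervals largely cancels out of it:

```python
import random

random.seed(0)

def ols_slope(xs, ys):
    """Slope of an ordinary least-squares fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

m_true, drift = 0.02, 0.005          # true trend; bias growth per time step
xs = list(range(300))
noise = [random.gauss(0, 0.3) for _ in xs]
true_y   = [m_true * x + e for x, e in zip(xs, noise)]
drifting = [y + drift * x for x, y in zip(xs, true_y)]          # s(t) = drift * t, never corrected
sawtooth = [y + drift * (x % 50) for x, y in zip(xs, true_y)]   # same drift, reset every 50 steps

s_true  = ols_slope(xs, true_y)
s_drift = ols_slope(xs, drifting)
s_reset = ols_slope(xs, sawtooth)
print(f"true trend       : {s_true:.4f}")
print(f"uncorrected drift: {s_drift:.4f}")   # inflated by ~drift
print(f"drift with resets: {s_reset:.4f}")   # close to the true trend
```

            The reset case produces a record full of breakpoints yet a nearly correct trend, consistent with the maintenance hypothesis above; the uncorrected drift is the genuinely damaging case.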

        • Pat: Your formula for standard propagated error is only valid when e_a & e_b represent random noise. By definition, systematic error is not random!

          Let’s consider extracting a trend for temperature (y) vs time (x) data of the form

          y_i = m*x_i + b + e_i

          We do a least-squares fit and everything turns out right as long as the e_i terms are randomly distributed with a mean of zero. Now let’s add a systematic error which makes all readings too high by s, a constant systematic error independent of i and x.

          y_i = m*x_i + b + s + e_i

          After performing a least-squares fit, we get the same value for the slope/trend (m) and a systematically biased value for the y-intercept, equal to b + s. When we are dealing with temperature anomalies and only care about the trend, a constant systematic error doesn’t interfere with finding the correct trend. The error doesn’t even have to be present in every reading. If UHI increases the temperature by s degrees on calm days and not at all on windy days, and if calm days make up 1/4 of the record throughout the entire period, linear regression will provide the correct slope and a y-intercept of b + s/4. Systematic errors of this type never interfere with calculating an accurate trend as long as they are constant over the entire interval. If you don’t believe me, try it with some pseudo data whose properties match those described above.
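
          Here is such a pseudo-data check (all values arbitrary), applying a bias s only on “calm” days that make up a fixed quarter of the record:

```python
import random

random.seed(7)

def ols(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

m, b, s = 0.02, 10.0, 2.0            # true trend, intercept, and UHI bias on calm days
xs = list(range(200))
noise = [random.gauss(0, 0.3) for _ in xs]
true_y = [m * x + b + e for x, e in zip(xs, noise)]
biased = [y + (s if x % 4 == 0 else 0.0) for x, y in zip(xs, true_y)]  # every 4th day is calm

(m1, b1), (m2, b2) = ols(xs, true_y), ols(xs, biased)
print(f"slopes    : {m1:.4f} vs {m2:.4f}")   # essentially identical
print(f"intercepts: {b1:.3f} vs {b2:.3f}")   # shifted by about s/4
```

          Because the calm-day pattern is uncorrelated with time, the bias shifts the intercept by about s/4 in magnitude and leaves the slope essentially unchanged.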

          • Your formula for standard propagated error is only valid when e_a & e_b represent random noise.

            Not correct. Propagation is the recommended way to determine the uncertainty arising from any repeated appearance of systematic error. Read through Section F.2.4.5 here.

            In the case of the satellite temperatures, and of the USHCN stations, the true error in any given measurement of the field instrument is unknown.

            All one can do is estimate the range of systematic error by way of calibrations under conditions that duplicate the field. The calibration uncertainty is then applied to measurements obtained from the field instrument.

            That uncertainty is propagated in the usual way, when measurements are combined, averaged, differenced, etc. However, as the errors are non-random, the uncertainty never, ever averages away.

            You can find a very useful set of definitions and uses of error from a physics perspective, here.

            Notice that the final entry, the “law of propagation of uncertainty,” does not limit the propagation to random error, but rather indicates application to uncertainty as a general case.
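
            The practical difference can be illustrated numerically (all magnitudes here are invented). Each trial below averages 100 readings; purely random error shrinks the scatter of the mean by about 1/sqrt(N), while an unknown offset shared by every reading in a trial passes straight through the average:

```python
import random
import statistics

random.seed(1)
N, trials = 100, 2000
sigma = 0.5        # 1-sigma random error per reading
bias_sigma = 0.5   # 1-sigma of the unknown common (systematic) offset

means_random, means_systematic = [], []
for _ in range(trials):
    xs = [random.gauss(0, sigma) for _ in range(N)]
    means_random.append(statistics.fmean(xs))
    offset = random.gauss(0, bias_sigma)   # the same unknown offset in every reading
    means_systematic.append(statistics.fmean(x + offset for x in xs))

print(f"scatter of mean, random error only : {statistics.stdev(means_random):.3f}")      # ~ sigma/sqrt(N)
print(f"scatter of mean, with shared offset: {statistics.stdev(means_systematic):.3f}")  # ~ bias_sigma
```

            Averaging beats down the random component but leaves the systematic component untouched, which is why a calibration-derived uncertainty must be carried through rather than divided by sqrt(N).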

          • ” the true error in any given measurement of the field instrument is unknown.”

            Pat Frank shows he is ignorant. The precision and the accuracy of the temperature sensors is known. In fact you can obtain a calibration trail for each.

        • Pat: Thank you for the links to definitive sources of information about statistical uncertainty. However, as best I can tell, all of these sources are referring to the uncertainty in one quantity. We are dealing with a DIFFERENT PROBLEM, calculating a trend from many measurements. So these links are irrelevant to the problem of calculating temperature trends.

          To the best of my knowledge, each day’s high and low are each the result of a single measurement. All of the measurements for each month are averaged. We enter the average for each month into a linear regression without taking into account the standard error of the monthly mean. We can do this because each monthly mean analyzed by a linear regression is assumed to have an error (e_i) that can arise from BOTH random error (standard error in the monthly mean) and/or random deviation from a linear relationship between x and y (or time and temperature, if you prefer). In a linear regression, we assume there is no systematic error because the standard assumption is that e_i has a mean of zero and is randomly distributed about zero.

          If there is a systematic error in each monthly mean, it can be dealt with as I described above, by including a constant systematic error (s), a noisy systematic error (s + s_i), or a time-dependent noisy systematic error (s + s(t) + s_i).

          If there is a constant systematic error, the note in your first link advocates subtracting it from each y (temperature). This is what is done when temperature records are homogenized by hypothesizing that undocumented breakpoints are caused by a step-function change in systematic error. Without documentation, there is no way to test this hypothesis. An alternative hypothesis is that breakpoints are caused by systematic error that increases with time and is abruptly corrected by maintenance.

  37. NASA climate was basically set up, and the people selected, by 5x-arrested eco-nutter Hansen. So there’s no doubt that everyone there is a dyed-in-the-wool eco-nutter who is spending all their time trying to “prove” something that is unprovable, because it’s not happening.

  38. “with the author of the article adding, ‘New evidence suggests one of the most important climate change data sets is getting the right answer.’ ”

    And there’s the problem. Good data IS the right answer. When you see the purpose of data gathering as being to “get the right answer,” then you are turning the scientific method on its head.

  39. My analysis of “CERES_SSF1deg-Month_Terra-MODIS_Ed4A_Skin temperature 200003-201809.nc”: 12-month mean of 0.12 +/- 0.06 °C/decade

  40. Does this mean I can use any recent El Nino peak as starting point, run my graph until before the next one starts and claim it proves the world is cooling?

  41. -A Washington Post headline read, “Satellite confirms key NASA temperature data: The planet is warming — and fast,” with the author of the article adding, “New evidence suggests one of the most important climate change data sets is getting the right answer.”-

    Pretty much says everything right there: tons of confirmation bias about the adjusted data giving the “right answer,” and denial that the industrial era sits at the tail of a rapid cooling, with the temperature rise largely a rebound.

  42. Much the most interesting part of the new NASA report seems to be Figure 2, showing zonal differences in warming, 2003-2017. This shows a sharp fall in temperature in the most southerly latitudes and a sharp rise in the most northerly, 80 to 90 degrees north. The rest of the world, presumably 80 to 90 per cent of it, shows hardly any change. Surely this means it is wrong to talk about GLOBAL warming; it is only Arctic warming.

    Is there a publication which shows the WEIGHTS of the different latitudes and continents in the published global figures?

Comments are closed.