Does NASA’s Latest Study Confirm Global Warming?

Some heated claims were made in a recently published scientific paper, “Recent Global Warming as Confirmed by AIRS,” authored by Susskind et al. One of the co-authors is NASA’s Dr. Gavin Schmidt, keeper of the world’s most widely used dataset on global warming: NASA GISTEMP.

Press coverage for the paper was strong. ScienceDaily said that the study “verified global warming trends.” U.S. News and World Report’s headline read, “NASA Study Confirms Global Warming Trends.” A Washington Post headline read, “Satellite confirms key NASA temperature data: The planet is warming — and fast,” with the author of the article adding, “New evidence suggests one of the most important climate change data sets is getting the right answer.”

The new paper uses the AIRS remote sensing instrument on NASA’s Aqua satellite. The study describes a 15-year dataset of global surface temperatures derived from that satellite sensor. The temperature trend computed from those data is +0.24 degrees Celsius per decade, the warmest of the major climate analyses.

Oddly, the study didn’t compare its results with the two other long-standing satellite datasets, from Remote Sensing Systems (RSS) and the University of Alabama in Huntsville (UAH). That’s an indication of the personal bias of co-author Schmidt, who in the past has repeatedly maligned the UAH dataset and its authors because their findings didn’t agree with his own GISTEMP dataset. In fact, Schmidt’s bias was so strong that when invited to appear on national television to discuss warming trends, he refused, in a fit of spite, to appear at the same time as the co-author of the UAH dataset, Dr. Roy Spencer.

A breakdown of several climate datasets, shown below in degrees Celsius per decade, indicates significant discrepancies among the estimated warming trends:

  • AIRS: +0.24 (from the 2019 Susskind et al. study)
  • GISTEMP: +0.22
  • ECMWF: +0.20
  • RSS LT: +0.20
  • Cowtan & Way: +0.19
  • UAH LT: +0.18
  • HadCRUT4: +0.17

Which climate dataset is the right one? Interestingly, the HadCRUT4 dataset, which is managed by a team in the United Kingdom, uses most of the same data GISTEMP uses from the National Oceanic and Atmospheric Administration’s Global Historical Climatology Network. Among the major surface datasets, HadCRUT4 shows the lowest temperature increase, one that’s nearly identical to UAH.

Critics of NASA’s GISTEMP have long said its higher temperature trend is due to scientists applying their own “special sauce” at the NASA Goddard Institute for Space Studies (GISS), which Schmidt directs. What is even more suspect is that in this, the first time Schmidt has dared to compare his overheated GISTEMP dataset to a satellite dataset, he chose the AIRS data, which span only 15 years, whereas RSS and UAH span 40 years. Furthermore, Schmidt’s use of a 15-year dataset conflicts with the standard practice of the World Meteorological Organization, which defines climate “as the statistical description in terms of the mean and variability of relevant quantities over a period of time… The classical period is 30 years…”

Why would Schmidt, who bills himself as a professional climatologist, break with the standard 30-year period? It appears he did it because he knew he could get an answer he liked, one that’s close to his own dataset, thus “confirming” it.

The 15-year period in this new study is too short to say much of anything of value about global warming trends, especially since there was a record-setting warm El Niño near the end of that period, in 2015 and 2016. During that El Niño, warm water heated by the Sun collected in the equatorial Pacific and then dispersed its heat into the atmosphere, warming the planet. Greenhouse gas induced “climate change” had nothing to do with it; it was a natural heating process that has been going on for millennia.

Figure 1. Panel A (left): NOAA sea surface temperature data showing the peak of the 2015/16 El Niño event in the equatorial Pacific Ocean. Panel B (right): Figure 1 from Susskind et al. 2019, with annotations added to illustrate the correlation with the peak of the 2015/16 El Niño in the AIRS data.

As you can see in Figure 1 above, there has been rapid cooling from that El Niño-induced peak in 2016, and the global temperature is now approaching what it was before the event. Had there not been an El Niño event in 2015 and 2016, creating a spike in global temperature, it is likely Schmidt wouldn’t get a “confirming” answer for a 15-year temperature trend. As you can see in the figure above on Panel B, the peak occurred in early 2016, and the data trend before that was essentially flat.

It appears that the authors of the Susskind et al. paper were motivated by timing and opportunity. It was crafted to advance an agenda, not climate science.


Anthony Watts is a senior fellow for environment and climate at The Heartland Institute.

140 Comments

Neville
May 11, 2019 5:44 pm

Thanks Nick above. But if I choose to go back just one year, to 2002.1, and end at 2019.5, we find that UAH v6 is back to 0.013 °C/decade and RSS v3 is just 0.090 °C/decade. I think ENSO changes are leading the charge.
This is using the York University trend tool.

http://www.ysbl.york.ac.uk/~cowtan/applets/trend/trend.html
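
For anyone who wants to reproduce this kind of window sensitivity offline, here is a minimal sketch of the OLS trend arithmetic such tools perform. The anomaly series below is synthetic; real UAH/RSS monthly files must be downloaded separately.

```python
# Minimal sketch: ordinary least-squares trend over a chosen window,
# reported in degrees C per decade. The anomaly series is synthetic.
import numpy as np

def decadal_trend(years, anomalies):
    """OLS slope, converted from degrees per year to degrees per decade."""
    return np.polyfit(years, anomalies, 1)[0] * 10.0

rng = np.random.default_rng(0)
years = np.arange(1979.0, 2019.5, 1.0 / 12.0)
anoms = 0.013 * (years - years[0]) + rng.normal(0.0, 0.15, years.size)

# The computed trend depends strongly on the chosen start date:
for start in (1979.0, 2002.1):
    m = years >= start
    print(f"from {start}: {decadal_trend(years[m], anoms[m]):+.3f} C/decade")
```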

May 11, 2019 6:23 pm

“Oddly, the study didn’t compare two other long-standing satellite datasets”

Those measure the troposphere, not the surface.

“That’s an indication of the personal bias of co-author Schmidt”

No, but this and other sentences reflect the common bashing of scientists that is central to online climate science criticism.

“who in the past has repeatedly maligned the UAH dataset and its authors because their findings didn’t agree with his own GISTEMP dataset”

The result of this, if you’ll recall, was UAH being adjusted to look more like GISS, similar to today, though it was a long process. At one point in time UAH suggested the troposphere was cooling, and supporters were certain it was the surface record that was wrong. (Remember the days when it was all about proving that the world was not actually warming?)

“HadCRUT4 shows the lowest temperature increase, one that’s nearly identical to UAH”

“GISTEMP has greater polar coverage than MLOST or HadCRUT4, in part due to the inclusion of Antarctic ‘READER’ stations and in part due to the interpolation method”
https://climatedataguide.ucar.edu/climate-data

“Critics of NASA’s GISTEMP have long said its higher temperature trend is due to…”

(the same thing internet critics say is behind every scientific observation or result they don’t like.)

“there has been rapid cooling from that El Niño-induced peak in 2016”

Indeed. Has “No warming since 2016!” been officially endorsed as a talking point? How soon after 1998 was it before “No warming since 1998!” started up across the main channels? That had a pretty effective run for a talking point so seems worth a similar playbook. I’d think Heartland has the most expertise on making these calls…

May 11, 2019 6:51 pm

“Why would Schmidt, who bills himself as a professional climatologist, break with the standard 30-year period? It appears he did it because he knew he could get an answer he liked, one that’s close to his own dataset, thus “confirming” it.”

Bingo!

Excellent article Anthony!

SAMURAI
May 11, 2019 6:56 pm

According to UAH6.0, there hasn’t been a discernible global warming trend since mid-1996, if the 2015/16 Super El Niño event is removed.

Since UAH6.0 started 40 years ago, the actual global warming trend is 0.13 °C/decade, which is about half of AIRS’ trend of 0.24 °C/decade.

The PDO, AMO, and AOO are all about to start (or have already started) their 30-year cool cycles, the next La Niña cycle should be a strong one, and a 50-year Grand Solar Minimum just started.

All these global cooling phenomena will very likely cause significant global cooling, which will finally put an end to one of the biggest and most expensive Leftist Hoaxes in human history…

“Truth is the daughter of time.”

Anthony Banton
Reply to  SAMURAI
May 12, 2019 12:55 am

“According to UAH6.0, there hasn’t been a discernible global warming trend since mid-1996, if the 2015/16 Super El Niño event is removed.”

Why is it so difficult to realise that El Niños push the GMST trend up (though UAH is not the surface) and La Niñas push it down?
This is on top of a general warming trend.
There is no law that says which comes first, an El Niño or a La Niña.
And saying which one to start/end with does not answer any questions about the long-term trend.
The long-term trend is chosen precisely because it eliminates short-term natural variability – and yes, that also covers the period of the ’98 El Niño, which was followed by a long period of negative PDO that suppressed El Niños.

Reply to  SAMURAI
May 12, 2019 10:09 am

“According to UAH6.0, there hasn’t been a discernible global warming trend since mid-1996, if the 2015/16 Super El Niño event is removed”

(a) This isn’t true; (b) you can’t just adjust the warmth out and say that shows there wasn’t warming – that’s circular reasoning. For example, why, if the 1998 and 2016 El Niños were similarly sized (according to ENSO metrics), did 2016 end up ~0.2 °C warmer?

Yes, UAH is among the lowest warming trends, but it shows warming (as you appear to know). It also shows progressive cool bias relative to RATPAC (radiosondes) since 2000 or so, so skepticism about which troposphere record is correct is reasonable.

A lot of PDO cooling in the past 30 years hasn’t slowed things down any. The problem is physical – shifting wind and ocean currents can’t persistently affect global temperatures because they mainly just slosh heat around. They can’t, for example, heat the global oceans to 2,000 m or reverse the (radiative) heating that is happening there.

“Truth is the daughter of time.”

Indeed. And what truths have kept getting reinforced over time so far? For how many years or decades now have those who aren’t happy with what’s happening predicted imminent global cooling?

Darren Potter
May 11, 2019 6:58 pm

“One of the co-authors is NASA’s Dr. Gavin Schmidt,”

No need to read any further. Wasted spending on Alarmism Politics.

Frank
May 11, 2019 9:39 pm

Andy wrote: “As you can see in Figure 1 above, there has been rapid cooling from that El Niño-induced peak in 2016, and the global temperature is now approaching what it was before the event. Had there not been an El Niño event in 2015 and 2016, creating a spike in global temperature, it is likely Schmidt wouldn’t get a “confirming” answer for a 15-year temperature trend. As you can see in the figure above on Panel B, the peak occurred in early 2016, and the data trend before that was essentially flat.”

Andy, blaming the trend on the 2015/16 El Niño is nonsense. If you start with the 2001 to 2012 period, there is a negligible warming trend. As soon as you start adding years after 2012, you get a rising trend. The problem is that the average temperature since 2014 has been about 0.2 K higher, even if you ignore the peak warming associated with the El Niño. If you replaced the temperature for the six months before and after 1/2016 with the average temperature since 2014 outside this period, you would still have a large warming trend. There is a rapid increase of almost 0.2 K being obscured by the dramatic El Niño. Almost every month since the El Niño has been warmer than the warmest months of 2001-2012 (the Pause), and on average, outside the El Niño, they have been about 0.2 K higher.
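
Frank’s point is easy to check with a toy series: give a flat record a ~0.2 K step in 2014 plus a 2016 El Niño spike, mask out the spike months, and the full-window trend is still strongly positive. The numbers below are illustrative only, not real data.

```python
# Toy check: a 0.2 K step after 2014 produces a positive 15-year trend
# even when the El Nino spike itself is masked out. Illustrative only.
import numpy as np

years = np.arange(2003.0, 2018.0, 1.0 / 12.0)
temps = np.where(years < 2014.0, 0.0, 0.2)                    # flat, then 0.2 K step
temps = temps + 0.4 * np.exp(-((years - 2016.0) ** 2) / 0.1)  # El Nino spike

# Replace the six months either side of 1/2016 with the non-spike post-2014 mean.
spike = np.abs(years - 2016.0) <= 0.5
post2014 = (years >= 2014.0) & ~spike
temps_masked = temps.copy()
temps_masked[spike] = temps[post2014].mean()

for label, series in (("with spike", temps), ("spike masked", temps_masked)):
    trend = np.polyfit(years, series, 1)[0] * 10.0
    print(f"{label}: {trend:+.3f} K/decade")
```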

Richard M
Reply to  Frank
May 12, 2019 5:49 am

Frank, is there a warming trend from 2001-2012? There is a cooling trend in the UAH data. In fact, the trend from 2001-2015 is down slightly and that is when the El Nino effects started to be felt.

http://www.woodfortrees.org/plot/uah6/from:2001/to:2015/every/plot/uah6/from:2001/to:2015/trend

Yeah, I know it depends on what data you use. But it looks like the AIRS data match UAH pretty closely.

Frank
Reply to  Richard M
May 12, 2019 11:09 pm

Richard: Yes, it does make a difference which temperature record you use and precisely when you start and stop. However, my comment was directed towards Andy’s statement that cooling since the 2015/6 El Nino has returned temperature to the temperature of the Pause period. That is grossly incorrect. Since the El Nino ended in late 2016, the average temperature has been about 0.2 K higher than during the Pause.

John Robertson
May 11, 2019 9:47 pm

“Does NASA’s Latest Study Confirm Global Warming?”
Does a bear sh** in the woods?
Of course it does, and if the numbers fail to support the claimed warming, they will be adjusted until they do.
Policy-based evidence manufacturing is a serious business in the modern bureaucracy.

Until this Agency is reset, they will support the narrative regardless of the actual evidence.

The CAGW meme is clear evidence our bureaus have declared war on the tax-paying citizen.
Serving a higher purpose?
Mass firings are the only cure, the only way to explain to these fools that government is not a place for religious zealots.

By using their positions to force change upon the citizen, they have destroyed the institutions.
Institutions they corrupted to serve the “cause,” demonstrating their institute has no useful function.

The corruption this mass hysteria has revealed proves once again how dangerous government is.
Big government will consume until all resources are extinct.
Taxing Air.

Reply to  John Robertson
May 12, 2019 10:13 am

“The corruption this mass hysteria has revealed”

meaning

“The (secret global collusion and) corruption that we REPEATEDLY insist exists reveals that we do not have to accept observational science if we do not wish to.”

Richard M
May 11, 2019 10:08 pm

As many have indicated, this is just plain nonsense. It is measuring noise. What is key is that the very same people who used to complain about the pause being due to a super El Niño at the beginning of the trend are now perfectly happy to use a super El Niño at the end of the trend.

This alone shows how dishonest the entire climate cult has become.

Reply to  Richard M
May 12, 2019 2:18 pm

Do you have an example where “these people” (scientists) do what you claim (dishonestly count the El Nino at end and not the one at the beginning?)

The actual claim that I hear them state is that you should *either* (a) leave ENSO effects in the data consistently, or (b) if you want to remove ENSO statistically (as a way to analyze the data) do it consistently to both El Nino and La Nina.

In my experience, it is only critics (on sites like this) which argue that you should remove only the 2016 El Nino and nothing else, in order to argue the pause has continued. Which (if true) “alone shows how dishonest the entire climate cult has become”, does it not?

In general, your comments sound like you think the online hypothesis from science critics, that the “pause implies global warming has stopped”, has been unfairly falsified by the real world data because you shouldn’t accept the El Nino at the end. We’ve had La Nina since then though, and temps are still higher than ‘pause’ levels a decade earlier.

What you allege certainly isn’t what is happening here. They are just comparing the AIRS record to the GISS record.
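
For concreteness, removing ENSO “consistently” (option (b) above) is usually done as a multiple regression of the anomaly series on a lagged ENSO index, after which the residual trend is inspected. Here is a self-contained sketch on synthetic data; the index, lag, and coefficients are placeholders, not a real analysis.

```python
# Sketch of consistent ENSO removal: regress temperature on time AND a
# lagged ENSO index together, so El Nino and La Nina are treated alike.
import numpy as np

rng = np.random.default_rng(1)
n = 480                                 # 40 years of monthly data
t = np.arange(n) / 12.0                 # time in years
enso = rng.normal(0.0, 1.0, n)          # placeholder ENSO index
lag = 4                                 # months; ENSO assumed to lead temperature

temps = 0.015 * t                       # underlying 0.15 C/decade trend
temps[lag:] += 0.1 * enso[:-lag]        # lagged ENSO imprint
temps += rng.normal(0.0, 0.05, n)       # weather noise

# Multiple regression: temperature ~ intercept + time + lagged ENSO
X = np.column_stack([np.ones(n - lag), t[lag:], enso[:-lag]])
coef, *_ = np.linalg.lstsq(X, temps[lag:], rcond=None)
print(f"trend with ENSO regressed out: {coef[1] * 10:+.3f} C/decade")
print(f"raw trend, ENSO left in:       {np.polyfit(t, temps, 1)[0] * 10:+.3f} C/decade")
```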

Richard M
Reply to  Geoff M Price
May 16, 2019 5:30 am

Geoff, you are living in a fantasy world. The La Nina events were minimal and likely had very little impact on global temperature. Even NOAA admitted the warm water from the super El Nino hung around until 2018.

That means we never really had a chance to completely return to the baseline before the current El Nino started last September.

I agree that all ENSO effects should be removed from the data. When that is done (along with volcanoes), there’s been no warming in 20 years and only 0.25 °C since 1980.

DocSiders
May 11, 2019 10:09 pm

It’s all ENSO and noise. None of the models show stepwise GATs. They show steady trends from a forcing plus an assumed but disproven amplification.

ren
Reply to  DocSiders
May 11, 2019 11:38 pm

What is happening with the sea surface temperature in the southern hemisphere and how does this affect ENSO?

tom0mason
May 12, 2019 12:04 am

It’s not like there is no ice at the Arctic …

Quite a lot, considering how long it’s been since we left the LIA.
With only around 1 °C of warming since the end of the LIA, what are these numbskulls thinking of? Surely it should be warming up.
‘Climate Science™’ of unbounded arrogance and hubris.

ren
May 12, 2019 12:06 am

The Earth is trying to maintain a favorable temperature, but the Sun is stronger.

Phoenix44
May 12, 2019 2:03 am

Calculating a decadal trend using 15 years of data makes no sense. And why do we use these trends over arbitrary periods anyway? The data may show warming, but if they do, they show that it is in no way smooth or even or predictable. If anything, the data show step changes followed by declines or plateaus. That doesn’t fit very well with the CO2 claim, though, so instead we see these claims about trend warming that are wholly subject to cherry-picking, arbitrary smoothing and arguments about noise. In other words, a total lack of both clarity and rigour. And all because most of those studying the data are determined to fit it to their preconceived view of how the data should look.

Dave
May 12, 2019 7:20 am

Everyone concedes the planet is warming. That isn’t the question.

The questions begin with, how much effect does man have? How much do we contribute to the current rise?

The more important question is what role does CO2 play? If any? Considering that man’s contribution to atmospheric CO2 is less than 100 ppm, is that enough to have any effect on the climate? Does CO2 really have any feedbacks? Are they positive or negative or insignificant?

Finally there is the question of what man should do about it. Take drastic action now, without knowing the answers to the above questions, or wait and remediate?

May 12, 2019 8:26 am

Earth’s average temp has never been constant — it’s either rising or falling. OK, assume right now it’s rising. So?

Jim G
May 12, 2019 9:26 am

Slyentist: Noun
A scientific expert who makes data conform to the support of their hypothesis.

Data used is often unavailable and results are only replicated by other slyentists.

Kaiser Derden
May 12, 2019 12:03 pm

could someone please define “climate” by some sort of empirical measure? more “bad” weather is not measurable unless “bad” is defined … (same with severe …) … the only 2 climate extremes I see are temperature based … 1) snowball earth … and 2) not snowball earth with a thriving biosphere … we seem to have had severe weather going from one to the other in either direction …

If the warmists are claiming that the “climate change” we will see is more hurricanes, droughts, (or floods) or higher temperatures … then the record of the 20th century says we are only seeing somewhat higher temperatures (up to 1938 … lower since then) … so by any measure the “climate” has gotten milder, not more severe …

GregK
Reply to  Kaiser Derden
May 12, 2019 5:57 pm

Try this for size…..it outlines about 16 major climate types

https://www.nationalgeographic.org/encyclopedia/climate/
Then there’s local variation

So “Earth’s Climate ” ?

tom0mason
Reply to  Kaiser Derden
May 13, 2019 5:01 pm

Climate is how it was.
Weather is what you got.
And forecasts are what is wanted.

Forecasts are made by the person paid to make them.

May 12, 2019 12:15 pm

The satellite temps aren’t better than ±0.3 C.

The surface station data aren’t better than ±0.5 C (that’s a well-sited well-maintained unaspirated USHCN station).

And Gavin & Co. are yodeling about a 0.24 C change across 15 years.

Let’s see: they’re claiming to resolve an average 0.016 C annual change against ±0.3 C resolution.

The uncertainty is 18.8 times larger than the signal.

What_a_crock.

I’ve talked with Roy Spencer about the satellite uncertainty, by the way. He agrees with that ±0.3 C number (it comes from his work with John Christy). But like everyone else in the field, Roy thinks that taking anomalies subtracts away all the error and uncertainty.

It’s too funny.

The whole field lives on false precision. And then they have solemn discussions about the oracular meaning of it all.

Reply to  Pat Frank
May 12, 2019 1:50 pm

Frank, what is “too funny” is how you confuse an individual measurement of a physical temperature with a statistical estimator of GAST. Classic apples versus oranges. Not only that, but Roy is correct in that anomalies erase your often touted “systematic error.”

Flavio Capelli
Reply to  Mike Borgelt
May 12, 2019 8:00 pm

“an individual measurement of a physical temperature, with a statistical estimator of GAST”

And there lies the rub. Can a statistical estimator have a smaller standard error than the error on the individual measurements? If some conditions are met, yes, it can. But it’s wrong to assume it always does.

My first job was in an ISO17025 calibration laboratory, and that environment is rather fastidious when it comes to determination of uncertainties. The lesson I took home is that you should never assume the best case scenario unless you can prove it.

Again, using anomalies will remove some of the errors, but measurement error never stops propagating.

Reply to  Flavio Capelli
May 14, 2019 3:42 pm

Flavio, there is no possible way to make or have an individual measurement measure GAST (Global Average Surface Temperature). The only possible way to make such a measurement is with a statistical estimator.

Reply to  Mike Borgelt
May 13, 2019 9:37 am

Let’s see your demonstration that systematic error is removed from satellite temperatures by differencing, Mike.

Let’s also see you disprove the standard statistical propagated error of a difference, namely that for c = a − b, with errors e_a and e_b, the uncertainty in c is sqrt[(e_a)^2 + (e_b)^2].

Judging from your post, you don’t know what you’re talking about.

Systematic error is only known to subtract away when the error magnitude itself is known and is known to be constant.

That’s not the case with satellite temperatures, and is not the case with unaspirated USHCN stations.
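
The quadrature formula is easy to verify numerically for independent random errors; the disagreement in this thread is over whether it applies when a bias is shared by both terms, since a common bias cancels in a difference. A sketch showing both cases, with made-up numbers:

```python
# Monte Carlo check of uncertainty propagation for c = a - b.
# Case 1: independent random errors -> the quadrature sum applies.
# Case 2: a bias common to both terms -> it cancels in the difference.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
e_a, e_b = 0.3, 0.2
a = 10.0 + rng.normal(0.0, e_a, n)
b = 4.0 + rng.normal(0.0, e_b, n)

print(f"empirical sigma of a - b: {(a - b).std():.4f}")
print(f"sqrt(e_a^2 + e_b^2):      {np.hypot(e_a, e_b):.4f}")

s = rng.normal(0.0, 0.5, n)   # one shared bias realization added to both
print(f"with shared bias:         {((a + s) - (b + s)).std():.4f}")
```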

Reply to  Pat Frank
May 13, 2019 10:16 am

Roy Spencer is correct Mr. Frank…..go argue with him.

Reply to  Mike Borgelt
May 13, 2019 11:47 am

So you abandon your claim of knowledge, Mike. You’re just making an argument from authority.

I have spoken with Roy about the systematic error problem. He just shrugged it off.

They just assume the error is constant and disappears on differencing. The assumption is methodologically unjustified.

Reply to  Pat Frank
May 13, 2019 10:28 am

Frank….. satellites don’t measure temperature, so your arguments about propagating errors are invalid. Apples and oranges, sir.

Reply to  Mike Borgelt
May 13, 2019 11:50 am

Satellites measure radiance, Mike, which is converted into temperature. Errors in radiance convert into errors in temperature. Those errors get propagated.

The logic is coherent throughout, Mike.

If the errors could not propagate through the calculation, the calculation would be logically discontinuous; a fatal problem to any branch of science.

Reply to  Mike Borgelt
May 13, 2019 2:03 pm

The conversion is based on a model Frank, not on anything else. Satellite radiance-temperature models are just another example of GIGO.

Why don’t you explain to all of us the relationship between radiance error and temperature error? Is it even linear, or non-linear?

I’ll trust Roy Spencer on this more than you.

Reply to  Mike Borgelt
May 13, 2019 4:42 pm

Mike, Roy’s satellite method uses radiance to derive temperature.

You call that GIGO. Fine.

Reply to  Mike Borgelt
May 14, 2019 12:18 pm

I’ll repeat my question to you Frank, you didn’t answer it:

Why don’t you explain to all of us the relationship between radiance error and temperature error? Is it even linear, or non-linear?

Frank
Reply to  Mike Borgelt
May 14, 2019 2:10 pm

Mike Borgelt claims: “satellites don’t measure temperature … The conversion is based on a model Frank, not on anything else. Satellite radiance-temperature models are just another example of GIGO.”

Well Mike, what is temperature? There are at least two technical definitions: one from thermodynamics, based on entropy, and one from the kinetic theory of gases (temperature is proportional to the mean kinetic energy of a large group of colliding molecules).

What does a traditional mercury thermometer measure? The thermal expansion of mercury – not temperature. There is a linear model and error in converting expansion to temperature.

What does a thermocouple measure? A thermoelectric voltage, not temperature. There is a linear model and error in converting voltage to temperature.

What does an infrared thermometer (used in the human ear canal) measure? The RADIANCE of thermal infrared photons emitted by the skin arriving at the surface of a detector. There is a model (Planck’s Law) and error in converting radiance to temperature.

You can measure the temperature of an enclosed gas by measuring its pressure. There is a model and error in converting pressure to temperature.

You can, as Galileo did, measure temperature by density. You can buy one of these (glass balls with colored liquid floating or sinking in oil) in curiosity shops. There is a model and error here too, in converting density to temperature.

The microwave sounding units on satellites are another valid way of measuring the average temperature in a section of the atmosphere. There is a model (Planck’s Law) and error in converting these radiances to temperature. The results agree with the thermocouples carried aloft by radiosondes. There are massive technical problems dealing with drifting satellites and challenges dealing with aging MSUs.

The real GIGO problem is with the ignorant garbage we put in our minds, and the garbage that comes out when we communicate. If the only things we retain are information from unreliable sources that agrees with our biases and deeply held beliefs, then what comes out has nothing to do with science. Scientists are required to confront the problem of confirmation bias.
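
To make the radiance-to-temperature step concrete, here is an illustrative inversion of Planck’s law into a brightness temperature. The ~57 GHz frequency is typical of the oxygen-band channels used by MSU/AMSU, but the radiance value is made up, and this is emphatically not the UAH or RSS retrieval:

```python
# Illustrative only: invert Planck's law to get a brightness temperature
# from a spectral radiance. Frequency is typical of MSU/AMSU oxygen-band
# channels; the radiance value is made up.
import math

h = 6.62607015e-34   # Planck constant, J s
k = 1.380649e-23     # Boltzmann constant, J/K
c = 2.99792458e8     # speed of light, m/s

def brightness_temperature(radiance, nu):
    """T = (h*nu/k) / ln(1 + 2*h*nu^3 / (c^2 * B)), from Planck's law."""
    return (h * nu / k) / math.log1p(2.0 * h * nu**3 / (c**2 * radiance))

nu = 57.29e9    # Hz
B = 2.5e-16     # W m^-2 sr^-1 Hz^-1, made-up value (gives T near 250 K)

# A +/-1% radiance error maps to roughly +/-1% in temperature here: the
# relationship is non-linear in general but nearly linear at microwave
# frequencies, where h*nu/kT is small (the Rayleigh-Jeans regime).
for dB in (0.0, 0.01 * B, -0.01 * B):
    print(f"B = {B + dB:.3e} -> T = {brightness_temperature(B + dB, nu):.2f} K")
```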

Reply to  Mike Borgelt
May 14, 2019 3:32 pm

Frank (not Pat Frank) says: “The microwave sounding units on satellites are another valid way of measuring the average temperature in a section of the atmosphere.”

Your problem, Frank, is that microwave sounding units do not measure temperature. Planck’s law does not deal with microwave brightness; it deals with spectral density. You know full well that the emissions from oxygen molecules are not those of a “black body,” which is what Planck’s law applies to. Planck’s law is not linear, and therefore the error in microwave brightness measurements is not linearly related to the error in the temperature calculated from this measurement.

You say: “There are massive technical problems dealing with drifting satellites and challenges dealing with aging MSUs,” and I wholeheartedly agree with you. Mr. Pat Frank seems to think that he has a handle on the error bounds of these satellite measurements, when in fact he hasn’t a clue. I trust what Roy Spencer says about this much more than what Pat Frank says.

The GIGO comes from the multitude of parametric fudge factors the UAH and RSS “models” use to convert the microwave readings into “temperature.” These fudge factors have been determined by curve fitting the radiosonde readings to the data from the satellites.

Lastly, all of this satellite measurement is then used as an estimator for GAST, which brings in the two factors of the variance of the individual microwave readings versus the standard error term of the estimator.

So not only do we have a “model” with non-linear error propagation generating temps, we also have the fact that said model was constructed by curve fitting to produce a statistical estimator for GAST. And then Mr. Pat Frank has the audacity to claim he knows more about the error in this than Roy Spencer.

Reply to  Mike Borgelt
May 14, 2019 4:21 pm

Your question is irrelevant, Mike. The conversation is about measurement error.

But FYI, here‘s Roy Spencer’s explanation of method. It is he you should have asked concerning method, since it’s his view you tried to defend.

Reply to  Mike Borgelt
May 14, 2019 5:24 pm

Pat Frank says: “The conversation is about measurement error.” Yes, it is.

My question is: “Is it even linear, or non-linear?”

You say: “Your question is irrelevant.” No, it’s absolutely relevant.

You claim: “Errors in radiance convert into errors in temperature.” Good, so you know about the relationship.

So answer the question, since you know all about it: is the conversion linear, or non-linear?

Reply to  Mike Borgelt
May 14, 2019 5:29 pm

Mr. Pat Frank, the chemist who thinks he knows all about measurement error, cannot answer the simple question: “What is the relationship between the measured error by satellites of radiance and the derived temperature error? Is it linear, or non-linear?” Got a formula?

Frank
Reply to  Mike Borgelt
May 15, 2019 4:14 pm

Mike Borgelt complained about this statement: “the true error in any given measurement of the field instrument is unknown,” replying: “The precision and the accuracy of the temperature sensors is known. In fact you can obtain a calibration trail for each.”

Sure, but these are laboratory assessments. The reading of AIR temperature by a sensor can be changed by conditions in the field: direct sunlight, wind, shadows, height above the ground, nearby sources of heat or cooling, the nature of the ground: grass, dirt, blacktop. On a sunny, calm day, air adjacent to the ground can be tens of degC warmer than the air around your head. Colder air sinks into local hollows on still nights. Until we learned to put thermometers in adequately ventilated enclosures a specified height above the ground, with low vegetation, shielded from direct sunlight, and far away from buildings with a significant heat capacity, measurement of air temperature was an irreproducible process. Even changing from reading a min/max thermometer in the morning to the evening has a significant effect.

Reply to  Mike Borgelt
May 15, 2019 4:59 pm

Yes Frank, that is why climatologists use anomalies instead of absolute readings from any given site to measure changes in climate. Using anomalies eradicates most of the issues you bring up.

Frank
Reply to  Mike Borgelt
May 15, 2019 6:56 pm

Mike Borgelt wrote: Yes Frank, [the effect of station siting on temperature readings] is why climatologists use anomalies instead of absolute readings from any given site to measure changes in climate. By using anomalies, it eradicates most of the issues you bring up.

Nevertheless, when the data from nearby stations are compared, essentially all station records show mostly undocumented discontinuities/breakpoints that are absurdly unlikely to have occurred by chance. Many records show breakpoints averaging once every decade! These breakpoints are hypothesized to represent step-function changes in systematic error and are corrected. While correction sometimes warms and sometimes cools the present compared with the past by large amounts (0.5 degC is not unusual), the net result is to add 0.2 degC of warming to the overall 20th-century land record. So systematic errors are a non-trivial issue.
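
For readers unfamiliar with breakpoint correction, a toy version of the idea is below: scan a difference series (station minus a neighbor composite) for the single step that best splits it into two segments. NOAA’s actual pairwise homogenization is far more elaborate; this only illustrates the concept.

```python
# Toy breakpoint detection: find the split of a station-minus-neighbors
# difference series that minimizes the pooled within-segment variance.
import numpy as np

rng = np.random.default_rng(7)
n = 240                          # 20 years of monthly differences
diff = rng.normal(0.0, 0.1, n)
diff[150:] += 0.5                # simulated undocumented 0.5 C station change

def best_breakpoint(x, guard=12):
    """Return the index whose split minimizes total within-segment SSE."""
    best_i, best_cost = None, np.inf
    for i in range(guard, len(x) - guard):
        cost = ((x[:i] - x[:i].mean()) ** 2).sum() + ((x[i:] - x[i:].mean()) ** 2).sum()
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i

i = best_breakpoint(diff)
print(f"breakpoint at month {i}, estimated step {diff[i:].mean() - diff[:i].mean():+.2f} C")
```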

Frank
Reply to  Pat Frank
May 13, 2019 11:57 am

Pat: Your formula for standard statistical propagated error is derived based on the ASSUMPTION that e_a and e_b are random noise. When talking about systematic error, there is a constant relationship between the error terms! That is why the word SYSTEMATIC is used. You cannot use this formula to analyze systematic error.

For simple random error in a linear process, we say:

y_i = m*x_i + b + e_i

But for systematic errors, every measurement of y is off by the same amount e_s, the systematic error:

y_i = m*x_i + b + e_s + e_i

Mathematically, when you do a least-squares fit to this data, the systematic error only biases the y-intercept (b), not the slope/trend (m):

y_i = m*x_i + (b + e_s) + e_i

That systematic bias doesn’t need to be present in every measurement. Suppose we are talking about rising temperature (y) vs time (x). If 1/4 of days are calm, there is a CONSTANT UHI bias of e_s on calm days, AND the fraction of calm days doesn’t change with time, the end result will be:

y_i = m*x_i + (b + e_s/4) + e_i

The trend will be unchanged by e_s.

You can certainly construct an artificial dataset with these properties and prove this for yourself.
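
Taking up that suggestion, here is one such artificial dataset: a known trend plus noise, once with a constant offset added and once with a bias applied to a random quarter of the points. The OLS slope comes back essentially unchanged in both cases; only the intercept absorbs the bias.

```python
# Artificial-dataset check: a constant bias (or a bias on a fixed fraction
# of points) shifts the OLS intercept, not the slope.
import numpy as np

rng = np.random.default_rng(3)
x = np.arange(120) / 12.0                    # 10 years, monthly
y = 0.02 * x + 0.1 + rng.normal(0.0, 0.05, x.size)

y_const = y + 0.5                            # constant systematic error
calm = rng.random(x.size) < 0.25             # bias on ~1/4 of the points
y_calm = y + np.where(calm, 0.5, 0.0)

for label, data in (("clean", y), ("constant +0.5", y_const), ("+0.5 on 1/4", y_calm)):
    slope, intercept = np.polyfit(x, data, 1)
    print(f"{label:14s} slope={slope:+.4f}  intercept={intercept:+.3f}")
```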

Reply to  Frank
May 13, 2019 4:46 pm

Systematic means deterministic, Frank. It doesn’t mean constant.

When systematic error is due to uncontrolled variables, as is true for all the USHCN earth stations, and very likely so for the satellite temperatures, then the error is both not constant and of unknown magnitude.

The propagation formula is appropriate.

You’re making the standard folk-tale argument of climate modelers. They have no concept.

Frank aka A Different Frank
Reply to  Frank
May 14, 2019 3:03 pm

Pat: Trend assessment by OLS is based on the ASSUMPTION that the error term (e_i above) is randomly distributed and has a mean of zero. Right?

When you have a constant systematic error as mathematically formulated above, I’ve provided the correct math. Right?

Now, I believe you are saying that my systematic term s is not truly a constant, that perhaps it should be written as a noisy systematic error (s + s_i), or a partially time-dependent noisy systematic error (s + s(t) + s_i). The s_i terms add to the e_i terms and become the typical noise that we already know how to deal with. Right?

The constant systematic error term s affects the y-intercept, but not the trend. Right?

If so, I’ll enthusiastically agree with you that a time-dependent systematic error s(t) is a real problem. I don’t know whether the term “systematic error” implies a constant s or a time-dependent s(t). Math is a more precise language than English. And I’ll be glad to agree that we don’t know the relative sizes of s and s(t). I’ve repeatedly said that a constant bias doesn’t change the trend; it is only a changing bias that matters. Andy’s surface station project is innately flawed because a changing bias can’t be detected from the quality of today’s station site.

The guys doing breakpoint correction are assuming that s(t) is constant except for discrete discontinuities at certain points in time. However these discontinuities could be biases that grow with time and are suddenly corrected by station maintenance.

A different Frank
Reply to  Pat Frank
May 13, 2019 8:58 pm

Pat: Your formula for standard propagated error is only valid when e_a & e_b represent random noise. By definition, systematic error is not random!

Let’s consider extracting a trend from temperature (y) vs time (x) data of the form

y_i = m*x_i + b + e_i

We do a least-squares fit, and everything turns out right as long as the e_i terms are randomly distributed with a mean of zero. Now let’s add a systematic error which makes all readings too high by s, a constant systematic error independent of i and x:

y_i = m*x_i + b + s + e_i

After performing a least-squares fit, we get the same value for the slope/trend (m) and a systematically biased value for the y-intercept, equal to b + s. When we are dealing with temperature anomalies and only care about the trend, a constant systematic error doesn’t interfere with finding the correct trend. The error doesn’t even have to be constant. If on calm days UHI increases the temperature by s degrees and not at all on windy days, and if 1/4 of the days are calm throughout the entire period, linear regression will provide the correct slope and a y-intercept of b + s/4. Systematic errors of this type never interfere with calculating an accurate trend as long as they are constant over the entire interval. If you don’t believe me, try it with some pseudo data whose properties match those described above.

Reply to  A different Frank
May 14, 2019 3:38 pm

Thank you, Mr. “A different Frank.”

You have explained why I posted “Roy is correct in that anomalies erase your often touted ‘systematic error.’”

(https://wattsupwiththat.com/2019/05/11/does-nasas-latest-study-confirm-global-warming/#comment-2700859)
Reply to  A different Frank
May 14, 2019 4:15 pm

“Your formula for standard propagated error is only valid when e_a & e_b represent random noise.”

Not correct. Propagation is recommended for determining the uncertainty for any repetitive appearance of systematic error. Read through Section F.2.4.5 here.

In the case of the satellite temperatures, and of the USHCN stations, the true error in any given measurement of the field instrument is unknown.

All one can do is estimate the range of systematic error by way of calibrations under conditions that duplicate the field. The calibration uncertainty is then applied to measurements obtained from the field instrument.

That uncertainty is propagated in the usual way, when measurements are combined, averaged, differenced, etc. However, as the errors are non-random, the uncertainty never, ever averages away.

You can find a very useful set of definitions and uses of error from a physics perspective, here.

Notice that the final entry, the “law of propagation of uncertainty” does not limit the propagation to random error, but rather indicates application to uncertainty as a general case.

Reply to  A different Frank
May 14, 2019 6:22 pm

” the true error in any given measurement of the field instrument is unknown.”

Pat Frank shows he is ignorant. The precision and the accuracy of the temperature sensors are known. In fact, you can obtain a calibration trail for each.

Another Frank
Reply to  Pat Frank
May 15, 2019 3:45 pm

Pat: Thank you for the links to definitive sources of information about statistical uncertainty. However, as best I can tell, all of these sources are referring to the uncertainty in one quantity. We are dealing with a DIFFERENT PROBLEM, calculating a trend from many measurements. So these links are irrelevant to the problem of calculating temperature trends.

To the best of my knowledge, each day’s high and low is the result of a single measurement. All of the measurements for each month are averaged. We enter the average for each month into a linear regression without taking into account the standard error of the monthly mean. We can do this because each monthly mean analyzed by a linear regression is assumed to have an error (e_i) that can arise from BOTH random error (standard error in the monthly mean) and/or random deviation from a linear relationship between x and y (or time and temperature, if you prefer). In a linear regression, we assume there is no systematic error because the standard assumption is that the mean of e_i is zero and that e_i is randomly distributed about zero.

If there is a systematic error in each monthly, it can be dealt with as I described above, by including a constant systematic error (s), or a noisy systematic error (s+s_i) or a time-dependent noisy systematic error (s + s(t) + s_i).

If there is a constant systematic error, the note in your first link advocates subtracting it from each y (temperature). This is what is done when temperature records are homogenized by hypothesizing that undocumented breakpoints are caused by a step-function change in systematic error. Without documentation, there is no way to test this hypothesis. An alternative hypothesis is that breakpoints are caused by systematic error that increases with time and is abruptly corrected by maintenance.

Mike Haseler (Scottish Sceptic)
May 13, 2019 12:47 am

NASA climate was basically set up, and the people selected, by the five-times-arrested eco-nutter Hansen. So there’s no doubt that everyone there is a dyed-in-the-wool eco-nutter who is spending all their time trying to “prove” something that is unprovable, because it’s not happening.

Jim Whelan
May 13, 2019 8:20 am

“with the author of the article adding, ‘New evidence suggests one of the most important climate change data sets is getting the right answer.’ ”

And there’s the problem. Good data IS the right answer. When you see the purpose of data gathering as being to “get the right answer,” then you are turning the scientific method on its head.

P. Berberich
May 13, 2019 1:58 pm

My analysis of “CERES_SSF1deg-Month_Terra-MODIS_Ed4A_Skin temperature 200003-201809.nc”: 12-month mean: 0.12 ± 0.06 °C/decade.
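
A rough sketch of how such a trend might be computed from the CERES monthly file named above, assuming xarray can open it; the variable name "skint" is a guess and should be checked against the actual file contents.

```python
# Sketch only: area-weighted global-mean skin temperature trend from a
# CERES SSF1deg monthly file. "skint" is a guessed variable name; run
# print(ds) on the opened dataset to find the real one.
import numpy as np
import xarray as xr

ds = xr.open_dataset("CERES_SSF1deg-Month_Terra-MODIS_Ed4A_Skin temperature 200003-201809.nc")
ts = ds["skint"]  # hypothetical variable name

# Cosine-of-latitude area weighting, then a 12-month running mean.
w = np.cos(np.deg2rad(ds["lat"]))
gmean = ts.weighted(w).mean(dim=("lat", "lon")).rolling(time=12, center=True).mean()

# OLS trend in degrees per decade over the non-NaN samples.
yrs = (gmean["time"].dt.year + (gmean["time"].dt.month - 0.5) / 12.0).values
vals = gmean.values
ok = ~np.isnan(vals)
print(f"trend: {np.polyfit(yrs[ok], vals[ok], 1)[0] * 10.0:+.2f} degC/decade")
```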

Chris Hoff
May 14, 2019 10:08 am

Does this mean I can use any recent El Niño peak as a starting point, run my graph until just before the next one starts, and claim it proves the world is cooling?

DDP
May 15, 2019 6:10 am

-A Washington Post headline read, “Satellite confirms key NASA temperature data: The planet is warming — and fast,” with the author of the article adding, “New evidence suggests one of the most important climate change data sets is getting the right answer.”-

Pretty much says everything right there: tons of confirmation bias regarding the adjusted data giving the “right answer,” plus denial that the industrial era sits at the tail of a rapid cooling and that the temperature rise is largely a rebound.

May 18, 2019 5:16 am

Much the most interesting part of the new NASA report seems to be Figure 2, showing zonal differences in warming, 2003-2017. This shows a sharp fall in temperature in the most southerly latitudes and a sharp rise in the most northerly, 80 to 90 degrees north. The rest of the world, presumably 80 to 90 per cent of it, shows hardly any change. Surely this means it is wrong to talk about GLOBAL warming; it is only Arctic warming.

Is there a publication which shows the WEIGHTS of the different latitudes and continents in the published global figures?
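
On the weights question: no publication is needed for the latitude part, because the area weights follow from spherical geometry alone. The fraction of the Earth's surface between latitudes phi1 and phi2 is (sin phi2 - sin phi1)/2, which a few lines make concrete:

```python
# Area fraction of the globe in each 10-degree latitude band:
# (sin(lat2) - sin(lat1)) / 2, from the geometry of a sphere.
import math

for lo in range(-90, 90, 10):
    hi = lo + 10
    frac = (math.sin(math.radians(hi)) - math.sin(math.radians(lo))) / 2.0
    print(f"{lo:+3d} to {hi:+3d} deg: {100.0 * frac:5.2f}% of global area")

# The 80-90N band showing the sharp warming covers only ~0.8% of the globe,
# which is why global averages weight it so lightly.
```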