UAH finds a warming error in satellite data, lowers "tropical hotspot" temperature trend, contradicts IPCC models

From the University of Alabama, Huntsville via email from Dr. John Christy.

Weather Satellite Wanders Through Time, Space, Causing Stray Warming to Contaminate Data

In the late 1990s, the NOAA-14 weather satellite went wandering through time and space, apparently changing the record of Earth’s climate as it went.

Designed for an orbit synchronized with the sun, NOAA-14’s orbit from pole to pole was supposed to cross the equator at 1:30 p.m. on the sunlit side of the globe and at 1:30 a.m. on the dark side, 14 times each day. One of the instruments it carried was a microwave sounding unit (MSU), which looked down at the world and collected data on temperatures in Earth’s atmosphere and how those temperatures changed through time.

By the time NOAA-14 was finishing its useful life in 2005, however, it had strayed eastward from its intended orbit until it was crossing the equator not at 1:30 but at about 8:00. That pushed its early afternoon passage until after dark and its middle of the night measurements until well after dawn.

Diagram showing the wandering orbit of NOAA-14. Credit: University of Alabama Huntsville

Because local temperatures typically change between 1:30 and 8:00, this introduced spurious temperature changes that had to be calculated and removed from long-term temperature datasets that use data from satellite instruments.
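The aliasing mechanism described here, sampling the diurnal cycle at a slowly drifting local time, can be sketched in a few lines. This is a toy illustration with invented numbers, not the UAH correction procedure; in this simplified setup the aliased trend happens to come out negative, but the sign depends on the drift direction and the shape of the local diurnal cycle.

```python
import numpy as np

# Toy illustration (not the UAH algorithm): a drifting equator-crossing
# time samples the diurnal cycle at later and later local hours, so a
# trend fit sees spurious change even when the climate is held constant.

def diurnal_temp(hour):
    """Idealized diurnal cycle: warmest mid-afternoon (deg C anomaly)."""
    return 5.0 * np.sin(2 * np.pi * (hour - 9.0) / 24.0)

years = np.arange(0, 8, 1 / 12.0)              # 8 years of monthly samples
crossing = 13.5 + (20.0 - 13.5) * years / 8.0  # drift from 1:30 pm to ~8 pm
observed = diurnal_temp(crossing)              # true climate: no change at all

spurious = np.polyfit(years, observed, 1)[0]   # least-squares slope, C/year
print(f"spurious trend from drift alone: {spurious:.2f} C/year")
```

In this toy the aliasing alone produces a spurious cooling; the real NOAA-14 case also involved solar heating of the instrument itself, described next, which pushed the error toward warming.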

The drift also changed the satellite’s orientation relative to the sun, so that instead of the instruments being shielded from sunlight in a consistent way, the sun’s rays peeked into unshielded and open crevices and other places where intense sunlight could influence the sensors. That warmed the MSU, which caused it to report the atmosphere’s mid-troposphere (from the surface up to about 40,000 feet) as very slightly warmer than it actually was relative to its initial 1:30 crossing time.

Using data from weather satellites that stayed closer to home than NOAA-14, scientists in the Earth System Science Center (ESSC) at The University of Alabama in Huntsville (UAH) have calculated how much false warming NOAA-14 reported, so that it could be removed from the long-term global atmospheric temperature record collected by MSUs on satellites since mid-November 1978.

Details of that research have been published in the International Journal of Remote Sensing, and are available online at: https://www.tandfonline.com/doi/full/10.1080/01431161.2018.1444293

The wandering satellite became an issue when Dr. John Christy, director of UAH’s ESSC and lead author of the study, and Dr. Roy Spencer, an ESSC principal research scientist, were updating and revising UAH’s satellite-based global temperature dataset. (Version 6.0 was completed in 2016.) While they knew NOAA-14 had strayed from its path, a closer look showed the warming reported by the MSU on NOAA-14 was out of kilter with temperature data collected by instruments on other NOAA satellites. This seemed to be especially true in the tropical mid-troposphere.

NOAA-14 was “drifting more than any other spacecraft used in this dataset,” said Christy.

“We were looking at 39 years of a temperature trend, and this stray satellite affected the trend by about 0.05 degrees Celsius (about 0.09° F) per decade,” Christy said. “Over 39 years, that would be a total warming of about 0.2 C, or more than one-third of a degree Fahrenheit. And this problem occurred, almost all of it, in the 1990s and the early 2000s.

“An important piece of evidence pointing to a problem with the NOAA-14 satellite was its warming relative to the new NOAA-15 satellite that came in at the end of the 1990s,” Christy said.
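The quoted numbers can be checked directly: a 0.05 C-per-decade artifact sustained over 39 years compounds to roughly 0.2 C, or about a third of a degree Fahrenheit.

```python
# Checking the arithmetic in the quote (illustration only).
artifact_per_decade_c = 0.05
record_years = 39

total_c = artifact_per_decade_c * record_years / 10.0
total_f = total_c * 9.0 / 5.0

print(round(total_c, 3), round(total_f, 2))  # 0.195 0.35
```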

To measure the scale of the problem after the UAH satellite dataset was finalized, the ESSC team started with a subset of U.S. weather balloons that hadn’t changed either instruments or software during at least a major part of the period NOAA-14 was in orbit. Weather balloons are a useful tool for benchmarking against the satellite data because both collect temperatures from the surface up through deep layers of the atmosphere.

When the U.S. balloons showed less warming than the MSU on NOAA-14, Christy expanded the study to include a group of Australian weather balloons that also hadn’t changed instruments during a major part of the time NOAA-14 was in orbit.

“This gave us two reputable datasets that are widely separated across the Earth’s surface,” Christy said. “Then we also looked at homogenized data from independent groups that correct balloon datasets.”

That data from NOAA, the University of Vienna and the University of New South Wales was added to data from three other groups — the European Centre for Medium-Range Weather Forecasts, the Japan Climate Research Group and NASA — that “reanalyze” global weather data, correcting for flaws and problems.

UAH even created its own homogenized balloon dataset from raw balloon data archived by NOAA, which came from 564 stations around the world.

“We tried to understand the situation by inter-comparing against as many individual, independent datasets as possible,” Christy said. “We know no dataset is perfect, so comparing against various sources is a key part of dataset analysis. This allows us to zero in on places with the greatest discrepancy. We found the largest difference between the UAH dataset and other satellite datasets — and even some balloon datasets — in the 1990s and early 2000s, the period of NOAA-14.”

And NOAA-14 was showing more warming than any of the balloon datasets, as well as more warming than NOAA-12 or NOAA-15, each of which overlapped NOAA-14 at some point during its time in service. They also compared temperature data from NOAA-15 to data from NASA’s orbit-stabilized AQUA satellite.

When the UAH team, led by Spencer, Christy and W. Daniel Braswell, built the Version 6.0 UAH dataset in 2016, they serendipitously did two things that limited the drift’s influence.

“First, we stopped using NOAA-14 in 2001, when it had drifted to 5 p.m.,” Christy said. “In our view, that was just too much drift to have confidence an accurate correction could be found.”

They also applied an algorithm that minimized the differences between the satellites, largely removing the NOAA-14 drift relative to the other satellites.

This resulted in a long-term mid-troposphere warming trend in the tropics of about +0.082 C (about 0.15° F) per decade from late 1978 to 2016. This compares well with the +0.10 C (±0.03 C) per decade trend found by other sources that weren’t exclusively satellite based.
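The press release does not reproduce UAH’s merge code, but the basic idea — estimate an inter-satellite offset from an overlap period, remove it, then fit one trend to the merged series — can be sketched as follows. The data, offsets, and noise levels here are invented for illustration; this is not the actual Version 6.0 algorithm.

```python
import numpy as np

# Hypothetical sketch, not UAH's actual Version 6.0 merge code: estimate
# the offset between two overlapping satellite records, remove it, then
# fit one trend to the merged series.

rng = np.random.default_rng(0)
t = np.arange(0, 20, 1 / 12.0)     # 20 years of monthly samples
truth = 0.008 * t                  # a "true" trend of 0.08 C/decade

mask_a, mask_b = t < 12, t >= 8    # satellite A: years 0-12, B: years 8-20
ta, tb = t[mask_a], t[mask_b]
sat_a = truth[mask_a] + 0.10 + rng.normal(0, 0.02, mask_a.sum())
sat_b = truth[mask_b] - 0.05 + rng.normal(0, 0.02, mask_b.sum())

# Use the years 8-12 overlap to estimate B's offset relative to A.
offset = sat_a[ta >= 8].mean() - sat_b[tb < 12].mean()
sat_b = sat_b + offset

merged_t = np.concatenate([ta, tb[tb >= 12]])
merged = np.concatenate([sat_a, sat_b[tb >= 12]])
trend = 10.0 * np.polyfit(merged_t, merged, 1)[0]
print(f"merged trend: {trend:.3f} C/decade")   # recovers roughly 0.08
```

If the offset step were skipped, the 0.15 C jump between the two records would itself masquerade as part of the trend — which is the essence of the inter-satellite bias problem the article describes.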

The way UAH built its dataset and accounted for these issues is unique among the four major satellite temperature datasets. The other three datasets still include all of the NOAA-14 data and show warming trends greater than the trend shown in the UAH dataset.

Other satellite-only temperature datasets report tropical mid-troposphere warming trends ranging from +0.13 to +0.17 C per decade.

“Not realizing it at the time, the methods we used to build the newest dataset appear to have dealt with a discrepancy that came to light through this inter-comparison study,” Christy said.

Note: This is not the lower tropospheric data or long-term warming trend reported each month for more than 27 years in UAH’s Global Temperature Report. The lower troposphere extends from the Earth’s surface to an altitude of about eight kilometers (more than 26,000 feet).

Because most low Earth orbiting satellites tend to stray somewhat from their intended orbits, new NOAA polar-orbiting weather satellites scheduled for launch in coming years will carry the extra fuel needed to keep them closer to home throughout their time in space.

###


The paper is fully open-access and available here: https://www.tandfonline.com/doi/full/10.1080/01431161.2018.1444293

Examination of space-based bulk atmospheric temperatures used in climate research

The Intergovernmental Panel on Climate Change Assessment Report 5 (IPCC AR5, 2013) discussed bulk atmospheric temperatures as indicators of climate variability and change. We examine four satellite datasets producing bulk tropospheric temperatures, based on microwave sounding units (MSUs), all updated since IPCC AR5. All datasets produce high correlations of anomalies versus independent observations from radiosondes (balloons), but differ somewhat in the metric of most interest, the linear trend beginning in 1979. The trend is an indicator of the response of the climate system to rising greenhouse gas concentrations and other forcings, and so is critical to understanding the climate. The satellite results indicate a range of near-global (+0.07 to +0.13°C decade⁻¹) and tropical (+0.08 to +0.17°C decade⁻¹) trends (1979–2016), and suggestions are presented to account for these differences. We show evidence that MSUs on National Oceanic and Atmospheric Administration’s satellites (NOAA-12 and -14, 1990–2001+) contain spurious warming, especially noticeable in three of the four satellite datasets.

Comparisons with radiosonde datasets independently adjusted for inhomogeneities and reanalyses suggest the actual tropical (20°S–20°N) trend is +0.10 ± 0.03°C decade⁻¹. This tropical result is over a factor of two less than the trend projected from the average of the IPCC climate model simulations for this same period (+0.27°C decade⁻¹).


From the paper, here is the main conclusion:

One key result here is that substantial evidence exists to show that the processed data from NOAA-12 and −14 (operating in the 1990s) were affected by spurious warming that impacted the four datasets, with UAH the least affected due to its unique merging process.

RSS, NOAA and UW show considerably more warming in this period than UAH and more than the US VIZ and Australian radiosondes for the period in which the radiosonde instrumentation did not change. Additionally the same discrepancy was found relative to the composite of all of the radiosondes in the IGRA database, both global and low-latitude. While not definitive, the evidence does support the hypothesis that the processed satellite data of NOAA-12 and −14 are characterized by spurious warming, thus introducing spuriously positive trends in the satellite records. Comparisons with other, independently constructed datasets (radiosonde and reanalyses) support this hypothesis (Figure 10).

Figure 10. Magnitude of the relative difference between two periods for the respective satellite datasets (colored bars) and the respective radiosonde-based datasets (a positive value indicates the satellite warmed more than the radiosonde-based data between the defined periods).

Given this result, we estimate the global TMT trend is +0.10 ± 0.03°C decade⁻¹. The rate of observed warming since 1979 for the tropical atmospheric TMT layer, which we calculate also as +0.10 ± 0.03°C decade⁻¹, is significantly less than the average of that generated by the IPCC AR5 climate model simulations. Because the model trends are on average highly significantly more positive and with a pattern in which their warmest feature appears in the latent-heat release region of the atmosphere, we would hypothesize that a misrepresentation of the basic model physics of the tropical hydrologic cycle (i.e. water vapour, precipitation physics and cloud feedbacks) is a likely candidate.

Note that this addresses the “tropical hotspot” of the models, and is a much lower trend than the models show in Figure 1 from the paper (shown below), which has values of 0.4 to 0.6 °C per decade compared to Christy’s result of 0.10 °C per decade.

Figure 1. Latitude – Altitude cross-section of 38-year temperature trends (°C decade−1) from the Canadian Climate Model Run 3. The tropical tropospheric section is in the outlined box.

NOTE: the first version of this post had an error in the title, and was missing the figure 1 graphic above. This happened due to a versioning error on my part. It has been corrected as well as the original Tweet removed and resent.

MarkW
April 6, 2018 11:34 am

No doubt some troll will drop by soon to proclaim that since we accept these adjustments, we must accept all adjustments.
The answer to that is no, we only have to accept “adjustments” when the reason and the math behind the adjustments are fully produced and we agree with both.
When the alarmist team produces their methods so that they can be critiqued, then it will be reasonable to demand that we accept their adjustments as well.

Wayne Townsend
Reply to  MarkW
April 6, 2018 11:38 am

+1

Tom Halla
Reply to  MarkW
April 6, 2018 11:43 am

Agreed. One should only do adjustments when one has a clear, reported, reason for doing so.

oeman50
Reply to  Tom Halla
April 7, 2018 9:47 am

And we must retain and have access to the raw data so it is transparent what adjustments have been made.

thomasjk
Reply to  Tom Halla
April 7, 2018 12:12 pm

And that does not include when adjustments are needed to be able to get the “correct” temps that are in agreement with the model calculated temperatures.

Joel Snider
Reply to  MarkW
April 6, 2018 12:20 pm

Considering this was all settled science three decades ago, the simple fact of all these constant adjustments ought to be a red flag.
Seems to me, it’s a constant struggle by warmists to prove any degree of warming at all – I guess that’s why we’re measuring in hundredths of degrees these days – let alone to assign these catastrophic effects to it
Talk about a ‘slow motion tornado’.

Reply to  Joel Snider
April 6, 2018 1:24 pm

When I was an engineer we used NBS-traceable instruments with an accuracy of 1/10 of a percent. That meant, back then, that it was accurate to 1/10 of a percent of the FULL range of the instrument. If the instrument had a 100 degree, or 100 pound, range then the accuracy was just 1/10 of 1% of 100 pounds, or +/- 0.1 pounds, degrees, etc. Many of these instruments were connected to electronics that had readouts to 3 decimal points. All but the first decimal point were WORTHLESS. Why do these so-called scientists continue to use an instrument orders of magnitude beyond its accurate range?
If you are measuring atmospheric temperature you will be using an instrument that has a range of at least 150 degrees (-25 to +125), or more likely 200 degrees (-50 to +150), and probably even more than that. For an instrument to have an accuracy where the third decimal point is meaningful would require an accuracy of 1/1000 of a percent, and even then the reading is only good to +/- 0.002 degrees. To assume that third decimal is accurate would require an accuracy of 1/10,000 of 1%, placing the +/- error into the fourth, un-shown, digit. Are they now making instruments that accurate? Do all climate scientists have a problem with math?
Further, I read that the ARGO Buoys are accurate to ± 0.002°C. How is that physically possible? Someone is feeding the government and Climate Scientists a pile of cow manure. To achieve that accuracy would require chemical purity in the sensors and electronic components of better than 1 part in 10,000. Where do you purchase electronic components accurate to better than 1/100 of a percent, let alone better than 1/10,000? Even the contamination from being exposed to the atmosphere after manufacture would affect the accuracy. The entire electronic measurement system would have to be designed so as to prevent any contamination of the electronics by contaminants that would affect the bridge network measuring the resistance change. Then these devices are placed in an ocean/moisture/salt environment? Absurd. I have experience calibrating devices to within +/- 0.1 percent. To do this required wearing fresh, new surgical cotton gloves. Once, after calibrating an instrument, I touched the electronics near the bridge circuit. As a result the device went out of spec. It took four cleanings with medicinal-grade alcohol and triple-distilled water to get the device to properly calibrate again.
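The percent-of-full-scale arithmetic in the comment above works out as follows (illustrative numbers only, not any particular instrument’s specification sheet).

```python
# Percent-of-full-scale accuracy, as described in the comment above
# (illustrative numbers, not any specific instrument's spec sheet).

def fullscale_uncertainty(range_low, range_high, accuracy_pct):
    """Absolute uncertainty when accuracy is quoted as a % of full scale."""
    return (range_high - range_low) * accuracy_pct / 100.0

# A 0.1%-of-full-scale sensor spanning -50 to +150 C:
print(fullscale_uncertainty(-50.0, 150.0, 0.1))    # 0.2 -> +/- 0.2 C
# The same span at 0.001% of full scale:
print(fullscale_uncertainty(-50.0, 150.0, 0.001))  # 0.002 -> +/- 0.002 C
```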

Latitude
Reply to  Joel Snider
April 6, 2018 1:30 pm

…no matter what the argument about today’s temps is……it’s a lie
sometime in the future today’s temp will be adjusted
We will never know what today’s temp is…today

David L. Hagen
Reply to  Joel Snider
April 6, 2018 3:48 pm

usurbrain
Thanks for pragmatic realism on accuracy. To quantify the formal limits, scientists need to apply BIPM’s international standard “GUM: Guide to the Expression of Uncertainty in Measurement”.
https://www.bipm.org/en/publications/guides/gum.html
Though BIPM established this uncertainty standard, IPCC documents make no reference to it, and make little mention of, let alone seriously address, the major uncertainties involved. Note particularly Type B uncertainties (including systematic uncertainties); these can be as large as, if not larger than, Type A statistical uncertainties.
For the NIST background see Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results
https://www.nist.gov/sites/default/files/documents/2017/05/09/tn1297s.pdf

Alan Tomalty
Reply to  Joel Snider
April 6, 2018 9:44 pm

The climate scientists don’t even know the difference between accuracy and precision. How do you expect them to know how many decimal places to report? Ask Dr. Frank what he thinks of the ability of climate scientists to measure. Dr. Frank was in a blog discussion with a climate scientist who was trying to denigrate Dr. Frank’s exposé of the accuracy of climate models. The climate scientist couldn’t even demonstrate a fundamental understanding of statistical error. And the world’s politicians are carbon-taxing us based on those guys?

Nick Stokes
Reply to  MarkW
April 6, 2018 12:32 pm

“The answer to that is no, we only have to accept “adjustments” when the reason and the math behind the adjustments are fully produced and we agree with both.”
It really doesn’t make sense to even talk of adjustments to satellite data. They are the product of a long series of transformations and estimates. This is an adjustment to an adjustment.
After the massive changes going from V5.6 to V6.0 (and with RSS), there might be reasonable doubts about the robustness of satellite estimates.

Reply to  Nick Stokes
April 6, 2018 1:55 pm

Nick writes

After the massive changes going from V5.6 to V6.0 (and with RSS), there might be reasonable doubts about the robustness of satellite estimates.

Agreed Nick. But I’d still trust the measurements more than any derived surface temperatures at the poles, where we have few to no actual readings.

Editor
Reply to  Nick Stokes
April 6, 2018 1:56 pm

Not to mention the adjustments for GISS and HADCRUT!!
I wonder why you forgot to mention that?
The reality is that we have not got a clue what the global temperature really is (not that there is such a thing anyway).
Hellfire, Nick. Even the US temperature record, which is backed up by thousands of weather stations, has been corrupted out of all recognition.
Perhaps you will now admit that nobody knows what is happening with global temperatures.

jim
Reply to  Nick Stokes
April 6, 2018 2:02 pm

Crikey Nick, I find myself agreeing with you again.
I must lie down somewhere dark……

jim
Reply to  Nick Stokes
April 6, 2018 2:07 pm

Paul, that is the point. It’s no good saying satellites don’t ‘do’ global temperatures (clearly they don’t; they are only good for measuring air movements), without also saying that all the ground-based measurements are also useless for trying to measure a frankly stupid notion of a ‘global temperature’. It’s all a load of B.S. And the so-called precision is double B.S.

Nick Stokes
Reply to  Nick Stokes
April 6, 2018 2:11 pm

Paul H,
“Not to mention the adjustments for GISS and HADCRUT!!”
From here is a comparison plot of recent adjustments to UAH and RSS compared with GISS. It does not include this latest UAH. The curves are difference plots (after – before). The blue is UAH v6 – UAH V5.6. Orange is RSS V4 – V3.3. These are changes in the last three years. The green curves are the difference between current GISS and, respectively, archived (wayback machine) versions from 2005 and 2011. The adjustments to the satellite data are huge in comparison.
TTTM,
“derived surface temperatures at the poles where we have few to no actual readings”
Satellites have no readings there at all.

Rick C PE
Reply to  Nick Stokes
April 6, 2018 2:13 pm

It really doesn’t make sense to even talk of adjustments to satellite data. They are the product of a long series of transformations and estimates. This is an adjustment to an adjustment.

OK. This would also apply to satellite measurement of sea level as well, would it not?
Or does it not matter since the results reported are averages of very large numbers of readings and the error/uncertainty is wiped away by the law of large numbers?
I suspect that the actual uncertainty in most of the climate data we see is at least the same magnitude as the anomalies reported.

John harmsworth
Reply to  Nick Stokes
April 6, 2018 2:16 pm

Isn’t somebody checking the bathwater with their fingers within 1200 kms good enough for most climate scientists anyway?

Reply to  Nick Stokes
April 6, 2018 2:42 pm

Nick Stokes
April 6, 2018 at 12:32 pm
Nick, given that there are ongoing adjustments to the adjustments do you in general give more credence to balloon records than satellites for particular areas of the globe?
I like the fact that Christy & co. were comparing balloon data sets using balloons with the same instruments over time… one less chance for “corrections” to creep in.

MarkW
Reply to  Nick Stokes
April 6, 2018 2:49 pm

Compared to the adjustments the surface network has been put through, satellites are downright pristine.
Regardless, the reasons and 100% of the math has been made available for others to critique.
For most of the surface network, the math behind those adjustments is still protected like a state secret.

MarkW
Reply to  Nick Stokes
April 6, 2018 2:51 pm

Rick, the modern standard is thus.
If the measurement advances the government’s agenda, the measurement shall be deemed accurate.

Nick Stokes
Reply to  Nick Stokes
April 6, 2018 3:01 pm

Alastair
“I like the fact that Christy & co. were comparing balloon data sets using balloons”
That is a highly qualified comparison. Balloons are very inhomogeneous – flight paths are never the same, instruments change. They have a whole lot of processing to decide what to include. Here is one key phrase:
“IGRA stations were accepted if at least 240 monthly observations were available (of the 450 possible in the Jan 1979 to Jun 2016 period.) Then, a final quality check was performed. If the monthly correlation of anomalies between the unadjusted station data and the satellite data at the corresponding gridpoint exceeded 0.70, the station was accepted for the comparison studies to follow.”
IOW, they allowed only the ones that agreed.
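The acceptance rule quoted above can be sketched like this. The data layout and the helper name `accept_station` are invented for illustration; only the 240-month and 0.70 figures come from the quoted passage.

```python
import numpy as np

# Sketch of the acceptance rule quoted above (hypothetical data layout
# and helper name; not the paper's code): a station is kept if it has at
# least 240 monthly values and its anomalies correlate > 0.70 with the
# satellite series at the corresponding gridpoint.

def accept_station(station, satellite, min_months=240, min_corr=0.70):
    valid = ~np.isnan(station) & ~np.isnan(satellite)
    if valid.sum() < min_months:
        return False
    r = np.corrcoef(station[valid], satellite[valid])[0, 1]
    return bool(r > min_corr)

rng = np.random.default_rng(1)
sat = rng.normal(0.0, 1.0, 450)             # 450 possible months
tracking = sat + rng.normal(0.0, 0.5, 450)  # noisy but correlated station
unrelated = rng.normal(0.0, 1.0, 450)       # station with no relation
print(accept_station(tracking, sat), accept_station(unrelated, sat))  # True False
```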

Reply to  Nick Stokes
April 6, 2018 3:04 pm

what does that say about the surface data sets, old St Nick 😀

Michael Jankowski
Reply to  Nick Stokes
April 6, 2018 3:36 pm

“…IOW, they allowed only the ones that agreed…”
That’s most of climate science. That’s how the dendros do it. That’s how Mann does it.
But now you have a problem with it? At least they don’t use the balloons to “homogenize” the data.

Nick Stokes
Reply to  Nick Stokes
April 6, 2018 3:59 pm

“At least they don’t use the balloons to “homogenize” the data.”
In fact, they use the satellite data to homogenize the balloons:
“When a significant shift in the difference-time-series is detected (by the simple statistical test of the difference of two segments of 24-months in length on either side of the potential shift) we then adjust the radiosonde to match the satellite at that shift point.”
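The quoted breakpoint procedure — comparing the mean of the sonde-minus-satellite difference over the 24 months on either side of a candidate shift, then adjusting the radiosonde to the satellite from that point — can be sketched as follows. The function name, the fixed threshold, and the data are invented; the paper applies a statistical test rather than this crude cutoff.

```python
import numpy as np

# Simplified sketch of the quoted procedure (names, threshold, and data
# are invented; the paper uses a statistical test, not a fixed cutoff):
# compare 24-month means of the sonde-minus-satellite difference on either
# side of a candidate break, and if they differ by too much, shift the
# radiosonde record from the break onward to match the satellite.

def adjust_at_break(sonde, sat, k, window=24, threshold=0.3):
    diff = sonde - sat
    step = diff[k:k + window].mean() - diff[k - window:k].mean()
    if abs(step) > threshold:      # crude stand-in for the significance test
        sonde = sonde.copy()
        sonde[k:] -= step          # remove the detected step from k onward
    return sonde

sat = np.zeros(120)                # flat satellite reference, 10 years
sonde = np.zeros(120)
sonde[60:] += 0.5                  # instrument change introduces a 0.5 C step

fixed = adjust_at_break(sonde, sat, k=60)
print(fixed[59], fixed[100])       # 0.0 0.0 -> the step is gone
```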

Reply to  Nick Stokes
April 6, 2018 4:00 pm

“… the product of a long series of transformations and estimates. …”, which closely follows the same trend which balloon radiosonde readings show. It seems that the scientists do a very good job with their transformations and estimates.

Nick Stokes
Reply to  Nick Stokes
April 6, 2018 4:13 pm

” which closely follows the same trend which balloon radiosonde readings show”
After various selections and manipulations to the balloon set, as shown. But in fact it doesn’t say that. The abstract says:
” We examine four satellite datasets producing bulk tropospheric temperatures, based on microwave sounding units (MSUs), all updated since IPCC AR5. All datasets produce high correlations of anomalies versus independent observations from radiosondes (balloons), but differ somewhat in the metric of most interest, the linear trend beginning in 1979.”
They all produce high correlations. But you may recall that the datasets produce very different trends; UAH (V6, not V5.6) is the outlier. RSS V4 and UAH V5.6 were much closer to surface trends.

Latitude
Reply to  Nick Stokes
April 6, 2018 4:14 pm

“The adjustments to the satellite data are huge in comparison.”…
They don’t need to adjust as much for UHI… it’s already built in with the algorithms that are run.
…the end result is it’s adjusted a lot more.

Komrade Kuma
Reply to  Nick Stokes
April 6, 2018 4:33 pm

Gee Nick, and there would not be reasonable doubts about the land-based temperature set too? The current sea surface set via the Argo buoys is probably up to spec, but pre-Argo, frankly, it was a bit of a joke from a global temperature trend point of view, IMO. Bucket thrown over the side by a stoker or a deck hand and the Chief Engineer recording a number he liked that meant he could push his engine harder? Historical reality biased understatement = apparent modern uptrend.
I can accept that we humans have been producing larger and larger amounts of CO2 but you know what, the relative increase in atmospheric CO2 is almost invisible compared to the relative increase in bitumen, concrete and steel and the heat island effect they produce in the proximity of all those land based instruments. HI effect is an order of magnitude or so greater than the asserted ‘global warming’ so it is hardly a matter to ignore or justify dodgy, self serving ‘adjustments’ (i.e. intellectual coal in the PR boilers to generate publicity puff and steam in the pursuit of funding).

MarkW
Reply to  Nick Stokes
April 6, 2018 7:25 pm

They don’t need to adjust the ground based measurements that much, since most of it (infilling) is made up from scratch, to be whatever is needed.

Tsk Tsk
Reply to  Nick Stokes
April 6, 2018 8:25 pm

The adjustments to the satellite data are huge in comparison.

[chart image]
Sorry, what was that you were claiming about lower adjustments in GISS?

Nick Stokes
Reply to  Nick Stokes
April 6, 2018 9:36 pm

“Sorry, what was that you were claiming”
No. What you show is US, not global. And it is a different period, and different versions of GISS.

Reply to  Nick Stokes
April 7, 2018 4:05 pm

Nick writes

Satellites have no readings there at all.

You’re being pedantic Nick. Satellites cover the poles even if they don’t cover the actual north and south poles themselves. The point being the satellites cover the region. The surface readings don’t but that’s where much of the warming is suggested to be happening.

Nick Stokes
Reply to  Nick Stokes
April 8, 2018 3:31 am

“The point being the satellites cover the region.”
Not really. UAH claims -85 to 85°, but RSS, using the same data, claims -70 to 82.5°. I think RSS’ caution is justified. RSS also has an altitude limit which cuts out Tibet and high Andes.

Reply to  Nick Stokes
April 8, 2018 5:21 am

Nick writes

Not really.

Compared to surface measurement…yes, really.

Reply to  Nick Stokes
April 10, 2018 8:56 am

The same inaccuracies are implicated with so called rising oceans measurements. Look here and see why.

Reply to  MarkW
April 6, 2018 4:30 pm

Christy doesn’t post his code or raw data.
NOAA does both.
Berkeley [Earth] does both.
You never actually looked for the adjustment code or explanations.
Finally: science does not care if you accept it or not. Your opinion… worthless.

Alan Tomalty
Reply to  Steven Mosher
April 6, 2018 9:55 pm

The only data I really trust are data like the temperature records that have been recorded for the last 83 years in Augusta, Georgia. They are anal at the Masters golf tournament and always have been. Nothing but the best. No tampering, no adjustments, just the records. And what do you know: their 83 years of temp records at the Masters golf tournament show no warming. There must be hundreds of local temperature records like that all over the world. Let’s find them and examine the data. I bet you none will show any significant warming.

Bob bider
Reply to  Steven Mosher
April 7, 2018 7:13 am

Steven
You are all opinion, same outcome.

Jared
Reply to  Steven Mosher
April 7, 2018 8:53 am

Berkeley and NOAA data are off by several degrees. I live in the middle of nowhere next to a tiny little town. Just the other day at 6:00 AM it was 45 degrees in town and 40 degrees out of town (there is no elevation change, and just 2 miles separates the middle-of-town measurement from the out-of-town measurement). Other days the UHI is 0, and I’ve seen UHI of as much as 12 degrees. So how does the great Steven Mosher adjust for the UHI? The population of this little town was just 1,000 in 1900. How much UHI was there in this town on January 8, 1905? (2 degrees???) How much UHI was in this town on January 8, 2018? (7 degrees?) Congrats on measuring UHI, Steven Mosher.

Phil R
Reply to  MarkW
April 6, 2018 6:46 pm

Agreed with the caveat that the ORIGINAL data must also be available, so that other people can independently check the adjustments, and also propose adjustments that they think are reasonable.

Man Bearpig
Reply to  Phil R
April 7, 2018 2:51 pm

You can’t have the data, you only want to find something wrong with it.

Hugs
Reply to  MarkW
April 7, 2018 4:40 am

I don’t think it makes a difference whether I accept or not.
Rather I find this a slightly amusing turn. The hotspot refuses to be seen at UAH. That’s something pretty significant. Note that there is a big big gang of theoreticians who expect a hotspot, so this is like a mudshot to their eye.

Wayne Townsend
April 6, 2018 11:38 am

So, does the serendipitous correction mean that the UAH temp charts are already corrected, or do they need further correction?

MarkW
Reply to  Wayne Townsend
April 6, 2018 2:53 pm

What’s serendipitous about it? Unless you are just trying to create a distraction.
A difference was discovered between one satellite and other satellites as well as balloon data for the same areas.
The difference was researched, a reason found, a correction calculated.
If you can find fault with either the reason or the correction, I heartily invite you to present them.
If your only goal is to whine, then please, go away.

Reply to  Wayne Townsend
April 6, 2018 5:24 pm

When RSS revised their output, just before Paris, to get rid of the 25 years of no warming, Christy was the first to be selected to peer review the paper (as reported on Roy Spencer’s blog). When, in the process, he asked some embarrassing questions, the journal refused publication pending some clarifications. RSS revised the paper somewhat and requested a different reviewer, which was granted. The paper was then speedily passed for publication.
Christy reported his take (on Roy Spencer’s blog, and perhaps elsewhere) on the “corrections” RSS discovered that eliminated their “pause”, saying that it appeared to him that 80% of that warming was due to using said satellite data with no corrections for the known problems, plus another satellite, minus corrections, because they (RSS, whoever) weren’t really sure what was right and what wasn’t. UAH had given up using that satellite long before.
The other 20% of the new warming came from adjustment of their data against climate models, not using any actual reference data.
I read this new paper as Christy and Spencer putting numbers to what they discovered at that time. Perhaps it just took this long to nail it all down to particulars.

Alan Tomalty
Reply to  AndyHce
April 6, 2018 10:01 pm

Did I read that right? “adjustment of their data against climate models”
You mean they actually adjusted temperature data based on climate models? Agggggggggggggggggggggggggggggh

Chimp
April 6, 2018 11:41 am

This shows why the dreaded “reanalysis” is sometimes warranted.

Reply to  Chimp
April 6, 2018 11:55 am

Careful. This post shows how for this one instance, while providing open and reproducible data, methods and algorithms you can make a logical, science based argument that reanalysis of the data in this specific case is warranted. Applying this post more generally as a pass to those who keep secrets is not warranted.

Chimp
Reply to  Boulder Skeptic
April 6, 2018 12:09 pm

Which is why I said “sometimes”.
Not all reanalysis is done to obfuscate and lie. It has its proper uses.

Joel Snider
Reply to  Boulder Skeptic
April 6, 2018 12:23 pm

‘Not all reanalysis is done to obfuscate and lie.’
Agreed. But remember, this entire venture is based upon assigning predictability – multi-decadal and even century-plus predictability – to complex systems that we don’t even know all the facets to.

MarkW
Reply to  Boulder Skeptic
April 6, 2018 2:54 pm

And not all models are pieces of carp.
As I said earlier, judge each case on its merits.

Alan Tomalty
Reply to  Boulder Skeptic
April 6, 2018 10:04 pm

MarkW April 6, 2018 at 2:54 pm
“And not all models are pieces of carp.”
Tell me which climate model does not have a built in error of physics that is so large as to render it useless for projections? I would like to invest in that climate model.

April 6, 2018 11:46 am

K-T and assorted clone diagrams of atmospheric power flux balances include a GHG up/down/”back” LWIR energy loop of about 330 W/m^2 which violates three basic laws of thermodynamics: 1) energy created out of thin air, 2) energy moving (i.e. heat) from cold to hot without added work, and 3) 100% efficiency, zero loss, perpetual looping.
One possible defense of this GHG loop is that USCRN and SURFRAD data actually measure and thereby prove the existence of this up/down/”back” LWIR energy loop. Yet in many instances the net 333 W/m^2 up/down/”back” LWIR power flux loop exceeds the downwelling solar power flux by more than a factor of two, a rather obvious violation of conservation of energy.
And just why is that?
Per Apogee SI-100 series radiometer Owner’s Manual page 15. “Although the ε (emissivity) of a fully closed plant canopy can be 0.98-0.99, the lower ε of soils and other surfaces can result in substantial errors if ε effects are not accounted for.”
Emissivity, ε, is the ratio of the actual radiation from a surface to the maximum S-B BB radiation at the surface’s temperature. Consider an example from the K-T diagram: 63 W/m^2 / 396 W/m^2 = 0.16 = ε. In fact, 63 W/m^2, 289 K and 0.16 together fit just fine in a gray-body (GB) version of the S-B equation.
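The arithmetic in that example is easy to check; the sketch below only verifies the numbers as quoted (σ = 5.67e-8 W/m^2/K^4), not the commenter's physical interpretation of them:

```python
# Check the gray-body arithmetic: epsilon = actual flux / blackbody flux
SIGMA = 5.67e-8               # Stefan-Boltzmann constant, W/m^2/K^4

T = 289.0                     # surface temperature from the K-T diagram example, K
bb_flux = SIGMA * T**4        # blackbody flux at 289 K
eps = 63.0 / bb_flux          # ratio of the quoted 63 W/m^2 to the blackbody flux

print(round(bb_flux, 1))      # ~395.5 W/m^2, close to the 396 quoted
print(round(eps, 2))          # ~0.16, matching the ratio in the comment
```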
What no longer fits is the 330 W/m^2 GHG loop which vanishes back into the mathematical thin air from whence it came.
“Their staff is too long. They are digging in the wrong place.”
“There is no spoon.”
And
The up/down/”back” GHG radiation of RGHE theory simply:
Does
Not
Exist.
Which also explains why the scientific justification of RGHE is so contentious.
https://www.linkedin.com/feed/update/urn:li:activity:6384689028054212608

Thomas Homer
Reply to  nickreality65
April 6, 2018 12:30 pm

Bravo nickreality65!
[ … include a GHG up/down/”back” LWIR energy loop of about 330 W/m^2 … ]
Is that the same GHG ‘energy loop’ that is purported to keep the entirety of Earth’s atmosphere 30+ degrees warmer than it would be without that ‘energy loop’? Let’s imagine how much energy it would take to warm the open air stadium in Green Bay by 30 degrees. Since it’s open air, the heat keeps escaping, just like Earth’s atmosphere. It’s going to take a lot of continuous energy. That ‘energy loop’ is continually doing that much work around the globe yet we can’t measure it?

Reply to  Thomas Homer
April 6, 2018 1:42 pm

[snip – off topic, multiple links driving traffic to your website. -mod]

MarkW
Reply to  nickreality65
April 6, 2018 2:56 pm

1) Energy isn’t being created.
2) Energy flows from cold to hot all the time; it’s just that the net flow is usually the other way. Regardless, when something not as cold covers something colder, other things in the system will warm, even if they are still warmer than the not-so-cold thing.
3) Energy transfer from photons to molecules and back again, is 100% efficient. I’m not sure what you are complaining about here.

Reply to  MarkW
April 6, 2018 4:52 pm

1) Yeah, it is. See K-T balance. 160 W/m^2 arrives at the surface. Leaving the surface: 17 W/m^2 convection + 80 W/m^2 latent + 63 W/m^2 adds up to 160. Where does that 333 W/m^2 come from? A bogus application of S-B BB.
2) Not without adding work, e.g. refrigerator.
3) Not according to Al’s award winning photo-electric equation, i.e. work function. Photons leaving have less energy than photons entering. Don’t get all that vibration, oscillation, rotation, orbit changing for free.

MarkW
Reply to  MarkW
April 6, 2018 7:29 pm

1) Your chart is incomplete. Complete ones balance.
2) Completely false. You are confusing conduction with radiation.

MarkW
Reply to  MarkW
April 6, 2018 7:31 pm

The temperature of an object is based on the sum of all radiations into the object. The temperature of the object will increase until the radiation out equals the radiation in.
If a cold object covers a colder object, the amount of radiation a hot body receives increases. As a result, the temperature MUST increase until incoming matches outgoing again.
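That equilibrium argument can be sketched numerically. The flux values below (240 and an extra 100 W/m^2) are illustrative assumptions, not figures from the thread; the point is only that any added incoming radiation raises the temperature at which outgoing emission balances incoming:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def equilibrium_temp(flux_in):
    """Temperature at which a blackbody's emission sigma*T^4 balances the incoming flux."""
    return (flux_in / SIGMA) ** 0.25

t1 = equilibrium_temp(240.0)          # e.g. absorbed sunlight alone
t2 = equilibrium_temp(240.0 + 100.0)  # plus extra radiation from a cooler source

print(round(t1, 1))  # ~255.1 K
print(round(t2, 1))  # ~278.3 K: the equilibrium temperature must rise
```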

Patrick J Wood
April 6, 2018 11:58 am

It took this long to “discover” the “error”, or did the fear of actual examination of their work bring forth the admission?

Latitude
Reply to  Patrick J Wood
April 6, 2018 12:27 pm

…even funnier…using the old technology to calibrate the new fancy-schmancy technology

Reply to  Patrick J Wood
April 6, 2018 12:38 pm

PW, the problem has been known for years. I wrote about it in the climate chapter of The Arts of Truth. What was done in this paper, a follow-on to the UAH V6 paper describing the better aperture correction (Spencer posted a version on his blog two years ago), is to quantify the NOAA-14 bias that crept in, minimized in V6 by not using it even ‘corrected’ after 2001. This also helps explain the difference between the new ‘Mearized’ RSSv4 and UAH v6.
It has minimal impact on the gross observational absence of the modelled hotspot known since AR4.

Sheri
Reply to  Patrick J Wood
April 6, 2018 5:31 pm

PJ: Since there will never be any actual examination of NOAA, I don’t expect any paper from them. I am guessing you believe that is the only possible reason anyone would double-check.

whiten
April 6, 2018 12:02 pm

Off the top of my head, I have to ask: what is the satellites’ main time synchronization with earthly time-measuring metrics?
I’m asking because I am not sure!
If Greenwich synchronization is not “on”, the error margin could be significant over time.
And maybe I happen to misunderstand all this!
Just saying.
cheers

April 6, 2018 12:13 pm

All temperature data must be “aged”
for 25 years before being used for real science.
25 years should be enough time for the
repeated data “adjustments” to settle down.
After 25 years, the average temperature will be
about the same anyway — not enough of a change
for people to notice … unless hysterical leftists
are running in circles bellowing: “the world is
melting from global warming — head for the hills”.
After 20 years of reading about global warming,
my confidence level in the average temperature
compilation has declined to below zero,
if that’s possible.
If you want to convince me that climate change
is harmful, then show me who has been harmed
… by the greening of the Earth,
and warmer nights in the Arctic
= no one hurt — everyone should be happy!
The coming climate change catastrophe is the
biggest hoax in history — no one knows what the
future climate will be, and there is no logical reason
to complain about the slight warming in the past
150 years (assuming the warming is real, rather than
just measurement errors).
My climate blog
for people with common sense,
with no predictions of
the future climate … because
predictions are a waste of time !
http://www.elOnionBloggle.Blogspot.com

John harmsworth
Reply to  Richard Greene
April 6, 2018 2:22 pm

Your confidence level in the average temperature compilation apparently requires “adjustment”, which AGW science can easily provide. This will raise your confidence level from below zero to well above 100%. Wrong, but very sure about it.

April 6, 2018 12:21 pm

Re: “…the metric of most interest, the linear trend beginning in 1979. The trend is an indicator of the response of the climate system to rising greenhouse gas concentrations and other forcings.”
CO2 levels really took off around 1950. The log(CO2) forcing over the first 25 years of that period was 45% of the log(CO2) forcing over the last 25 years of that period:
https://www.sealevel.info/co2.html?co2scale=2
http://sealevel.info/co2_log_scale_thru_2017.png
So if you’re looking for evidence of a “CO2 signal” in other climate data, like temperatures or sea ice coverage, it would be best to examine that full period.
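The 45% figure can be roughly reproduced with the standard logarithmic forcing approximation ΔF ≈ 5.35·ln(C2/C1). The CO2 concentrations below are approximate ice-core/Mauna Loa values I have assumed for illustration (they are not taken from the linked pages): about 311 ppm in 1950, 331 in 1975, 356 in 1992, and 407 in 2017.

```python
import math

def forcing(c_start, c_end):
    """Simplified logarithmic CO2 forcing expression, W/m^2."""
    return 5.35 * math.log(c_end / c_start)

# Assumed approximate CO2 concentrations (ppm); illustrative only.
f_first = forcing(311.0, 331.0)  # first 25 years, 1950-1975
f_last = forcing(356.0, 407.0)   # last 25 years, 1992-2017

ratio = f_first / f_last
print(round(ratio, 2))  # ~0.47, close to the ~45% quoted above
```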
It is unfortunate that there were no satellites measuring temperatures for the first 29 years, because omitting that period amounts to accidentally cherry-picking an anomalous starting point. Measuring from that anomalous starting point exaggerates the correlation between CO2 and temperature, because for over a quarter century CO2 (and other GHG) levels rose quite dramatically, yet temperatures did not rise at all. You can see what I’m talking about in these graphs from Hansen et al 1999:
http://sealevel.info/fig1x_1999_highres_fig6_from_paper4_27pct_1979circled.png
http://sealevel.info/fig1x_1999_highres_fig4_from_paper_27pct.png
That inconvenient quarter century (by the end of which the world was in a tizzy over the threat of global cooling) suggests that CO2 is a pretty lousy “control knob,” and it is why it’s so tempting for some folks to pretend that the climate world began in 1979.
W/r/t satellite temperature measurements, it’s an unavoidable problem, but we should at least note that it probably results in exaggerated estimates of climate sensitivity and CO2 culpability.
Unfortunately, the sea ice graphs used by alarmists also usually start with 1979, and that is less forgivable. Satellite measurement of ice coverage/extent with passive microwave instruments (which can see through clouds) began in 1973 with Nimbus 5 (launched Dec. 11, 1972), and satellite measurement data of ice volume didn’t begin until 2003. Yet climate alarmists almost always start their graphs with Nimbus 7 data, in 1979. Here are some examples:
http://psc.apl.uw.edu/research/projects/arctic-sea-ice-volume-anomaly/
http://sealevel.info/BPIOMASIceVolumeAnomalyCurrentV2.1_2018-04-06_40pct.png
https://www.ipcc.ch/pdf/assessment-report/ar5/wg1/WG1AR5_Chapter04_FINAL.pdf#page=15
http://sealevel.info/AR5_chapt4_p15_topgraph.png
http://sealevel.info/nasa_jpl_vital-signs_arctic_sea_ice_min_2018-04-06.png
https://climate.nasa.gov/vital-signs/arctic-sea-ice/
The problem with that starting date is the same for sea ice as it is for temperatures. This graph is from the IPCC’s First Assessment Report (I added the red circles).
http://www.sealevel.info/ipcc_far_pp224-225_sea_ice2_1979circled.png
The first two IPCC Assessment Reports showed graphs of sea ice extent starting in 1973. Those graphs were not included in later Assessment Reports. I imagine that if you asked why the IPCC no longer uses sea ice data from prior to 1979 you’d be told that the Nimbus 7 multichannel instrument was superior to the earlier single-channel instrument aboard Nimbus 5. That is true, but, as you can see, it is also true that the 1979 starting point is awfully convenient.

Reply to  daveburton
April 6, 2018 2:16 pm

Sorry, I didn’t mean to “shout.” I should have shrunk that last graph, especially.

Reply to  daveburton
April 6, 2018 2:48 pm

daveburton
April 6, 2018 at 2:16 pm
Gosh Dave, I don’t think you were shouting…just providing some very interesting comparison graphs…keep it up.

Latitude
Reply to  daveburton
April 6, 2018 4:18 pm

dave….excellent post…and covered it all

John harmsworth
Reply to  daveburton
April 6, 2018 2:38 pm

My Goodness! Does the government know about this? Lol?

BallBounces
April 6, 2018 12:22 pm

“will carry the extra fuel needed to keep them closer to home throughout their time in space.” Let’s get as much of that planet-killing fuel out into space as we can. Saving the planet — one sat at a time!

TonyL
April 6, 2018 12:26 pm

They discontinued the use of NOAA-14 in 2001, which conveniently sidestepped the worst of the data contamination due to orbital drift, as was revealed later. This is an excellent example of the power of thinking about what you are doing, a research technique which can be in rather short supply these days.

Reply to  TonyL
April 7, 2018 6:27 am

+1

Roy Frybarger
Reply to  TonyL
April 7, 2018 9:38 am

Rationality – forever in short supply.

Keith bryer
April 6, 2018 12:36 pm

You will never see this in the mainstream media.

April 6, 2018 12:40 pm

Very nice paper. They knew of the problem before v6, minimized it, and have now quantified it. Wonder if Mears at RSS will further revise his v4 based on this?

Reply to  ristvan
April 6, 2018 4:02 pm

In their paper Mears et al. found the following: “If we exclude MSU data after 1999 (implicitly assuming the error is due to NOAA-14), the long-term trend decreases by 0.008 K decade−1”

April 6, 2018 12:49 pm

Why did only one satellite go off piste?
Drill wide. Are the rest OK?
That should be a conclusion that is reported even if the answer is a dull “They’re fine”.

TonyL
Reply to  M Courtney
April 6, 2018 2:21 pm

Satellite drift has been a *huge* PITA since the earliest days. The modern and powerful AQUA and TERRA satellites have ion thrusters which allow active station keeping over the life of the mission.
Finally.

Don K
Reply to  M Courtney
April 6, 2018 2:43 pm

“Why did only one satellite go off piste?”
You put the satellite on top of a couple of huge rockets, fire them off, and, given luck, the satellite goes up and into orbit. Sometimes the orbit isn’t quite what you hoped for, but mostly, any orbit is better than no orbit.
Modern satellites often have at least some maneuvering capability that allows satellites to be moved into or at least toward, the desired orbit. Older satellites had less or no maneuver capability.

Dr Deanster
April 6, 2018 12:50 pm

I went to UAH and Spencer’s site … but I’m not finding what I’m looking for. I’d like to see one of those global maps with all the pretty colors that they put up for the monthly anomaly, except I’d like it to be for the life of the measurements. Looking at all the monthly maps, I’m not seeing any “global” warming … I’m just seeing Arctic warming. I did a Willis-style comparison of two cause-and-effect events in a talk the other day: Global Greening vs Global Warming. It’s pretty clear CO2, a well-mixed gas, is having a global effect in global greening. If we are to accept that CO2 warms the entire globe via one mechanism, radiation, then we would expect to see a similar global effect as is seen in global greening. Based on the monthly maps, I’m not seeing it. All I see is Arctic warming, and that could be due to any number of causes.
So … if any of you have a link to a global temp map stretching back to 1980, I’d appreciate it.

April 6, 2018 1:10 pm

The errors from NOAA-14 have been known by GISS for some time. This is a plot of the raw monthly average temperatures reported in the ISCCP data set.
http://www.palisad.com/co2/bias/temp.gif
The discontinuity in late 2001 was when NOAA-14 was the lone polar orbiter remaining and was replaced with NOAA-16. NOAA-15 was supposed to last far beyond 2001, but failed early, in mid-2000. Not only was NOAA-14 drifting, NOAA-16 had different LWIR sensor characteristics. Ordinarily, the cross calibration would compensate for both of these, except that it depended on continuous and redundant polar orbiter coverage. Unfortunately, there was no other operating satellite when NOAA-14 was replaced, which caused the temperatures calculated from NOAA-16 sensors and later satellites to trend higher. Note that when you apply 5-year averaging, the 1-month discontinuity becomes a trend.
http://www.palisad.com/co2/bias/temp_5.gif
I reported this cross calibration error to Rossow who was in charge of the ISCCP project under GISS about a decade ago.

Latitude
Reply to  co2isnotevil
April 6, 2018 4:20 pm

+1

Alan Tomalty
Reply to  co2isnotevil
April 6, 2018 11:10 pm

So can we get a corrected graph of 1984 to 2017 actual temperatures (adjusted for satellite drift etc.) so as to give a true picture of what the upper troposphere temp was and is?

Reply to  Alan Tomalty
April 7, 2018 10:02 am

Alan,
I can adjust the offset and correct for drift, but I don’t have a good handle on the precise drift between when NOAA-15 died in mid 2000 and when NOAA-16 replaced NOAA-14 in late 2001. It would look something like this:
http://www.palisad.com/co2/bias/temp_fb.gif

Ill Tempered Klavier
April 6, 2018 1:11 pm

Keep in mind they are not changing the sensor readings here, but the method of calculating a temperature from those readings. The gadget still reads what it read.
When you can show a physical effect and accurately quantify it, then there is a reasonable case for including it in your calculations.
Ad hoc values as a Finagle Factor, not so much.

bill hunter
April 6, 2018 1:17 pm

A tenth of a degree per decade at the hotspot when six tenths was expected suggests negative feedback.

Curious George
April 6, 2018 1:20 pm

Satellite measurements of temperature or a sea level are highly indirect. A direct measurement by a thermometer or a tide gauge is less subject to errors and therefore preferable. However, we can’t cover all coasts with tide gauges, or the land surface with thermometers. Satellite measurements have their important function in covering huge areas, but they depend on so many assumptions that I always take them with a grain of salt – or two.

MarkW
Reply to  Curious George
April 6, 2018 5:10 pm

To paraphrase, All measurements are wrong, but some are useful.

Sheri
Reply to  MarkW
April 6, 2018 5:34 pm

Some are even useful for science.

ResourceGuy
April 6, 2018 1:24 pm

Send a large slide rule up on the next rocket.

Jarryd Beck
April 6, 2018 1:42 pm

I don’t understand how it can cross the equator 14 times a day at 1:30. That sounds like twice: one in the morning, one in the afternoon.

Sweet Old Bob
Reply to  Jarryd Beck
April 6, 2018 1:59 pm

Time zones ….it’s 5 o’clock somewhere …

TonyL
Reply to  Jarryd Beck
April 6, 2018 2:14 pm

Refer to the graphic at the top of this post. Note the fuzzy blob of the sun at the top center and the intended orbit of the satellite which is the (mostly) vertical blue line. The satellite orbit has a constant relationship with the sun. From this view, the Earth rotates in the counter clockwise direction.
So every orbital pass, the satellite passes over a different part of the Earth as the Earth turns under it, but is always 1:30 pm local time because of the fixed relationship with the sun. The satellite completes an orbit once every ~100 minutes.
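The arithmetic behind those figures is simple; assuming the ~100-minute period TonyL quotes, the orbit count per day comes out to roughly the 14 equator crossings per side mentioned in the head article:

```python
MINUTES_PER_DAY = 24 * 60  # 1440 minutes
orbital_period = 100       # approximate sun-synchronous polar orbit period, minutes

orbits_per_day = MINUTES_PER_DAY / orbital_period
print(round(orbits_per_day, 1))  # ~14.4 orbits/day, i.e. ~14 daytime equator crossings
```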

Jarryd Beck
Reply to  TonyL
April 6, 2018 3:16 pm

Ah gotcha.

April 6, 2018 2:21 pm

That reminds me of the NASA Quixeramobim record: since 1900, a 2 degrees Celsius difference. In 1900 there were no airports, and Quixeramobim is sub-tropical: no heating of houses, so no ‘heat source contamination’:
https://data.giss.nasa.gov/cgi-bin/gistemp/show_station.cgi?id=303825860000&dt=1&ds=1
https://data.giss.nasa.gov/cgi-bin/gistemp/show_station.cgi?id=303825860000&dt=1&ds=12

DMacKenzie
Reply to  pietjemol
April 6, 2018 8:13 pm

These giss.nasa graphs need some description as to when each was published, which is raw, which is adjusted, etc., who did the adjusting….

u.k.(us)
April 6, 2018 2:30 pm

Have they updated the guidance systems of ICBMs since the ’60s? It would be a shame to nuke yourself 🙂

Chimp
Reply to  u.k.(us)
April 6, 2018 2:44 pm

Yes.
The Guidance Replacement Program (GRP) for Minuteman III ICBMs, initiated in 1993, replaced the disk-based D37D flight computer with a new one that used radiation-resistant semiconductor RAM.
Another GRP, completed in 2008, replaced the NS20A Missile Guidance Set (MGS) with the NS50A MGS. The newer system extends the service life of the Minuteman missile beyond the year 2030 by replacing aging parts and assemblies with current, high reliability technology while maintaining the current accuracy performance.
Even greater advances have been made in Sea-Launched Ballistic Missile guidance.

Reply to  Chimp
April 6, 2018 2:55 pm

Chimp
April 6, 2018 at 2:44 pm
A bit off topic…sorry mods.
Does the US have the same nuclear powered drone technology (both above and below the water) the Russians have, or has the green mob stopped development of new nuclear technology?
Could this technology be used for the next version of ARGO recorders…able to station keep for months or years?

Chimp
Reply to  Chimp
April 6, 2018 3:21 pm

The US developed nuclear-powered aerial drones, but the project was shelved under Obama.
I might be out of the loop, but don’t know of any US program comparable to the Russian intercontinental (trans-Pacific) nuclear torpedo.

Tsk Tsk
Reply to  Chimp
April 6, 2018 8:40 pm

The D-5 used plated wire. I know the diagram shows a “disc” but I would be surprised if it actually had a disc and not a wire spool.

Reply to  Chimp
April 7, 2018 5:33 am

If missile launch stations/silos require accurate local temperatures, have those records been retained and what do they show?
They should not show much UHI effect. Nobody living too close, might one assume?

Non Nomen
April 6, 2018 11:19 pm

This underlines the importance of all data, raw or “adjusted”, as well as of questioning and cross-checking them. A certain Mann doesn’t want his data made public at all. It becomes more and more obvious why.

O R
April 6, 2018 11:50 pm

This is not very convincing by Spencer and Christy.
A lot of beliefs but no proper evidence. They claim that the drift of NOAA-14 is too large and can’t be corrected, but they can’t demonstrate that this is a fact.
Thus, they have no evidence that their personal choice of satellite, NOAA-15 vs NOAA-14, is right.
All that is needed is a simple difference chart where the effect of the choice is validated against independent data over the period of interest, the overlap 1999-2005.
Here’s a comparison of UAH and RSS TMT versus an average of the neighbour AMSU-channels 4 and 6:
http://postmyimage.com/img2/381_image.jpeg
UAH drops like a rock due to the personal choice by Spencer and Christy. The drop stops when the nondrifting AQUA satellite comes in and “corrects” NOAA-15.
RSS is only half wrong since they use both NOAA-14 and NOAA-15, thereby splitting the error.
Actually, the chart above supports that NOAA-14 is right and NOAA-15 wrong.
The divergence in UAH vs background is almost spot on the divergence between MSU and AMSU in Mears et al 2015, Figure 7 C:
http://postmyimage.com/img2/917_image.jpeg
I have done similar comparisons (difference charts) using all kinds of radiosonde and reanalysis data, but they are even worse for satellites in general and UAH in particular.
So I challenge Spencer and Christy, or those who believe in them: please present any evidence, in the form of a simple difference chart like the one above, that supports the claim that NOAA-15 is right and NOAA-14 wrong.

O R
Reply to  Latitude
April 8, 2018 8:58 am

That comment has absolutely nothing to do with MSU/AMSU sensors.

Non Nomen
Reply to  O R
April 8, 2018 3:19 am

This is not very convincing by Spencer and Christy.

Convincing or not remains to be seen. The fact that they are cross-examining data that are available (not, as with Mann’s hidden hokey-pokey bristlecone-driftwood numbers) and putting their conclusions to the public test qualifies them as ‘good’ scientists. Be the outcome as it may, they are advancing science by asking questions and not blazoning dubious hockeyschticks as ‘nothing but the truth’.

Dr. Strangelove
April 7, 2018 12:59 am

This new paper by Christy et al. is not looking good for the GHE warming hypothesis. Not only is the tropical mid-troposphere hotspot only one-sixth of the warming predicted by IPCC models (0.1 C/decade vs. 0.6 C/decade), but more importantly the low troposphere is warming faster than the mid-troposphere (0.08-0.17 C/decade vs. 0.1 C/decade). This contradicts the models, which show the mid should be warming twice as fast as the low. This is not the sign of GHE warming.
GHE warming is due to radiative heat transfer per the S-B law:
j = εσT^4
The derivative of the flux:
dj/dT = 4εσT^3
The higher the temperature, the more energy needed for differential increase in temperature. Conversely, the lower the temperature, the less energy needed. Mid-troposphere is cooler than low troposphere. It should warm faster but that’s not what satellite data show.
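The derivative dj/dT = 4εσT^3 can be evaluated numerically. The temperatures below (288 K for the lower troposphere, 250 K for the mid-troposphere) and ε = 1 are illustrative assumptions, not figures from the paper; the sketch shows only that the flux change per kelvin is smaller at the cooler level:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def dj_dT(T, eps=1.0):
    """Derivative of the Stefan-Boltzmann flux j = eps*sigma*T^4 with respect to T."""
    return 4.0 * eps * SIGMA * T**3

low = dj_dT(288.0)  # roughly a lower-troposphere temperature
mid = dj_dT(250.0)  # roughly a mid-troposphere temperature

print(round(low, 2))  # ~5.42 W/m^2 per K at 288 K
print(round(mid, 2))  # ~3.54 W/m^2 per K at 250 K: less flux change per degree
```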

Dodgy Geezer
April 7, 2018 1:47 am

Seems fairly obvious to me.
If a scientist discovers a real natural phenomenon, people will soon start to study it and will develop instruments and techniques capable of easily isolating the data they want
If a scientist claims to have discovered a natural phenomenon which actually doesn’t exist, people who depend on studying it for their jobs will still develop techniques for isolating the data they need – in this case playing with the variability of sensors at the limits of their accuracy, and producing spurious ‘adjustments’….

Dodgy Geezer
April 7, 2018 2:07 am

If the facts are on your side, argue the facts.
If the facts aren’t on your side, argue the adjustments….
If the adjustments aren’t on your side, just argue. Ad-homs will do quite nicely…

Alan Tomalty
April 7, 2018 2:10 am

Another logical light bulb went off in my head. Since the Masters golf tournament’s 83-year temperature history (the same time frame as almost all of man’s greenhouse gas emissions) shows no warming, what are the odds of only one place on earth being like that? I will venture to guess that the odds would go to 9 sigma. For the uninitiated: 5 sigma is 99.99994% confidence, which is the standard for physics. Medicinal studies are unfortunately only 2 sigma. I would guess that climate science is no better than 1 sigma in the end.
Soooo, if the odds against that are astronomical, then the odds must be 9 sigma in favor of there being thousands of sites around the world that are like Augusta, which is being played this weekend. And since there are thousands of such sites, there must be temperature records for some of them. Each one is a nail in the coffin of global warming. The odds would be 9 sigma against all those sites existing while global warming is real at the same time, because Mr. CO2 doesn’t discriminate: he is evenly mixed in the atmosphere everywhere. So how can there be global warming in only parts of the world? Since it rains moderately in Georgia, either CO2 doesn’t have any effect in Georgia, or else what makes Augusta, Georgia so special that it has defied Mr. CO2? If Augusta, Georgia isn’t special (except for the golf course there and the Masters held there every year for 4 days), then Mr. CO2 has an Achilles heel. Since we have shown above that there must be thousands of other sites, including 10 of the 13 stations in Antarctica that have stood up to Mr. CO2’s bullying by showing no warming, I say that Mr. CO2 has no clothes.

Latitude
Reply to  Alan Tomalty
April 7, 2018 12:17 pm

…plus, what are the odds that of two thermometers less than 20 miles apart, one shows warming and the other shows cooling? There’s no such thing as an Urban Cold Island.

April 7, 2018 6:28 am

Google are 100% manipulating searches on climate science.

DR
Reply to  Mark - Helsinki
April 9, 2018 2:08 pm

Yep, you’re right about that.

April 7, 2018 7:48 am

Yes, agree w/others, this seems obvious and should be acknowledged in the “science”, such as it is. CO2 has little effect on tropical areas as Willis & others have shown — cloud-feedbacks keep temps from rising (just more day-time clouds). The “CO2 tropical warming” that pops up in the models is garbage.

Ian H
April 7, 2018 8:05 am

I am not sure the issue has been described correctly. It has been presented as an issue of solar precession whereby the time of crossing shifts gradually from midday/midnight to dawn/dusk. But all polar orbits must precess like this since the plane of the orbit is fixed in space as the earth rotates about the sun.

whiten
Reply to  Ian H
April 7, 2018 9:12 am

Ian H
April 7, 2018 at 8:05 am
Ian, I am not quite sure I correctly get your point, but if it helps:
The “solar precession” issue, in this particular subject, is not in regard to the satellite’s navigator, or the navigating system.
It is, or could be, a serious issue when considering the synchronization between it and the timekeeping used in recording the data.
Something like the case of Captain Cook’s daily record.
The hourly timing on that record would be as per the Captain’s watch, a mechanical timekeeping clock, which over the course of a seafaring adventure had to be synchronized to the navigator’s daily sundial time measurement (probably at midday, 12:00, as per the navigator’s sundial).
No synchronization in between, or a poor one, will result in a considerable error over time in the actual timing of the record taken.
For example, in a multi-year record, even the odd 366-day year could subject the process to a considerable degree of error when such synchronization is required, even when the navigation and the record keeping, each in its own right, do not have much problem with it. Still, when synchronization between them is required, the odd year can be a pain that has to be dealt with, since the propagated error could end up messing up the integrity of the record through wrong timekeeping.
I am not sure this is put clearly enough, or even that it makes much sense.
I am only trying to show that it could be a little more complicated than it may seem at first look.
cheers

Pamela Gray
April 7, 2018 10:53 am

At the peak of an interglacial period (in the past some have been sharp, some have been plateau-like), there may be decadal ups and downs. So all this .0001 accuracy nonsense is just that; nonsense. We are in a warm period. In general, we should be warmer compared to being on either an up-slope or a down-slope from/to a glacial period.
So someone discovers that some of our warm-period sensors are a bit off. Fix it and move on. And if Nick gets his knickers in a twist over what he believes to be data that is more accurate than something that was a bit off, well, at least it keeps him busy and out of trouble.
As for me, I am glad it is warm. I don’t need to use as much wood heat and it is more fun fishing when it is warm than when it is cold. Except today I can’t go fishing because I did too much heavy lifting last week. All this is to say that it is WAAAAAYYYYY more important that I can’t go fishing today!!!!

Latitude
Reply to  Pamela Gray
April 7, 2018 12:18 pm

LOL….amen

April 8, 2018 2:05 am

Must say: I told you so.
How many times did I mention that we cannot unilaterally trust the sats, because of the destruction of anything in space that you want to use to measure? Especially now, lately, for the past few years, the sun’s rays are terrible.
Better to look at Tmax and Tmin at the stations. But you have to use the correct sampling procedure. Data sets (like BEST) that are completely biased toward the NH are useless, since there has been little or no warming in the SH.
there is an explanation here:
https://wattsupwiththat.com/2018/04/05/a-look-at-the-ghcn-daily-minimums-debunks-a-basic-assumption-of-global-warming/comment-page-1/#comment-2784761

Steven Fraser
Reply to  henryp
April 8, 2018 9:51 pm

I wonder if the contract between UAH and NASA stipulates periodic reporting when the satellites go out of spec.

jasg
April 9, 2018 2:26 am

There is a big difference between sensible calibrated adjustments to correct wonky sensor data and adjustments based on just making data up (as in the NASA extrapolation and C&W infilling over the Arctic), guessed TOBS adjustments (based on an implausible error graph implausibly applied and uncalibrated) or the replacement of good data with poor data in the NOAA pause-buster adjustments. As for the newly warmed-up RSS graph, the fact that their reviewers did not include anyone actually qualified to review it says everything.

DR
April 9, 2018 2:18 pm

LOL,
Hey Nick Stokes and Moshpit, why don’t you guys explain to Tony that he’s just imagining things, and that Carl Mears is a stand-up guy … he’d never do anything unethical. ROTFLMAO
https://realclimatescience.com/2018/04/climate-mafia-at-work/
One would expect that Dr. Carl Mears would know how to spell satellite, and that he would also notice that even after he changed the data there is still a very large discrepancy between the models and observations, with observations falling at the very lower end of the model range. But let’s look at how he changed the data. He simply got rid of his error range (light blue) and moved the temperature (black line) up to the very top of his error range.