UAH Global Temperature Update for June 2020: +0.43 deg. C

Reposted from Dr. Roy Spencer’s Blog

July 2nd, 2020 by Roy W. Spencer, Ph.D.

The Version 6.0 global average lower tropospheric temperature (LT) anomaly for June, 2020 was +0.43 deg. C, down from the May, 2020 value of +0.54 deg. C.

The linear warming trend since January, 1979 is +0.14 C/decade (+0.12 C/decade over the global-averaged oceans, and +0.18 C/decade over global-averaged land).

Various regional LT departures from the 30-year (1981-2010) average for the last 18 months are:

 YEAR MO GLOBE NHEM. SHEM. TROPIC USA48 ARCTIC AUST 
2019 01 +0.38 +0.35 +0.41 +0.36 +0.53 -0.14 +1.15
2019 02 +0.37 +0.47 +0.28 +0.43 -0.02 +1.05 +0.05
2019 03 +0.34 +0.44 +0.25 +0.41 -0.55 +0.97 +0.58
2019 04 +0.44 +0.38 +0.51 +0.54 +0.49 +0.93 +0.91
2019 05 +0.32 +0.29 +0.35 +0.39 -0.61 +0.99 +0.38
2019 06 +0.47 +0.42 +0.52 +0.64 -0.64 +0.91 +0.35
2019 07 +0.38 +0.33 +0.44 +0.45 +0.11 +0.34 +0.87
2019 08 +0.39 +0.38 +0.39 +0.42 +0.17 +0.44 +0.23
2019 09 +0.61 +0.64 +0.59 +0.60 +1.14 +0.75 +0.57
2019 10 +0.46 +0.64 +0.27 +0.30 -0.03 +1.00 +0.49
2019 11 +0.55 +0.56 +0.54 +0.55 +0.21 +0.56 +0.38
2019 12 +0.56 +0.61 +0.50 +0.58 +0.92 +0.66 +0.94
2020 01 +0.56 +0.60 +0.53 +0.61 +0.73 +0.12 +0.66
2020 02 +0.76 +0.96 +0.55 +0.76 +0.38 +0.02 +0.30
2020 03 +0.48 +0.61 +0.34 +0.63 +1.09 -0.72 +0.16
2020 04 +0.38 +0.43 +0.34 +0.45 -0.59 +1.03 +0.97
2020 05 +0.54 +0.60 +0.49 +0.66 +0.17 +1.15 -0.15
2020 06 +0.43 +0.45 +0.41 +0.46 +0.38 +0.80 +1.20

The UAH LT global gridpoint anomaly image for June, 2020 should be available within the next week here.

The global and regional monthly anomalies for the various atmospheric layers we monitor should be available in the next few days at the following locations:

Lower Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt
Mid-Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tmt/uahncdc_mt_6.0.txt
Tropopause: http://vortex.nsstc.uah.edu/data/msu/v6.0/ttp/uahncdc_tp_6.0.txt
Lower Stratosphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tls/uahncdc_ls_6.0.txt
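
For readers who want to check the headline numbers themselves, below is a minimal sketch (in Python, and not Dr. Spencer's own code) that pulls the Lower Troposphere file above and fits an ordinary least-squares trend to the global column. It assumes the file keeps its long-standing plain-text layout: header lines, then "Year Month Globe …" data rows, then trailer lines that do not begin with a four-digit year. It needs numpy installed.

```python
# Minimal sketch: download the UAH v6.0 LT file and fit a linear trend.
# Assumes the usual layout; data rows start with a 4-digit year.
import urllib.request
import numpy as np

URL = "http://vortex.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt"

def read_global_lt(url=URL):
    rows = []
    with urllib.request.urlopen(url) as f:
        for raw in f.read().decode("ascii", "replace").splitlines():
            parts = raw.split()
            # Keep only "year month anomaly ..." data rows.
            if len(parts) >= 3 and parts[0].isdigit() and len(parts[0]) == 4:
                year, month = int(parts[0]), int(parts[1])
                rows.append((year + (month - 0.5) / 12.0, float(parts[2])))
    return np.array(rows)

data = read_global_lt()
t, anom = data[:, 0], data[:, 1]
mask = t >= 1979.0                      # trend is quoted from January 1979
slope = np.polyfit(t[mask], anom[mask], 1)[0]
print(f"Linear trend: {slope * 10:+.2f} C/decade")  # expect a value near the +0.14 quoted above
```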

110 Comments
mario lento
July 2, 2020 2:10 pm

Looks scarier than Covid… we’re all going to drown with the virus, and there will be a double death toll count by then. One for death by climate change!

Grant A. Brown
Reply to  mario lento
July 2, 2020 9:03 pm

Does anyone actually believe that it is possible to calculate the average global temperature for a period of a month, down to the hundredths of a degree, even with satellites and supercomputers?

mario lento
Reply to  Grant A. Brown
July 2, 2020 10:06 pm

Impossible for a global average. And the precision of the RTDs used isn't down to 1/100th of a degree either.

Samuel C Cogar
Reply to  mario lento
July 3, 2020 4:09 am

Impossible for global average.

Not really …. because all one needs is two (2) or more “numbers” to calculate an average.

I want to know, …. what was the “global average cloud cover” for the month of June for each of the past 20 years?

Iffen you don’t know what the “global average cloud cover” is/was, …… then any/all “global average temperature” calculations are little more than bogus propaganda (junk science) reporting.

mario lento
Reply to  Samuel C Cogar
July 4, 2020 11:22 am

Yes, you can calculate anything. It just might not (and probably does not) tell you what you think it does! Anyway, yes, I agree with you. So the result is that you get an average of the two numbers in your example… but one cannot interpret that to mean that we know the average temperature; we only know the average of two numbers.

Reply to  Grant A. Brown
July 2, 2020 10:41 pm

“Does anyone actually believe that it is possible to calculate the average global temperature for a period of a month, down to the hundredths of a degree, even with satellites and supercomputers?”

Yes, because it's not an average. It's a prediction or expected value.

That is, it represents the best estimate of what you would measure if you used a perfect instrument at every location.

Not an average in the usual sense of the term.

donald penman
Reply to  Steven Mosher
July 3, 2020 2:30 am

Yes, of course the satellites use models to predict the actual temperature at any point at a certain time, but it is still believed to be the actual temperature at that point and time. A bigger problem, it seems to me, is the period of time over which it is averaged. It is a monthly averaged temperature, but it could just as well be averaged daily or second by second. I wonder, if we were to average the 500 mb height values over the globe for a month, and the temperatures they imply, how close we would get to the satellite temperature for that month.

Reply to  Steven Mosher
July 3, 2020 6:24 am

Mosher –>> “not an average in the usual sense of the term.”

Perhaps it should then be called Global Predicted Temperature (GPT) rather than GAT!

If you follow the accepted SCIENTIFIC practice of SIGNIFICANT DIGITS, there is no way to increase the precision of a temperature, regardless of the mathematics (averaging or predicting) performed upon it. A good example is an average of 7° and 3°: who decides how many decimal places to show, and why?

Too many scientists use the 'error of the mean' to claim it defines the precision of the mean value. It does not. It is a statistical descriptor of how close the calculated mean is to the true mean. It is NOT a descriptor of the precision of the mean!

While we're at it, the 'error of the mean' neither describes nor affects uncertainty. Too many scientists quote the CLT and divide by √n when determining uncertainty. Totally wrong. If uncertainty budgets and their propagation through the calculations are too difficult, the GUM allows the standard deviation to be quoted, although propagating the variation of different distributions is not easy either.
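
For concreteness, here is a minimal sketch (illustration only, with made-up whole-degree readings) of the two quantities this comment distinguishes: the sample standard deviation of the data and the standard error of the mean (SEM = s / sqrt(n)). The code only computes them; whether the SEM belongs on a reported average is exactly what is being disputed in this thread.

```python
# Minimal sketch: sample SD vs. standard error of the mean.
import math

readings = [7.0, 3.0, 5.0, 6.0, 4.0]   # hypothetical whole-degree readings
n = len(readings)
mean = sum(readings) / n
s = math.sqrt(sum((x - mean) ** 2 for x in readings) / (n - 1))  # sample SD
sem = s / math.sqrt(n)                                           # standard error of the mean

print(f"mean = {mean:.2f}, sample SD = {s:.2f}, SEM = {sem:.2f}")
```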

Reply to  Jim Gorman
July 3, 2020 4:54 pm

"If uncertainty budgets and their propagation through the calculations are too difficult, the GUM allows the standard deviation to be quoted."

When would they be “too difficult”? You know how you spatially interpolate. You know the uncertainty of every value you include. You have a procedure to knock out suspicious outliers. You know enough about the data collected, past and present, to build correlation matrices of those uncertainties.

I probably have the computing horsepower on my home laptop, which was built for engineering calcs. If not, they could tap any one of the tens of thousands of systems at national labs, major corporations, and educational institutions that do. Any reservoir-engineering modeler in any large oil company HQ, working on this data set instead of the (much lower quality) reservoir data she is used to, could do it, including distribution outputs.

Sorry, not buyin’ it…

Reply to  Jim Gorman
July 3, 2020 10:23 pm

Jim,

What is the answer to how many significant digits we should be stating for “average” global temperature and the associated uncertainty?

I agree with Grant and Mario that 1/100 degree for a monthly average calculation feels inappropriate. It seems there are so many issues regarding an “average” calculation with accuracy, preciseness, frequency of measurements, grid size, extrapolation/interpolation of unmeasured cells, heat island effect, elevation/altitude issues, data homogenization…

Dr. Spencer stated an average of +0.54 degrees for a monthly global anomaly. That hundredths digit (the 0.04) seems inappropriate. And I agree with you that uncertainty should be stated.

Your thoughts?

mario lento
Reply to  RelPerm
July 4, 2020 12:23 am

I agree with your synopsis. Mosher is right in that calculations can provide a precision to fractional decimal points… however, as you point out, without the degree of uncertainty its value could be misinterpreted. There is no single global temperature; however, if the measurement criteria are wide and repeatable, the changing numbers help us see what may be happening. What makes it tough is that temperature and energy change forms… e.g. moist air has much more [latent] energy in it than drier air at the same temperature, so temperature without moisture content complicates what we derive by simply looking at temperature. The best electronic temperature-sensing devices we have, platinum RTDs, are at best good to about 0.1 C, and they need to be calibrated to achieve even that.

Reply to  Jim Gorman
July 4, 2020 7:00 am

RelPerm –> Maybe I didn't state my criticism very well. It's basically that UAH is produced so differently from actual measured temperatures that extreme caution is needed when comparing them.

Dr. Spencer needs to develop and publish the uncertainty inherent in his algorithms so that users can utilize the data in a proper scientific process.

Significant digits are based on the precision with which you can measure a quantity. I am not well versed on how Dr. Spencer calculates a temperature from the satellite information so I can’t judge the proper number of significant digits that should be used.

However, much of the instrumental measurements have only been recorded to an integer value. Adding 1, 2, or 3 decimal places through averaging is very unscientific. From Washington Univ.; “By using significant figures, we can show how precise a number is. If we express a number beyond the place to which we have actually measured (and are therefore certain of), we compromise the integrity of what this number is representing. It is important after learning and understanding significant figures to use them properly throughout your scientific career.” http://www.chemistry.wustl.edu/~coursedev/Online%20tutorials/SigFigs.htm

Anomalies not only hide the actual variance in temperatures but also give a very, very unscientific view of the precision of much of the temperature record, which consists of integers.

Reply to  Jim Gorman
July 4, 2020 7:23 am

” Adding 1, 2, or 3 decimal places through averaging is very unscientific.”

The inference here is that the output quality of an evaluation doesn't improve with more properly vetted data, whether or not that data is error banded. It ALWAYS does…

Reply to  Jim Gorman
July 4, 2020 10:22 am

bigolbob,

You are a child of the digital age. You've been taught that how precisely you can calculate something depends only on how many bits you have in the CPU.

I grew up on analog computers. I could set up an experiment and get an output of 4.1v.

I can set it up a second time and get 3.3v. When you average the two you get 3.95v.

Now, does the average actually give you a more accurate answer? No. Depending on the rounding procedure you use, you get a value of 3.9v or 4.0v. You simply can't get any closer because of the uncertainties associated with the input and output values. Significant digits *DO* impact physical measurements.

Remember, uncertainties GROW when you combine them, they don’t get smaller.

u_total = sqrt(u1^2 + u2^2 + u3^2 + …)

“The inference here is that the output quality of an evaluation doesn’t improve, with more properly vetted data, whether or not that data is error banded. It ALWAYS does…..”

I’m not sure what you mean by “properly vetted data”. You can only improve the output quality of an evaluation if you can lower the uncertainty, you can’t do it by averaging more data points unless those data points are measuring the same thing using the same measurement device multiple times.

Temperature data simply doesn’t meet that qualification.
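
A minimal sketch of the quadrature (root-sum-square) rule written above, with made-up uncertainty values, just to make the arithmetic concrete:

```python
# Minimal sketch: combine independent uncertainties in quadrature.
import math

def rss(*uncertainties):
    """u_total = sqrt(u1^2 + u2^2 + ...) for independent uncertainties."""
    return math.sqrt(sum(u * u for u in uncertainties))

print(rss(0.5, 0.5, 0.5, 0.5))  # -> 1.0: grows with n, but slower than a plain sum (2.0)
```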

Reply to  Tim Gorman
July 4, 2020 10:35 am

“You can only improve the output quality of an evaluation if you can lower the uncertainty, you can’t do it by averaging more data points unless those data points are measuring the same thing using the same measurement device multiple times.”

"We're not discussing "averaging", but yes, you can almost ALWAYS use more data to improve output quality. We're discussing areal interpolation of data to arrive at areal expected values, and then trends in those expected values over time. Yes, the more data that can be used – i.e. data that has error bands similar enough to extant data to reduce output error (as in this case) – the better the outputs, ALWAYS. I.e. the ad hominem attack on the CLT is not based in fact. But feel free to describe a situation where you can't, since this contention was first raised without any technical backup, and not by me…

Reply to  Jim Gorman
July 4, 2020 10:28 am

Mario,

RTC sensors are not linear either. You need a calibration curve that can be applied to the actual output value.

In addition, no temperature measurement device uses RTC outputs directly. The output of an RTC sensor is either a voltage or a current value, which has to be converted to a temperature. If the conversion circuitry is itself temperature dependent, that dependency adds another uncertainty to the measurement. If the RTC and/or the conversion circuitry is age dependent, that adds yet another uncertainty.

An RTC-based temperature measuring device may be more accurate than an old mercury thermometer, but that doesn't mean the RTC device doesn't have its own uncertainty.

mario lento
Reply to  Tim Gorman
July 4, 2020 11:18 am

Hi Tim: Much of what you say is correct, though I don't know what the acronym RTC means. I was speaking of RTDs, which use the resistance of platinum: a tiny current is applied and the resistance change is read off the resulting voltage. RTDs are used because the curve is nearly linear over the range being measured, and the tiny current mitigates self-heating in the sensing resistance… They are extremely precise versus other technologies. 4-wire RTDs are the most reliable, in that the resistance of the conductor wires is measured and removed from the calculation in real time, so you are not measuring the varying resistance of the conductor wires.
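
For illustration, here is a minimal sketch (assumed, not anyone's actual instrument firmware; the thread never specifies a sensor model) of how a 4-wire PT100 RTD resistance reading might be converted to a temperature by inverting the standard IEC 60751 Callendar-Van Dusen polynomial R(T) = R0(1 + A·T + B·T²), valid for the 0..850 °C branch:

```python
# Minimal sketch: PT100 resistance -> temperature via IEC 60751 coefficients.
import math

R0 = 100.0      # ohms at 0 degC for a PT100
A = 3.9083e-3   # standard IEC 60751 coefficient
B = -5.775e-7   # standard IEC 60751 coefficient

def pt100_temp(resistance_ohms):
    """Temperature (degC) from PT100 resistance; solves B*T^2 + A*T + (1 - R/R0) = 0."""
    c = 1.0 - resistance_ohms / R0
    return (-A + math.sqrt(A * A - 4.0 * B * c)) / (2.0 * B)

print(pt100_temp(138.506))   # about 100.0 degC
print(pt100_temp(100.039))   # about 0.1 degC: roughly 0.04 ohm per 0.1 degC near zero
```

The second print shows why resolving 0.1 C demands resolving hundredths of an ohm, which is part of why the 4-wire scheme (removing lead resistance) matters.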

Reply to  Jim Gorman
July 4, 2020 10:38 am

Ummmm, that should be 4.1v and 3.8v. Avg = 3.95v.

Reply to  Jim Gorman
July 5, 2020 6:47 am

bigoilbob –> "We're not discussing "averaging", but yes, you can almost ALWAYS use more data to improve output quality. We're discussing areal interpolation of data to arrive at areal expected values…"

The only way to "improve output quality" is to measure the SAME THING with the SAME DEVICE and to redo the measurement MULTIPLE times. Assuming this gives a Gaussian distribution (normal, with random errors), you may take an average to obtain a true value. Since you are using the same device, the precision cannot be increased by averaging or interpolating; you must use the rules of significant digits to preserve the integrity of the measurement's precision.

Measurements of temperatures at different times are NOT measurements of the same thing. Consequently, there are no random errors from which to build a statistical distribution that could be used to calculate (or interpolate) a better or higher-precision "true value" for either an earlier or later measurement.

Measurements with different devices likewise provide no random errors from which to build a statistical distribution that could be used to calculate (or interpolate) a better or higher-precision "true value" for a different device, period.

Integrity of measurements is an important issue in many endeavors. Climate science seems to have tossed it, and uncertainty, out of the window in favor of being able to find 1/100ths of a degree of change in integer temperatures.

Reply to  Jim Gorman
July 5, 2020 7:06 am

“The only way to “improve output quality” is to measure the SAME THING with the SAME DEVICE and to redo the measurement MULTIPLE times.”

Uh, no. The parameters of measurement (accuracy, resolution) are the same for any measurement device/procedure, and they are ALL known. From the thermometers used 100 years ago to all of the modern electronic and radiative methods, the evaluators are good. They correct for any inherent bias (adjust their sights) and account for the KNOWN collection/instrumental/recording distributions. We end up with data, all having expected values and distributions. If those distributions are not identical, it matters not at all; same with the weightings that result from spatial interpolation. It all gets evaluated using methodology as old as the oldest data we are now discussing, proven over and over.

“Assuming that this gives a Gaussian distribution (normal with random errors…”

No need for distributions to be Gaussian for stochastic evaluation. ANY distributions can be analyzed, properly, together. And "non-random" errors can easily be handled with correlation matrices and inherent-bias corrections. We KNOW all about these measurement processes, new and old, and the stochastic evaluative techniques are tried and true.

My colleagues who do geostatistics would laugh you out of the room. We use their outputs as reservoir sim inputs, and have added trillions in value as a result. All with input info a fraction as good as what is available to the folks trying to evaluate climate data…

Willem69
Reply to  Steven Mosher
July 3, 2020 6:36 am

Steven,

'Yes, because it's not an average.'
If it is not an average, then why call it that?

If it is indeed 'a prediction or expected value',
let's call it that then, shall we?

So let's make the new catastrophic CC headlines something like: our best guess of the average global temperature last month was about ?? degrees C, with an SD of about 1 degree C.
No decimal places required, as we simply don't know.

As for the UAH data: if the instruments used allow for the number of decimals, it would then be an actual average of the measured data, so that would be fine. But some uncertainty information would certainly help!
At least it's not a made-up number consisting of interpolations over vast areas without measurements and/or other (clever?) guesswork, presented as an 'average' rather than as a guess.

Stay sane,

Willem

Doug Allen
Reply to  Grant A. Brown
July 3, 2020 9:15 am

It's a construct, or constructed average, just as the average daily temperature is a construct: usually just an average of the high and low, although it may be the average of 24 temperatures taken at even intervals, or constructed in some other way. There is nothing at all wrong or unusual about a constructed average.
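
A minimal sketch of the two constructions described above, using a made-up day of hourly readings; the two "constructed averages" generally differ:

```python
# Minimal sketch: (Tmax + Tmin)/2 vs. the mean of 24 evenly spaced readings.
hourly = [10.2, 9.8, 9.5, 9.3, 9.1, 9.4, 10.5, 12.0,
          14.1, 16.0, 17.6, 18.8, 19.5, 19.9, 19.7, 19.0,
          17.8, 16.2, 14.6, 13.4, 12.5, 11.8, 11.2, 10.7]  # hypothetical degC

hi_lo_avg = (max(hourly) + min(hourly)) / 2
full_avg = sum(hourly) / len(hourly)

print(f"(high+low)/2 = {hi_lo_avg:.2f}, 24-reading mean = {full_avg:.2f}")
# -> 14.50 vs 13.86 for this made-up day
```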

Reply to  Doug Allen
July 3, 2020 7:14 pm

Is there a way to average integers and arrive at 1/100ths of precision?

Can you average non-repeatable measurements and eliminate uncertainty?

July 2, 2020 2:15 pm

So much for the hottest June evah! Who spreads this BS… GISS?

Reply to  UV Meter
July 2, 2020 2:57 pm

Who spreads this BS… GISS?
Maybe.
Gore, Steyer, Soros, Mann, Griff, Loydo, definitely.

Reply to  UV Meter
July 2, 2020 4:13 pm

“So much for the hottest June evah!”

Actually, at 0.43°C it is almost the hottest June in the UAH record. Only 2019, at 0.47°C, is hotter; all other years are well behind.

Reply to  Nick Stokes
July 2, 2020 4:46 pm

Actually it’s the third warmest – 1998 was 0.57°C.

Though the last 12 months have been warmer than any 12-month period during the 1998 El Niño, and very close to the peak of the 2016 El Niño.

Reply to  Bellman
July 2, 2020 6:38 pm

Yes, that’s right. Here are the top 6 Junes in UAH TLT V6.0, ranked
1998, 0.57
2019, 0.47
2020, 0.43
2016, 0.34
2015, 0.31
2010, 0.31

TC in the OC
Reply to  Nick Stokes
July 2, 2020 7:08 pm

So taking the highest June temperature since the high in 1998 still shows a linear trend for June for the last 21 years of -0.05 C per decade.

Doesn’t look like runaway global warming to me.

Carlo, Monte
Reply to  Nick Stokes
July 3, 2020 4:47 am

So what caused them?

donald penman
Reply to  Nick Stokes
July 3, 2020 6:07 am

Weather models have shown plumes of hot air moving north over both the USA and Europe in June, and I am told in Asia as well. It is just weather patterns, and the continents in the northern hemisphere have warmed because of this. It seems strange to me that the anomaly has cooled from last month, but it does not surprise me that it was a hot June.

Reply to  Nick Stokes
July 3, 2020 6:28 am

Nick –> Each one of those measurements should have an uncertainty associated with it. It is not scientific to quote them as both utterly accurate and precise.

Reply to  Nick Stokes
July 4, 2020 12:22 am

"Each one of those measurements should have an uncertainty associated with it. It is not scientific to quote them as both utterly accurate and precise."
The headline of this article, and every data point in it, quotes temperatures without uncertainty. I quoted Roy’s results too. You’ll have to take that up with him.

Reply to  Nick Stokes
July 7, 2020 7:34 pm

Jim Gorman July 3, 2020 at 6:28 am
Nick –> Each one of those measurements should have an uncertainty associated with it. It is not scientific to quote them as both utterly accurate and precise.

RSS, who produce the rival MSU/AMSU dataset, say the following:
“WHY STUDY THE UNCERTAINTY?

Without realistic uncertainty estimates we are not doing science!
In the past, numerous conclusions have been drawn from MSU/AMSU data with little regard to the long term uncertainty in the data.
Most previous error analyses for MSU/AMSU data sets have focused on decadal-scale trends in global-scale means, while in contrast, many applications are focused on shorter time scales and smaller spatial scales.
Here we describe a comprehensive analysis of the uncertainty in the RSS MSU/AMSU products. The results can be used to evaluate the estimated uncertainty on all relevant temporal and spatial scales.”
http://www.remss.com/measurements/upper-air-temperature/#Uncertainty

Van Doren
Reply to  Bellman
July 2, 2020 8:37 pm

How large is the measurement error again? Oh, Dr. Spencer doesn’t know? Then how can you know it’s the third warmest? You know nothing.

Reply to  Van Doren
July 3, 2020 2:41 pm

Yes, without error bands you can't calc the probability that it is indeed the warmest when comparing any two. And I wish that every value in the tables came with them (I looked, but if they are available, link me). Maybe they need to say "probably" the warmest, 3rd warmest, etc.

But do you really think that these error bands are not calculable, or not calculated? I don't. Rather, I think that since they come from so many data points, even if THOSE points have larger errors, the resultant error is so small that the probability that a monthly eval ranking is wrong is quite, quite small.

Bigger pic, monthly rankings are just attention getters. The money parameters are trends over statistically/climatically significant periods, and their statistical durability. Over those time periods, for every data eval that DOES provide error bands (other temps, sea level, what else?), the trends and their statistical durabilities are changed almost not at all by the monthly error bands.

Reply to  Van Doren
July 4, 2020 12:25 am

“Then how can you know it’s the third warmest? You know nothing.”
So what is the point of this article? Or any of the monthly articles with UAH results? We can’t be certain so we know nothing.

Reply to  Van Doren
July 4, 2020 10:35 am

bigolbob,

"But do you really think that these error bands are not calculable, or not calculated? I don't. Rather, I think that since they come from so many data points, even if THOSE points have larger errors, the resultant error is so small that the probability that a monthly eval ranking is wrong is quite, quite small."

Uncertainties add, they don't subtract. The more independent data points you have, the greater the uncertainty interval becomes.

u_total = sqrt(u1^2 + u2^2 + u3^2 + u4^2 + u5^2 + …)

If you are measuring the same thing multiple times with the same device, then the law of large numbers becomes useful: the average gives you a more accurate true value.

But if you are measuring temperatures at different times, with different devices, at different geographic locations, then the law of large numbers doesn't apply. The uncertainties of each measurement add by root-sum-square.
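
As a neutral illustration of the two regimes being argued over, here is a minimal sketch with simulated numbers: repeated measurements of one quantity, where the spread of the mean shrinks like 1/sqrt(n), versus the quadrature sum of n independent uncertainties, which grows like sqrt(n). Which regime applies to temperature records is the point in dispute; the code only shows the arithmetic, and the per-reading uncertainty is assumed.

```python
# Minimal sketch: repeated-measurement mean vs. RSS of independent uncertainties.
import math
import random

random.seed(1)
u = 0.5          # per-reading uncertainty (1 sigma), assumed for illustration
n = 100

# (a) the same true value measured n times with the same device
true = 20.0
readings = [random.gauss(true, u) for _ in range(n)]
mean = sum(readings) / n
print(f"mean of {n} repeats = {mean:.3f}, expected spread ~ {u / math.sqrt(n):.3f}")

# (b) uncertainty of a SUM of n independent values, combined in quadrature
print(f"RSS of {n} independent u's = {math.sqrt(n * u * u):.3f}")
```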

Reply to  Tim Gorman
July 4, 2020 10:50 am

"But if you are measuring temperatures at different times, with different devices, at different geographic locations, then the law of large numbers doesn't apply. The uncertainties of each measurement add by root-sum-square."

Yes, in general, it does apply. Temp data with an error band, and with any correlation matrices of those errors, is still just data. Whether it's eyeballed, collected electronically, remotely, whatever, it's still evaluable data. And it can all be evaluated together.

And the fact that the values are distributed differently means nothing w.r.t. their evaluation. I.e. if older values, using older tech, have wider (but still known) error bands, they can not only be spatially interpolated at time intervals, they can also be trended over time, simply with different temporal error bands for each time. Not only that, but the durability of the resultant trend can also be so calculated. No "apples and oranges" involved. FYI, the big improver here is almost ALWAYS data QUANTITY.

Again, please provide an example showing why your claim that the CLT is not generally applicable is valid.

Reply to  Tim Gorman
July 4, 2020 10:54 am

Just caught this

“Uncertainties add, they don’t subtract.”

So, the CLT is wrong? Please link me to this adder to our knowledge base, since Engineering Statistics 201, way back when.

Reply to  Van Doren
July 5, 2020 7:46 am

bigoilbob –> “So, the CLT is wrong? Please link me to this adder to our knowledge base, since Engineering Statistics 201, way back when.”

(my emphasis)
"Laplace and his contemporaries were interested in the theorem primarily because of its importance in *repeated measurements of the same quantity*. If the individual measurements could be viewed as approximately independent and identically distributed, then their mean could be approximated by a normal distribution." From: https://www.britannica.com/science/central-limit-theorem

Please take note of the emphasized phrase. It is an extremely important issue in the determination of a "true value" of a measurement.

Another application of the CLT is with sampling. This allows one to determine a mean for a population where only a small part of the population is sampled and where the population is not necessarily normally distributed.

The temperature data from a station is the entire population. Sampling a fixed and finite population to determine a mean is pointless: simply compute the population mean as usual. I have checked this myself; sampling will only give you the simple population mean.

The real issue is combining station populations with different variances, and most do have different variances due to their varying geographical locations.

When you do this you must calculate the combined variance, and this is not a simple average of the variances. See this reference for the math behind calculating a combined variance for different populations: https://www.emathzone.com/tutorials/basic-statistics/combined-variance.html

Lastly, I want to reiterate that the "error of the mean" calculation does not allow one to artificially increase the precision of measurements, in other words to add significant digits. It is a statistical descriptor of how close the calculated mean is to the actual mean. It has nothing to do with the significant digits available from measurements.
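
A minimal sketch of the combined-variance formula in the linked reference, with made-up station groups; note the result is not a simple average of the group variances, because the spread between group means contributes too:

```python
# Minimal sketch: combined variance of several groups.
# combined_var = sum(n_i * (v_i + (m_i - m)^2)) / sum(n_i), m = combined mean.
def combined_variance(groups):
    """groups: list of (n, mean, variance) tuples."""
    total_n = sum(n for n, _, _ in groups)
    m = sum(n * mean for n, mean, _ in groups) / total_n
    return sum(n * (v + (mean - m) ** 2) for n, mean, v in groups) / total_n

# Hypothetical stations: (count, mean temp, variance)
print(combined_variance([(30, 15.0, 4.0), (30, 25.0, 4.0)]))  # -> 29.0, not 4.0
```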

Reply to  Jim Gorman
July 5, 2020 8:53 am

Are you aware that your combined-variance link says exactly what I've been saying all along?

W.r.t. different measurement methods being incompatible for evaluations, you should probably let EVERY petroleum engineer and geoscientist in on it. We routinely combine data gathered from many different sources (sometimes over a dozen) for a single reservoir evaluation, for both rock and fluid properties. We also routinely calculate correlations between those properties (porosity and permeability are probably the two most accessible to outsiders) and use these in sim runs. Lots of parameters of success: good history matches, money in the corporate coffers…

Reply to  Jim Gorman
July 5, 2020 9:06 am

“The real issue is combining station populations with different variances and most do have different variances due to many varying geographical locations.”

You don't understand stochastic evaluation. No problem, many don't. There is no "combination" involved. Rather, every distributed data point is sampled, either randomly or according to an overlying correlation coefficient. Then the sample values are evaluated according to what you're trying to achieve, and an output value is found. Then it's done again, with different samples. And again. And again. The process is repeated until the collection of outputs adequately represents the output distributions.

Again: different sources, doesn't matter. Different distribution types, doesn't matter. Correlations between parameters, if you know them, doesn't matter.
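
A minimal sketch of the resampling loop described above (a basic Monte Carlo evaluation); the model function and input distributions here are made up purely for illustration:

```python
# Minimal sketch: sample each input's distribution, compute the output, repeat;
# the collection of outputs approximates the output distribution.
import random
import statistics

random.seed(42)

def model(a, b):
    return a + 0.5 * b               # hypothetical evaluation function

outputs = []
for _ in range(10_000):
    a = random.gauss(10.0, 1.0)      # input 1: normal, assumed
    b = random.triangular(0.0, 4.0)  # input 2: triangular - no need to be Gaussian
    outputs.append(model(a, b))

print(f"output mean = {statistics.mean(outputs):.2f}, "
      f"sd = {statistics.stdev(outputs):.2f}")
```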

Reply to  Bellman
July 3, 2020 1:56 am

TC in the OC

“So taking the highest June temperature since the high in 1998 still shows a linear trend for June for the last 21 years of -0.05 C per decade.”
______________________________________________

Not sure where you got that figure. Firstly, it’s 23 years since June 1998. Secondly, the linear trend for June temperatures in UAH since 1998 is +0.12 C/Dec. That’s faster than the full June trend in UAH over their whole record, since June 1979 (+0.11 C/Dec).

Reply to  Nick Stokes
July 2, 2020 4:53 pm

Not quite: June 2016 and June 1998 were also higher, albeit El Niño years.

But goofy GISS and balmy BEST think we are at a +1.1°C global anomaly from the same 14.0°C baseline that UAH uses. Go figure.

Never fall for click-bait media proclaiming the hottest (fill in the blank___________) ever. Never!

NASA needs to give up climate and work on outer space where they belong. Both NOAA and NWS are closer to reality.

Even upstart http://temperature.global/ shows we are at a 0.0°C anomaly since 2015 from the same baseline… based on over 60,000 stations – a combination of land stations and sea buoys.

Reply to  UV Meter
July 2, 2020 5:31 pm

But goofy GISS and balmy BEST think we are at a +1.1°C global anomaly from the same 14.0°C baseline that UAH uses.

Neither GISS nor BEST will have released figures for June yet – that normally happens mid-month.

I don't know where you get your 1.1°C from – using the same baseline as UAH (1981-2010), last month was +0.62°C according to GISS.

Reply to  Bellman
July 2, 2020 5:54 pm

From the GISS website: [image]

This one (2019) is way up at +1.0. More recent 2020 ones (now gone from the site) were at +1.1.

These people are nuts. Their alarmist agenda corrupts everything they do.

Reply to  Bellman
July 2, 2020 5:58 pm

From the GISS website

Which clearly states it is using the 1951-1980 baseline. For obvious reasons UAH don't use that baseline; they use the 1981-2010 period.

Reply to  Bellman
July 2, 2020 6:56 pm

Ah-ha… now we’re getting somewhere. So what exactly is the 1951-1980 baseline in °C?

Reply here: _________________________

No one EVER discloses that on their graphs. Why? It must be some oddball value like 13.3°C, to match up with the 1981-2010 baseline of 14.0°C.

This is why I hate undefined anomalies. The old ones simply add alarmism for the purpose of scaring people into accepting "carbon" taxes, etc. No, no, no.

NOAA and NWS say the last 40-year average is 14.0000°C. We are now only 0.43° above that. Period, full stop. A future grand solar minimum will get us back to a zero anomaly soon.

B d Clark
Reply to  UV Meter
July 2, 2020 6:58 pm

If not for the very weak SC 25 maximum, it's all downhill from now on.

Michael Jankowski
Reply to  Nick Stokes
July 2, 2020 4:57 pm

“…Actually, at 0.43°C it is almost the hottest June in the UAH record…”

So what you’re saying is that you agree with, “So much for the hottest June evah!”

B d Clark
Reply to  Nick Stokes
July 2, 2020 4:58 pm

Wrong

” June 2010 UAH Global Temperature Update: +0.44 deg. C”

So not only 2019: June 2009 ties at 0.43 C.

Reply to  B d Clark
July 2, 2020 5:24 pm

Wrong

You are.

June 2010, +0.31°C
June 2009, -0.16°C

B d Clark
Reply to  Bellman
July 2, 2020 5:30 pm

No I’m not

[image]

Get your facts right bellend

B d Clark
Reply to  Bellman
July 2, 2020 5:47 pm

You're just making data up, bellend. The figures you produced have nothing to do with UAH at all, per Roy's archive. Why are you misleading people?

Here's the data for 2009/10. Where do you get -0.16?

June 2010 UAH Global Temperature Update: +0.44 deg. C, Thursday, July 1st, 2010

YR MON GLOBE NH SH TROPICS
2009 1 0.251 0.472 0.030 -0.068
2009 2 0.247 0.564 -0.071 -0.045
2009 3 0.191 0.324 0.058 -0.159
2009 4 0.162 0.316 0.008 0.012
2009 5 0.140 0.161 0.119 -0.059
2009 6 0.043 -0.017 0.103 0.110
2009 7 0.429 0.189 0.668 0.506
2009 8 0.242 0.235 0.248 0.406
2009 9 0.505 0.597 0.413 0.594
2009 10 0.362 0.332 0.393 0.383
2009 11 0.498 0.453 0.543 0.479
2009 12 0.284 0.358 0.211 0.506
2010 1 0.648 0.860 0.436 0.681
2010 2 0.603 0.720 0.486 0.791
2010 3 0.653 0.850 0.455 0.726
2010 4 0.501 0.799 0.203 0.633
2010 5 0.534 0.775 0.292 0.708
2010 6 0.436 0.552 0.321 0.475

Reply to  Bellman
July 2, 2020 5:52 pm

Get your facts right bellend

You’re looking at the old version 5 data. I’d hate to think what June 2020 would be like using that obsolete version.

Reply to  Bellman
July 2, 2020 5:56 pm

June 2010 UAH Global Temperature Update

I don’t know how to break this to you but we are now in 2020. Quite a lot’s happened since then.

B d Clark
Reply to  Bellman
July 2, 2020 6:00 pm

Have you tried comparing the UAH v6 graph with v5? There's no difference, which makes your claim of 0.57 for June 1998 suspect as well. You haven't even provided a data source.

Reply to  Bellman
July 2, 2020 6:31 pm

You haven't even provided a data source.

Data source is in the article:

http://vortex.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt

and UAH5 and UAH6 are very different – as was discussed when version 6 was first released. For one thing, it decreased the rate of warming from 0.14°C to 0.114°C per decade.

B d Clark
Reply to  Bellman
July 2, 2020 6:54 pm

OK, so they changed to UAH v6 in 2015 but never changed the data and graphs in the archive, with no disclaimer on the archived material. That's misleading. I do stand corrected, though.

MarkW
Reply to  Bellman
July 2, 2020 7:06 pm

You’re looking at the old data

Translation: The old data didn’t show what we wanted to see, so we cooked it some more.

Reply to  Bellman
July 3, 2020 5:54 am

MarkW

Translation: The old data didn’t show what we wanted to see, so we cooked it some more.

So you’re accusing Dr Roy Spencer of committing fraud to get the results he wanted? I’m not a fan of Spencer, but I consider that an outrageous accusation.

Reply to  B d Clark
July 2, 2020 5:27 pm

No… the graph shows 2010 (not 2009) ties at +0.43.

Anyway… the point remains valid: June 2020 is NOT the hottest June evah.

When can we expect the headline retractions?

B d Clark
Reply to  UV Meter
July 2, 2020 5:38 pm

No it does not. Here's the data from Roy's archive:

June 2010 UAH Global Temperature Update: +0.44 deg. C, Thursday, July 1st, 2010
What can't you understand about the above?

YR MON GLOBE NH SH TROPICS
2009 1 0.251 0.472 0.030 -0.068
2009 2 0.247 0.564 -0.071 -0.045
2009 3 0.191 0.324 0.058 -0.159
2009 4 0.162 0.316 0.008 0.012
2009 5 0.140 0.161 0.119 -0.059
2009 6 0.043 -0.017 0.103 0.110
2009 7 0.429 0.189 0.668 0.506
2009 8 0.242 0.235 0.248 0.406
2009 9 0.505 0.597 0.413 0.594
2009 10 0.362 0.332 0.393 0.383
2009 11 0.498 0.453 0.543 0.479
2009 12 0.284 0.358 0.211 0.506
2010 1 0.648 0.860 0.436 0.681
2010 2 0.603 0.720 0.486 0.791
2010 3 0.653 0.850 0.455 0.726
2010 4 0.501 0.799 0.203 0.633
2010 5 0.534 0.775 0.292 0.708
2010 6 0.436 0.552 0.321 0.475

Read the data: 2009, month 6, 0.43

Reply to  UV Meter
July 2, 2020 5:54 pm

When can we expect the headline retractions?

What headlines? Which data set were they predicting would be the hottest evah? Why can't they spell?

Reply to  UV Meter
July 2, 2020 6:03 pm

B.d.: I stand corrected. I didn't have the data chart; the graph looked like 2010.

B d Clark
Reply to  UV Meter
July 2, 2020 6:08 pm

That’s ok, thanks.

Reply to  UV Meter
July 2, 2020 6:11 pm

Wait a minute: the chart data for June 2009 shows 0.043, not 0.43. Back to you.

2009 6 0.043 -0.017 0.103 0.110

B d Clark
Reply to  UV Meter
July 2, 2020 6:13 pm

Not on the graph it doesn't. I don't know why they moved a decimal point over in the written data.

B d Clark
Reply to  UV Meter
July 2, 2020 6:32 pm

It does not show 0.043, it's 0.43. Each date, e.g. 1 0, 2 0, etc., has a 0 after the month number.

So June reads 6 0. 043

Reply to  UV Meter
July 2, 2020 6:32 pm

B.d.: The graph and written data agree. Look to the right of the 2009 and 2010 graph lines… about 50% over for June. 2009 is close to zero and 2010 is a bit less than halfway to 1. So the chart decimal point is OK.

Roy taught me how to chart graphs back in college. Before xls existed lol.

B d Clark
Reply to  UV Meter
July 2, 2020 6:42 pm

Well, it's not; see my previous post.

Reply to  UV Meter
July 2, 2020 7:36 pm

No… look closely at the graph. Halfway past the 2009 line there is a blue circle BELOW 0.1. That is the June 0.043° value. Many of the other 2009 data points are also quite low… in the 0.1-0.3 range. June is the coolest point.

YR MON GLOBE NH SH TROPICS
2009 1 0.251 0.472 0.030 -0.068
2009 2 0.247 0.564 -0.071 -0.045
2009 3 0.191 0.324 0.058 -0.159
2009 4 0.162 0.316 0.008 0.012
2009 5 0.140 0.161 0.119 -0.059
2009 6 0.043 -0.017 0.103 0.110
2009 7 0.429 0.189 0.668 0.506
2009 8 0.242 0.235 0.248 0.406
2009 9 0.505 0.597 0.413 0.594
2009 10 0.362 0.332 0.393 0.383
2009 11 0.498 0.453 0.543 0.479
2009 12 0.284 0.358 0.211 0.506

Reply to  Nick Stokes
July 2, 2020 7:36 pm

Stokes – The planet has been warming since the 1690s, so there will be many hottest-evah months until the warming trend ends.

Record high months exist because all global average temperature compilations were made DURING a warming trend.

Based on ice core proxies, the warming trend will end and a cooling trend will begin someday.

Perhaps when the Holocene interglacial ends.

Then climate alarmists like you can warn of the coming global cooling crisis.

Earth's climate is wonderful, and has been getting better for 325+ years; why don't you find a real crisis to write about?

Reply to  Nick Stokes
July 2, 2020 7:37 pm

”Actually, at 0.43°C it is almost the hottest June in the UAH record”

Because it’s still on the way down from a 2019 high.

sycomputing
Reply to  Nick Stokes
July 2, 2020 8:59 pm

. . . it is almost the hottest June in the UAH record.

lol

DocSiders
Reply to  Nick Stokes
July 3, 2020 6:29 am

It's been warming for over 150 years, coming out of the LIA. Tide gauge trends indicate a slow, steady *climate level trend* (leaving out the +/- 0.35° "noise" that alarmists use to create panic).

Having frequent new high readings IS THE EXPECTED result while a long-term trend continues. If you call out new highs during a trend, you look stupid. Alarmist propagandists then purposefully confuse these new high readings with "all time" (meaning the last 100 years) local high temperatures, which are more significant… at least to the locals.

Speaking of long-term trends: statistically, you don't get to pick some point along the "trending" period and assign a new cause to it… at least not without identifying the earlier cause (before ~1945, which has never been explained) and then showing how the original (still unidentified) cause ceased and the new cause emerged AT A SINGLE POINT IN TIME. That's especially difficult when all the tide gauges in the world indicate that the climate-level "BEFORE" and "AFTER" trends are exactly the same.

The Null Hypothesis:

“The Modern Warming is the continuation of a longer term trend”.

THAT IS WHERE THE SCIENTIFIC METHOD DEMANDS the arguments start (or thereabouts). Amazingly, that first critical step in the normal application of the scientific method has yet to be taken. That's because this isn't a good and credible scientific investigation; it's political advocacy using corrupted science.

Every Institution Leftists control has been corrupted. And that’s most of our institutions.

Reply to  UV Meter
July 2, 2020 5:33 pm

Out of interest, who has predicted June will be the hottest “evah”?

Spetzer86
July 2, 2020 2:48 pm

I’m just shocked they can’t generate data that maintains a trend line any better than that.

Reply to  Spetzer86
July 2, 2020 4:14 pm

“I’m just shocked they can’t generate data”
They? Roy?

Reply to  Nick Stokes
July 2, 2020 5:31 pm

Not Roy. The UAH satellite. Roy just reports the results.

Don’t trust Roy? Then try appointing Algore ha ha.

Reply to  UV Meter
July 2, 2020 6:01 pm

Roy just reports the results.

I’m pretty sure he’s also responsible for all the calculations as well.

Robber
July 2, 2020 2:59 pm

Come on, it’s a climate emergency. Didn’t you get the memo? /sarc
And now for the good news. The world is on track for global warming of just 1.4 C per century, meeting the IPCC’s target, so cancel the next boondoggle, and all the IPCC members and their “experts” can pack their bags and take a slow boat home.

mario lento
Reply to  Robber
July 2, 2020 3:03 pm

“The world is on track for global warming of just 1.4 C per century, meeting the IPCC’s target”

I think the target was 2C… so beating instead of meeting, methinks… 🙂

michael hart
Reply to  mario lento
July 2, 2020 3:14 pm

We all know the response of the IPCC to insufficient warming:

Adam
Reply to  mario lento
July 2, 2020 3:23 pm

Global population will peak in 40 years, then a steep decline. At 0.14 degrees C per decade, there is no problem.

Reply to  mario lento
July 3, 2020 2:50 am

mario lento

"I think the target was 2C… so beating instead of meeting, methinks…"
_____________________________________

I believe you may be referring to the 2007 IPCC projection, which stated:

“For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios.” https://archive.ipcc.ch/publications_and_data/ar4/wg1/en/spmsspm-projections-of.html

According to UAH, the rate of warming since 2007 is presently 0.3°C per decade: https://woodfortrees.org/plot/uah6/from:2007/plot/uah6/from:2007/trend

Warming will actually have to slow down over the next few years for the IPCC 2007 projection to be right!

mario lento
Reply to  TheFinalNail
July 4, 2020 12:26 am

Yes: They often write “…far overshooting a global target of limiting the increase to 2C (3.6F) or less, the U.N. World Meteorological…”

Charles Wardrop
July 2, 2020 3:01 pm

If the Greenies deny the life-giving forces of sunshine and carbon dioxide, why do they not extend their fears and condemnations to, say, seawater and ethyl alcohol? Both of these, though products of nature, can kill one way or another.

Perhaps because their dangers cannot be analysed and timed by the crude and usually misleading computer programs used for weather forecasts, whose unreliability is notorious.

The Greens' confidence in computer programs, and in their interpretation, politicisation, and moneymaking potential, explains their usually erroneous predictions and their unwillingness to accept alternative opinion.

July 2, 2020 3:21 pm

Blizzard in China, https://youtu.be/0SAR0abWr1Y
June 29

B d Clark
Reply to  Krishna Gans
July 2, 2020 3:37 pm

400-odd sheep dead, just after being sheared for the summer.

Reply to  Krishna Gans
July 2, 2020 4:43 pm

For the last 2 months China has had large streams of moisture moving across their nation. The flow is coming from this surface wind pattern, … https://earth.nullschool.net/#current/wind/surface/level/overlay=total_cloud_water/orthographic=-273.49,8.73,672/loc=66.365,9.187

The moisture stream then continues on into Canada/Alaska. In the several months prior to May the same stream carried persistent storm tracks into the Pacific Northwest states. It was an unusually wet spring around here. The blackberry bushes are filled with green berries that should bear abundant fruit into the early fall.

Ron Long
July 2, 2020 3:31 pm

I don’t need to see any data to know that I’m freezing my ass off in Argentina.

Norman
Reply to  Ron Long
July 2, 2020 4:18 pm

Put some clothes on.

Joe Perth
Reply to  Ron Long
July 2, 2020 6:24 pm

There's no such thing as bad weather, just bad clothing. (Norwegian saying, apparently.)

Pretty cold in Perth, Australia too, even with clothes on. But it is winter.

Reply to  Joe Perth
July 3, 2020 7:09 am

It's a German saying too. 😀

jaime
July 2, 2020 4:25 pm

So do I, in Chile.

Loren C. Wilson
Reply to  jaime
July 2, 2020 5:10 pm

Hola, hermano. I spent two years living in and near Santiago many years ago. I loved every minute of living in Chile.

July 2, 2020 5:35 pm

Here’s another perspective:

[image]

Reply to  Robert Kernodle
July 3, 2020 2:20 am

I guess you could also make a similar-looking chart of, say, human body temperature, stretching from lethally cold on the left to lethally hot on the right, yet make it look innocuous by stretching the y-axis sufficiently, as you have done here.

Reply to  TheFinalNail
July 3, 2020 8:26 am

Yes, I could most certainly do that, if I were inclined to equate the human body to the Earth/atmosphere system, losing all sense of context and neglecting the vast differences between bodily physiological processes and terrestrial physical processes; but, of course, that's not what I am doing. (^_^)

I could also do it with plutonium exposure and not have to bother with the y-axis at all, which, of course, would mean that I was now equating plutonium exposure to, say, carbon dioxide exposure, again losing all sense of context and the specific physical laws applying to those specific contexts.

Within the context of temperature change in the Earth/atmosphere system, where human life is concerned, even a temperature anomaly that varies within one degree over decades is still extraordinarily stable, and that’s what I was getting at.

angech
July 2, 2020 5:41 pm

On the verge of a La Niña.
Surface measurement temps are falling wildly.
Does the Version 6.0 global average lower tropospheric temperature (LT) anomaly have a lag that Roy is not mentioning?
Are the satellite drifts in need of updating?
Bring on V7.

mikewaite
Reply to  angech
July 3, 2020 12:29 am

Verge of a La Niña? I noticed 2 days ago that the ENSO meter here had suddenly increased sharply. No idea what that implies, but I'm a bit surprised no one remarked on it. But of course the last few months have been rather distracting.

Richard M
July 2, 2020 8:44 pm

Keep in mind that satellite data usually lags 3-4 months behind when it comes to ENSO effects. June is affected by the ENSO conditions around March, and we were still under the influence of El Niño at that time. We have to wait until August before we see post-El Niño data.

But even then, the effects of an El Niño often hang around; it took over a year for the 2016 El Niño's effects to disappear. Now, if a La Niña does show up later this year, that would accelerate the process.

azgrandma
July 2, 2020 9:48 pm

Meanwhile, in the desert, we about froze last winter and many things did poorly in the greenhouse. We had a cool spring, and summer has been cooler than average most of the time. We may not even get over 112 deg this summer.

July 2, 2020 11:59 pm

The Three Gorges Dam is being threatened by the heavy continuous rains in southern China, … https://www.taiwannews.com.tw/en/news/3955518

July 3, 2020 4:54 am

Although this is a discussion about Covid-19, there is a salient point made at around 5 minutes in about the validity of models, the modification of models to match observed data, and their relative usefulness (or not) for hindcasting.

July 3, 2020 7:03 am

If these temps are to be considered measurements, there should be an uncertainty associated with them. I was unable to find any information concerning uncertainty at UAH. If there is no uncertainty, then they cannot be considered absolute measurements, and even anomaly values would be questionable when compared to others. UAH could then only be used by itself and not in conjunction with any other temperature database.

SAMURAI
July 3, 2020 6:05 pm

From looking at the global SST map, it seems very likely the Atlantic is entering its 30-year cool cycle, and a La Niña cycle is developing. Moreover, about 1/3rd of the South Indian, South Pacific and Southern Oceans are cooler than normal.

Therefore, global temps will likely be falling for the next 2 years from the La Niña cycle, and if the AMO does enter its 30-year cool cycle, we could have 30 years of global cooling.

https://www.ospo.noaa.gov/Products/ocean/sst/anomaly/

From June of this year, NOAA nefariously added a +/-0.2C gray scale to the global SST map to "hide the decline", and when I e-mailed NOAA asking why they started this, they said they've always done it (a lie), and that "basically +/-0.2 is the same as 0, so there is no reason to differentiate." … Oh, really? Try that "logic" with the IRS and see what happens…

“HO-HO HEY-HEY DEFUND THE NOAA!!!!”

John Finn
Reply to  SAMURAI
July 4, 2020 4:20 am

if the AMO does enter its 30-year cool cycle, we could have 30 years of global cooling.

The AMO was in its cool cycle between 1967 & 1997. I’m not sure there was much global cooling during that period.

https://en.wikipedia.org/wiki/Atlantic_multidecadal_oscillation#/media/File:Atlantic_Multidecadal_Oscillation.svg

Do you just grab at any passing straw in the hope that one might explain 50 years of warming? The net effect of ocean oscillations is zero. Air temperatures respond temporarily to these events; they have no long-term influence, as they don't add to or reduce the Earth's heat energy.