Reposted from Dr. Roy Spencer’s Blog
July 2nd, 2020 by Roy W. Spencer, Ph. D.
The Version 6.0 global average lower tropospheric temperature (LT) anomaly for June, 2020 was +0.43 deg. C, down from the May, 2020 value of +0.54 deg. C.
The linear warming trend since January, 1979 is +0.14 C/decade (+0.12 C/decade over the global-averaged oceans, and +0.18 C/decade over global-averaged land).
Various regional LT departures from the 30-year (1981-2010) average for the last 18 months are:
YEAR MO GLOBE NHEM. SHEM. TROPIC USA48 ARCTIC AUST
2019 01 +0.38 +0.35 +0.41 +0.36 +0.53 -0.14 +1.15
2019 02 +0.37 +0.47 +0.28 +0.43 -0.02 +1.05 +0.05
2019 03 +0.34 +0.44 +0.25 +0.41 -0.55 +0.97 +0.58
2019 04 +0.44 +0.38 +0.51 +0.54 +0.49 +0.93 +0.91
2019 05 +0.32 +0.29 +0.35 +0.39 -0.61 +0.99 +0.38
2019 06 +0.47 +0.42 +0.52 +0.64 -0.64 +0.91 +0.35
2019 07 +0.38 +0.33 +0.44 +0.45 +0.11 +0.34 +0.87
2019 08 +0.39 +0.38 +0.39 +0.42 +0.17 +0.44 +0.23
2019 09 +0.61 +0.64 +0.59 +0.60 +1.14 +0.75 +0.57
2019 10 +0.46 +0.64 +0.27 +0.30 -0.03 +1.00 +0.49
2019 11 +0.55 +0.56 +0.54 +0.55 +0.21 +0.56 +0.38
2019 12 +0.56 +0.61 +0.50 +0.58 +0.92 +0.66 +0.94
2020 01 +0.56 +0.60 +0.53 +0.61 +0.73 +0.12 +0.66
2020 02 +0.76 +0.96 +0.55 +0.76 +0.38 +0.02 +0.30
2020 03 +0.48 +0.61 +0.34 +0.63 +1.09 -0.72 +0.16
2020 04 +0.38 +0.43 +0.34 +0.45 -0.59 +1.03 +0.97
2020 05 +0.54 +0.60 +0.49 +0.66 +0.17 +1.15 -0.15
2020 06 +0.43 +0.45 +0.41 +0.46 +0.38 +0.80 +1.20
The UAH LT global gridpoint anomaly image for June, 2020 should be available within the next week here.
The global and regional monthly anomalies for the various atmospheric layers we monitor should be available in the next few days at the following locations:
Lower Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt
Mid-Troposphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tmt/uahncdc_mt_6.0.txt
Tropopause: http://vortex.nsstc.uah.edu/data/msu/v6.0/ttp/uahncdc_tp_6.0.txt
Lower Stratosphere: http://vortex.nsstc.uah.edu/data/msu/v6.0/tls/uahncdc_ls_6.0.txt
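For readers who want to reproduce the headline trend themselves, here is a rough sketch (Python, not UAH's own processing) of fitting a least-squares line to the monthly global lower-troposphere anomalies in the file linked above. The column layout and the summary rows at the end of the file are assumptions to check against the file itself.

```python
# Rough sketch (not UAH's method): estimate the linear LT trend from the
# monthly global anomalies in uahncdc_lt_6.0.txt. Column layout and the
# trailing summary rows are assumptions -- verify against the file.
import numpy as np
import urllib.request

URL = "http://vortex.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt"

rows = []
with urllib.request.urlopen(URL) as f:
    text = f.read().decode()
for line in text.splitlines()[1:]:                 # skip the header line
    parts = line.split()
    # keep only "YEAR MONTH value ..." rows; the file ends with summary text
    if len(parts) > 3 and parts[0].isdigit() and parts[1].isdigit():
        year, month, globe = int(parts[0]), int(parts[1]), float(parts[2])
        rows.append((year + (month - 0.5) / 12.0, globe))

t, anom = np.array(rows).T
slope, intercept = np.polyfit(t, anom, 1)          # ordinary least squares
print(f"Linear trend: {slope * 10:+.3f} C/decade") # headline figure is +0.14
```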
Looks scarier than Covid… we’re all going to drown with the virus, and there will be a double death toll count by then. One for death by climate change!
Does anyone actually believe that it is possible to calculate the average global temperature for a period of a month, down to the hundredths of a degree, even with satellites and supercomputers?
Impossible for a global average. And the precision is not down to 1/100th of a degree for the RTDs used, either.
“ Impossible for global average. ”
Not really …. because all one needs is two (2) or more “numbers” to calculate an average.
I want to know, …. what was the “global average cloud cover” for the month of June for each of the past 20 years?
Iffen you don’t know what the “global average cloud cover” is/was, …… then any/all “global average temperature” calculations are little more than bogus propaganda (junk science) reporting.
Yes, you can calculate anything. It might not, and probably does not, tell you what you think it does! Anyway, yes, I agree with you. So the result is that you get an average of the two numbers in your example… but one cannot interpret that to mean that we know the average temperature; we only know the average of two numbers.
“Does anyone actually believe that it is possible to calculate the average global temperature for a period of a month, down to the hundredths of a degree, even with satellites and supercomputers?”
yes because its not an average. Its a prediction or expected value.
That is, it represents the best estimate of what you would measure if you used a perfect instrument at every location.
not an average in the usual sense of the term.
Yes, of course the satellites use models to predict the actual temperature at any point at a certain time, but it is still believed to be the actual temperature at that point and time. A bigger problem seems to me the period of time over which it is measured. It is an averaged monthly temperature, but it could just as well be averaged daily or averaged second by second. I wonder, if we were to average the 500 mb height values over the globe for a month and the temperatures they implied, how close we would get to the satellite temperature for that month.
Mosher –>> “not an average in the usual sense of the term.”
Perhaps it should then be called Global Predicted Temperature (GPT) rather than GAT!
If you follow the accepted SCIENTIFIC practice of SIGNIFICANT DIGITS there is no way to increase the precision of a temperature regardless of the mathematics (averaging or predicting) performed upon it. A great example is an average of 7° and 3°. Who decides how many decimal places to show and why?
Too many scientists use the ‘error of the mean’ to say that it defines the precision of the mean value. It does not. It is a statistical descriptor of how close the mean is to the true mean. It is NOT a descriptor of the precision of the mean!
While we’re at it, the ‘error of the mean’ does not describe nor affect uncertainty. Too many scientists quote the CLT and divide by √n when determining uncertainty. Totally wrong. If uncertainty budgets and propagation thru the calculations are too difficult the GUM allows standard deviation to be quoted. Although propagating variation of different distributions is not easy either.
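A toy numerical illustration of the distinction being argued over (invented readings, not any real station data): for repeated readings of one fixed quantity, the standard error of the mean shrinks as 1/√n, while the standard deviation of the individual readings does not.

```python
# Toy illustration: repeated readings of one fixed quantity with made-up
# instrument noise sigma = 0.5.
import numpy as np

rng = np.random.default_rng(0)
true_value = 20.0
sigma = 0.5                                  # per-reading noise (hypothetical)
readings = true_value + rng.normal(0, sigma, size=100)

mean = readings.mean()
sd = readings.std(ddof=1)                    # spread of individual readings
sem = sd / np.sqrt(readings.size)            # standard error of the mean

print(f"mean = {mean:.3f}")
print(f"standard deviation of readings = {sd:.3f}   (stays near 0.5)")
print(f"standard error of the mean     = {sem:.3f}  (shrinks as 1/sqrt(n))")
```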
“If uncertainty budgets and propagation thru the calculations are too difficult the GUM allows standard deviation to be quoted. ”
When would they be “too difficult”? You know how you spatially interpolate. You know the uncertainty of every value you include. You have a procedure to knock out suspicious outliers. You know enough about the data collected, past and present, to build correlation matrices of those uncertainties.
I probably have the computing horsepower on my home laptop, built for engineering calcs. If not, then they could tap any one of the tens of thousands of systems at most national labs, major corporations, and/or educational institutions that do. Any reservoir engineering modeler, in any large oil company HQ, working on this data set instead of the (much lower quality) reservoir data she is used to, could do it. Including a distribution of outputs.
Sorry, not buyin’ it…
Jim,
What is the answer to how many significant digits we should be stating for “average” global temperature and the associated uncertainty?
I agree with Grant and Mario that 1/100 of a degree for a monthly average calculation feels inappropriate. It seems there are so many issues regarding an “average” calculation: accuracy, precision, frequency of measurements, grid size, extrapolation/interpolation of unmeasured cells, heat island effect, elevation/altitude issues, data homogenization…
Dr. Spencer stated an average of 0.54 degrees for the monthly global average anomaly. Treating that last hundredths digit (the 4 in 0.54) as significant seems inappropriate. And I agree with you that uncertainty should be stated.
Your thoughts?
I agree with your synopsis. Mosher is right in that calculations can provide a precision to fractional decimal points… however, as you point out, without the degree of uncertainty its value could be misinterpreted. There is no global temperature; however, if the measurement criteria are wide and repeatable, the changing numbers help us see what may be happening. What makes it tough is that temperature and energy change forms… e.g. humid air has much more latent energy in it than drier air at the same temperature. So temperature without moisture content complicates what we derive by simply looking at temperature. The best electronic temperature-sensing devices we have, platinum RTD sensors, are at best good to about 0.1 C. They do need to be calibrated to achieve that, too.
RelPerm –> Maybe I didn’t make my criticism very well. It’s basically that UAH is done so differently from actual measured temperatures that extreme caution is needed when comparing them.
Dr. Spencer needs to develop and publish the uncertainty inherent in his algorithms so that users can utilize the data in a proper scientific process.
Significant digits are based on the precision with which you can measure a quantity. I am not well versed on how Dr. Spencer calculates a temperature from the satellite information so I can’t judge the proper number of significant digits that should be used.
However, much of the instrumental measurements have only been recorded to an integer value. Adding 1, 2, or 3 decimal places through averaging is very unscientific. From Washington Univ.; “By using significant figures, we can show how precise a number is. If we express a number beyond the place to which we have actually measured (and are therefore certain of), we compromise the integrity of what this number is representing. It is important after learning and understanding significant figures to use them properly throughout your scientific career.” http://www.chemistry.wustl.edu/~coursedev/Online%20tutorials/SigFigs.htm
Anomalies not only hide the actual variance in temperatures but also give a very, very unscientific view of the precision of much of the temperature record (which consists of integers).
” Adding 1, 2, or 3 decimal places through averaging is very unscientific.”
The inference here is that the output quality of an evaluation doesn’t improve, with more properly vetted data, whether or not that data is error banded. It ALWAYS does…..
bigolbob,
You are a child of the digital age. You’ve been taught that how precisely you can calculate something depends only on how many bits you have in the CPU.
I grew up on analog computers. I could set up an experiment and get an output of 4.1v.
I can set it up a second time and get 3.3v. When you average the two you get 3.95v.
Now, does the average actually give you a more accurate answer? No. Depending on the rounding procedure you use you get a value of 3.9v or 4.0 volts. You simply can’t get any closer because of the uncertainties associated with the input values and the output values. Significant digits *DO* impact physical measurements.
Remember, uncertainties GROW when you combine them, they don’t get smaller.
u_total = sqrt(u1^2 + u2^2 + u3^2 + …)
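A minimal sketch of that root-sum-square rule; the 0.05 V per-reading uncertainty below is a made-up figure for illustration, not taken from the analog example above.

```python
# Root-sum-square combination of independent uncertainties, as in the formula
# above. The 0.05 V per-reading uncertainty is invented for illustration.
import math

def combine_rss(*uncertainties):
    """u_total = sqrt(u1^2 + u2^2 + ...)."""
    return math.sqrt(sum(u * u for u in uncertainties))

u_reading = 0.05                          # volts, hypothetical
u_sum = combine_rss(u_reading, u_reading) # uncertainty of the sum of two readings
u_mean = u_sum / 2                        # dividing the sum by 2 scales its uncertainty
print(f"uncertainty of the sum  : {u_sum:.3f} V")
print(f"uncertainty of the mean : {u_mean:.3f} V")
```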
“The inference here is that the output quality of an evaluation doesn’t improve, with more properly vetted data, whether or not that data is error banded. It ALWAYS does…..”
I’m not sure what you mean by “properly vetted data”. You can only improve the output quality of an evaluation if you can lower the uncertainty, you can’t do it by averaging more data points unless those data points are measuring the same thing using the same measurement device multiple times.
Temperature data simply doesn’t meet that qualification.
“You can only improve the output quality of an evaluation if you can lower the uncertainty, you can’t do it by averaging more data points unless those data points are measuring the same thing using the same measurement device multiple times.”
We’re not discussing “averaging”, but yes, you can almost ALWAYS use more data to improve output quality.. We’re discussing areal interpolation of data, to arrive at areal expected values. And then trends in those expected values over time. Yes, the more data that can be used – i.e. data that has error bands similar enough to extant data to reduce output error (as in this case)- ALWAYS improves outputs. I.e. the ad hominem attack on the CLT is not based in fact. But feel free to describe a situation where you can’t, since this contention was first raised, without any technical backup, not by me…
Marlo,
RTC sensors are not linear either. You need a calibration curve that can be applied to the actual output value.
In addition, no temperature measurement device uses RTC outputs directly. The output of an RTC sensor is either a voltage or current value. This has to be converted to a temperature. If the conversion circuitry is temperature dependent at all then that dependency adds another uncertainty to the measurement. If the RTC and/or conversion circuitry is age dependent then that adds yet another uncertainty.
An RTC based temperature measuring device may be more accurate than an old mercury thermometer but that doesn’t mean the RTC device doesn’t have its own uncertainty.
Hi Tim: Much of what you say is correct. I do not know about RTC or what that acronym means. I was speaking of RTD’s which use the resistance of platinum in which a tiny current is applied to measure the resistance change by reading the resulting voltage. RTDs are used because the curve is nearly linear over the range that is being measured. A tiny current is applied which mitigates the thermal heating caused by the tiny resistance… They are extremely precise vs other technologies. 4 wire RTDs are the most reliable in that the resistance of the conductor wires is measured and removed from the calculation in real time! That way you are not measuring the varying resistance of the conductor wires.
Ummmm, that should be 4.1v and 3.8v. Avg = 3.95v.
bigoilbob –> “We’re not discussing “averaging”, but yes, you can almost ALWAYS use more data to improve output quality.. We’re discussing areal interpolation of data, to arrive at areal expected values. ”
The only way to “improve output quality” is to measure the SAME THING with the SAME DEVICE and to redo the measurement MULTIPLE times. Assuming that this gives a Gaussian distribution (normal with random errors) you may take an average to obtain a true value. Since you are using the same device, the precision can not be increased by averaging or interpolating. You must use the rules of significant digits to preserve the integrity of the precision of measurement.
Measurements of temperatures at different times are NOT measurements of the same thing. Consequently, there are no random errors to build a statistical distribution that can be used to calculate (or interpolate) a better or higher precision “true value” for either an earlier or later measurement.
Measurements with different devices likewise provide no random errors to build a statistical distribution that can be used to calculate (or interpolate) a better or higher precision “true value” for a different device, period.
Integrity of measurements is an important issue in many endeavors. Climate science seems to have tossed this and uncertainty out of the window in favor of being able to find 1/100ths of change in integer temperatures.
“The only way to “improve output quality” is to measure the SAME THING with the SAME DEVICE and to redo the measurement MULTIPLE times.”
Uh, no. The parameters of measurement accuracy, resolution, are the same for any measurement device/procedure. And they are ALL known. From the thermometers used 100 years ago, to all of the modern electronic and radiative methods, the evaluators are good. They correct for any inherent bias (adjust their sights), and account for the KNOWN collection/instrumental/recording distributions. We end up with data, all having expected values and distributions. If those distributions are not identical, it matters not at all. Same with the weightings that result from spatial interpolation. It all gets evaluated using methodology as old as the oldest data we are now discussing and proven over and over.
“Assuming that this gives a Gaussian distribution (normal with random errors…”
No need for distributions to be Gaussian for stochastic evaluation. ANY distributions can be analyzed, properly, together. And “non random” errors can be easily handled with correlation matrices and inherent bias corrections. We KNOW all about these measurement processes, new and old, and the stochastic evaluative techniques are tried and true.
My colleagues who do geostatistics would laugh you out of the room. We use their outputs as reservoir sim inputs, and have added trillions in value as a result. All with input info a fraction as good as what is available to the folks trying to evaluate climate data…
Steven,
‘ yes because its not an average’
If it is not an average then why call it that?
If it is indeed ‘a prediction or expected value’,
let’s call it that then, shall we?
So let’s make the new catastrophic CC headlines something like: our best guess of the average global temperature last month was about ?? degrees C, with an SD of about 1 degree C.
No decimal places required as we simply don’t know.
As for the UAH Data, if the instruments used allow for the number of decimals it would then be an actual average of the measured data so that would be fine. But some uncertainty information would certainly help!
At least it’s not a made up number consisting of interpolations over vast areas without measurements and/or other (clever?) guesswork presented as an ‘average’ rather than as a guess.
Stay sane,
Willem
It’s a construct or constructed average just like the average daily temperature is a construct, usually just an average of the high and low, although it may be the average of 24 temperatures taken at even intervals or in some other way. Nothing at all wrong or unusual about a constructed average.
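A small sketch of how two such constructed averages can disagree: (Tmax + Tmin)/2 versus the mean of 24 hourly readings, computed for an invented, slightly asymmetric diurnal cycle.

```python
# Two common "constructed" daily averages computed from the same made-up
# hourly temperature curve generally differ.
import numpy as np

hours = np.arange(24)
x = (hours - 9) * np.pi / 12.0
# invented, slightly asymmetric diurnal cycle (degrees C)
hourly = 15.0 + 6.0 * np.sin(x) + 2.0 * np.cos(2.0 * x)

tmax_tmin_avg = (hourly.max() + hourly.min()) / 2.0   # (high + low) / 2
hourly_avg = hourly.mean()                            # mean of 24 readings

print(f"(Tmax + Tmin)/2     = {tmax_tmin_avg:.2f} C")
print(f"mean of 24 readings = {hourly_avg:.2f} C")
```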
Is there a way to average integers and arrive at 1/100ths of precision?
Can you average non-repeatable measurements and eliminate uncertainty?
So much for the hottest June evah! Who spreads this BS… GISS?
Who spreads this BS… GISS?
Maybe.
Gore, Steyer, Soros, Mann, Griff, Loydo, definitely.
“So much for the hottest June evah!”
Actually, at 0.43°C it is almost the hottest June in the UAH record. Only 2019, at 0.47°C is hotter; all other years are well behind.
Actually it’s the third warmest – 1998 was 0.57°C.
Though the last 12 months have been warmer than any 12 month period during the 1998 el Niño, and very close to the peak of the 2016 el Niño.
Yes, that’s right. Here are the top 6 Junes in UAH TLT V6.0, ranked
1998, 0.57
2019, 0.47
2020, 0.43
2016, 0.34
2015, 0.31
2010, 0.31
So taking the highest June temperature since the high in 1998 still shows a linear trend for June for the last 21 years of -0.05 C per decade.
Doesn’t look like runaway global warming to me.
So what caused them?
Weather models have shown plumes of hot air moving north in both the USA and Europe in June, and I am also told in Asia. It is just weather patterns, and the continents in the northern hemisphere have warmed because of this. It seems strange to me that the anomaly has cooled from last month, but it does not surprise me that it was a hot June.
Nick –> Each on of those measurements should have an uncertainty associated with them. It is not scientific to quote them as both utterly accurate and precise.
“Each on of those measurements should have an uncertainty associated with them. It is not scientific to quote them as both utterly accurate and precise.”
The headline of this article, and every data point in it, quotes temperatures without uncertainty. I quoted Roy’s results too. You’ll have to take that up with him.
Jim Gorman July 3, 2020 at 6:28 am
Nick –> Each on of those measurements should have an uncertainty associated with them. It is not scientific to quote them as both utterly accurate and precise.
RSS who produce the rival MSU/AMSU dataset says the following:
“WHY STUDY THE UNCERTAINTY?
Without realistic uncertainty estimates we are not doing science!
In the past, numerous conclusions have been drawn from MSU/AMSU data with little regard to the long term uncertainty in the data.
Most previous error analyses for MSU/AMSU data sets have focused on decadal-scale trends in global-scale means, while in contrast, many applications are focused on shorter time scales and smaller spatial scales.
Here we describe a comprehensive analysis of the uncertainty in the RSS MSU/AMSU products. The results can be used to evaluate the estimated uncertainty on all relevant temporal and spatial scales.”
http://www.remss.com/measurements/upper-air-temperature/#Uncertainty
How large is the measurement error again? Oh, Dr. Spencer doesn’t know? Then how can you know it’s the third warmest? You know nothing.
Yes, without error bands you can’t calc the probability that it is indeed the warmest, when comparing any two. And I wish that every value in the tables came with them (I looked, but if they are available, link me). Maybe they need to say “probably” the warmest, 3d warmest, etc.
But do you really think that these error bands are not calculable, or calculated? I don’t. Rather, I think that since they are from so many data points, even if THOSE points have larger errors, the resultant error is so small that the probability that a monthly eval ranking is wrong is quite, quite small.
Bigger pic, monthly rankings are just attention getters. The money parameters are trends over statistically/climatically significant periods, and their statistical durability. Over those time periods, with every data eval that DOES provide error bands (other temps, sea level, what else?), the trends and their statistical durabilities are changed almost not at all by the monthly error bands.
“Then how can you know it’s the third warmest? You know nothing.”
So what is the point of this article? Or any of the monthly articles with UAH results? We can’t be certain so we know nothing.
bigolbob,
“But do you really think that these error bands are not calculable, or calculated? I don’t. Rather, I think that since they are from so many data points, even if THOSE points have larger errors, the resultant error is so small that the probability that a monthly eval ranking is wrong is quite, quite small.”
Uncertainties add, they don’t subtract. The more independent data points you have the greater the uncertainty interval becomes.
u_total = sqrt(u1^2 + u2^2 + u3^2 + u4^2 + u5^2 + …)
If you are measuring the same thing multiple times with the same device then the law of large numbers becomes useful. The average gives you a more accurate true value.
But if you are measuring temperatures at different times with different devices at different geographic locations then the law of large numbers doesn’t apply. The uncertainties of each measurement add by the root-mean-square.
“But if you are measuring temperatures at different times with different devices at different geographic locations then the law of large numbers doesn’t apply. The uncertainties of each measurement add by the root-mean-square.”
Yes, in general, it does apply. Temp data with an error band, and with any correlation matrices of those errors, is still just data. Whether it’s eyeballed, collected electronically, remotely, whatever, it’s still evaluable data. And it all can be evaluated together.
And just because the values are distributed differently means nothing w.r.t. their evaluation. I.e. if older values, using older tech, have wider (but still known) error bands, they can not only be spatially interpolated at time intervals, but they can also be trended over time. Simply with different temporal error bands for each time. Not only that, but the durability of the resultant trend can also be so calculated. No “apples and oranges” involved. FYI, the big improver here is almost ALWAYS data QUANTITY.
Again, please provide an example showing why your claim that the CLT is not generally applicable is valid.
Just caught this
“Uncertainties add, they don’t subtract.”
So, the CLT is wrong? Please link me to this adder to our knowledge base, since Engineering Statistics 201, way back when.
bigoilbob –> “So, the CLT is wrong? Please link me to this adder to our knowledge base, since Engineering Statistics 201, way back when.”
(emphasis mine)
“Laplace and his contemporaries were interested in the theorem primarily because of its importance in *repeated measurements of the same quantity*. If the individual measurements could be viewed as approximately independent and identically distributed, then their mean could be approximated by a normal distribution.” From: https://www.britannica.com/science/central-limit-theorem
Please take note of the emphasized phrase. It is an extremely important issue with the determination of a “true value” of a measurement.
Another application of the CLT is with sampling. This allows one to determine a mean for a population where only a small part of the population is sampled and where the population is not necessarily normally distributed.
The population of temperature data from a station is the entire population. Sampling a fixed and finite population to determine a mean is worthless. Simply compute the population mean as usual. I have checked this myself. Sampling will only give you the simple population mean.
The real issue is combining station populations with different variances and most do have different variances due to many varying geographical locations.
When you do this you must calculate the combined variance, and this is not a simple average of the variances. See this reference for the math behind calculating a combined variance for different populations: https://www.emathzone.com/tutorials/basic-statistics/combined-variance.html
Lastly, I want to reiterate that the “error of the mean” calculation does not allow one to artificially increase the precision of measurements, in other words, add significant digits. It is a statistical descriptor of how close the mean is to the actual mean. It has nothing to do with the significant digits available from measurements.
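For what it is worth, here is a short sketch of the combined-variance formula from that reference, using invented station values and checked against the variance of the pooled data.

```python
# Sketch of the combined-variance formula from the linked reference: the
# variance of two pooled groups is not the simple average of their variances.
# Station values here are invented purely for illustration.
import numpy as np

a = np.array([10.0, 12.0, 11.0, 13.0])        # "station A" values (made up)
b = np.array([20.0, 22.0, 21.0, 23.0, 24.0])  # "station B" values (made up)

n1, n2 = len(a), len(b)
m1, m2 = a.mean(), b.mean()
v1, v2 = a.var(), b.var()                     # population variances (ddof=0)

mc = (n1 * m1 + n2 * m2) / (n1 + n2)          # combined mean
d1, d2 = m1 - mc, m2 - mc
vc = (n1 * (v1 + d1**2) + n2 * (v2 + d2**2)) / (n1 + n2)

print(f"combined variance (formula)     : {vc:.4f}")
print(f"variance of pooled data (check) : {np.concatenate([a, b]).var():.4f}")
print(f"simple average of the variances : {(v1 + v2) / 2:.4f}")
```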
Are you aware that your combined variance link is exactly what I’ve been saying, all along?
W.r.t. different measurement methods being incompatible for evaluations, you should probably let EVERY petroleum engineer and geoscientist in on it. We routinely combine data gathered from many different sources for a single reservoir evaluation. Sometimes over a dozen. For both rock and fluid properties. We also routinely calculate correlations between those properties (porosity and permeability are probably the 2 most accessible by outsiders), and use these in sim runs. Lots of parameters of success. Good history matches. Money in the corporate coffers….
“The real issue is combining station populations with different variances and most do have different variances due to many varying geographical locations.”
You don’t understand stochastic evaluation. No problem, many don’t. There is no “combination” involved. Rather, every distributed data point is sampled, either randomly or according to an overlying correlation coefficient. Then the sample values are evaluated according to what you’re trying to achieve. Then an output value is found. Then it’s done again, with different samples. And again. And again. The process is repeated until the collection of outputs adequately represents the output distributions.
Again, different sources, doesn’t matter. Different distribution types, doesn’t matter. Correlations between parameters, if you know them, doesn’t matter.
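A minimal sketch of that repeated-sampling loop, with three made-up input distributions and an arbitrary output function standing in for a real evaluation.

```python
# Minimal sketch of the repeated-sampling procedure described above: each
# uncertain input is drawn from its own (made-up) distribution, an output is
# computed, and the collection of outputs gives an output distribution.
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

# three hypothetical inputs with different distribution shapes
a = rng.normal(loc=10.0, scale=0.5, size=n_trials)                 # Gaussian
b = rng.uniform(low=1.8, high=2.2, size=n_trials)                  # uniform
c = rng.triangular(left=0.9, mode=1.0, right=1.3, size=n_trials)   # skewed

output = a * b / c          # arbitrary output function for illustration

lo, med, hi = np.percentile(output, [10, 50, 90])
print(f"P10 = {lo:.2f}, P50 = {med:.2f}, P90 = {hi:.2f}")
```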
TC in the OC
“So taking the highest June temperature since the high in 1998 still shows a linear trend for June for the last 21 years of -0.05 C per decade.”
______________________________________________
Not sure where you got that figure. Firstly, it’s 23 years since June 1998. Secondly, the linear trend for June temperatures in UAH since 1998 is +0.12 C/Dec. That’s faster than the full June trend in UAH over their whole record, since June 1979 (+0.11 C/Dec).
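For anyone who wants to check the competing June-only figures, here is a sketch along the lines of the earlier one; the column layout of the linked UAH file is again an assumption to verify.

```python
# Sketch of how a June-only trend could be checked from the UAH file linked
# in the article (column layout assumed, as before).
import numpy as np
import urllib.request

URL = "http://vortex.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt"

rows = []
with urllib.request.urlopen(URL) as f:
    text = f.read().decode()
for line in text.splitlines()[1:]:
    parts = line.split()
    if len(parts) > 3 and parts[0].isdigit() and parts[1].isdigit():
        rows.append((int(parts[0]), int(parts[1]), float(parts[2])))

years, months, globe = np.array(rows).T
june = (months == 6) & (years >= 1998)
slope, _ = np.polyfit(years[june], globe[june], 1)   # least-squares fit
print(f"June-only trend since 1998: {slope * 10:+.2f} C/decade")
```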
Not quite: June 2016 and June 1998 were also higher. Albeit El Nino years.
But goofy GISS and balmy BEST think we are at a +1.1°C global anomaly from the same 14.0°C baseline that UAH uses. Go figure.
Never fall for bait-click media proclaiming hottest (fill in the blank___________) ever. Never!
NASA needs to give up climate and work on outer space where they belong. Both NOAA and NWS are closer to reality.
Even upstart http://temperature.global/ shows we are at 0.0°C anomaly since 2015 from same baseline…. based on over 60000 T stations – combo land and sea buoys.
“But goofy GISS and balmy BEST think we are at a +1.1°C global anomaly from the same 14.0°C baseline that UAH uses.”
Neither GISS or BEST will have released figures for June yet – that normally happens mid-month.
I don’t know where you get your 1.1°C from – using the same baseline as UAH (1981 – 2010), last month was +0.62°C according to GISS.
From the GISS website:
[GISS global temperature anomaly graph]
This one (2019) is way up at +1.0. More recent 2020 ones (now gone from site) were at +1.1.
These people are nuts. Their alarmist agenda corrupts everything they do.
“From the GISS website”
Which clearly states is using the 1951 – 1980 baseline. For obvious reasons UAH don’t use that baseline, they use the 1981 -2010 period.
Ah-ha… now we’re getting somewhere. So what exactly is the 1951-1980 baseline in °C?
Reply here: _________________________
No one EVER discloses that on their graphs. Why? It must be some oddball value like 13.3°C to match up with 1981-2010 of 14.0°.
This is why I hate undefined anomalies. The old ones simply add alarmism for the purpose of scaring people into accepting “carbon” taxes, etc. No no no.
NOAA and NWS say the last 40-year average is 14.0000°C. We are now only 0.43° above that. Period, full stop. A future grand solar minimum will get us back to zero anomaly soon.
If not, the very weak SC25 maximum will; it’s all downhill from now.
“…Actually, at 0.43°C it is almost the hottest June in the UAH record…”
So what you’re saying is that you agree with, “So much for the hottest June evah!”
Wrong
” June 2010 UAH Global Temperature Update: +0.44 deg. C”
So not only 2019, June 2009 ties at 0.43c
“Wrong”
You are.
June 2010, +0.31°C
June 2009, -0.16°C
No I’m not
Get your facts right bellend
You’re just making data up, bellend. The figures you produced have nothing to do with UAH at all, per Roy’s archive. Why are you misleading people?
Here’s the data for 2009/10. Where do you get -0.16?
June 2010 UAH Global Temperature Update: +0.44 deg. C, Thursday, July 1st, 2010
YR MON GLOBE NH SH TROPICS
2009 1 0.251 0.472 0.030 -0.068
2009 2 0.247 0.564 -0.071 -0.045
2009 3 0.191 0.324 0.058 -0.159
2009 4 0.162 0.316 0.008 0.012
2009 5 0.140 0.161 0.119 -0.059
2009 6 0.043 -0.017 0.103 0.110
2009 7 0.429 0.189 0.668 0.506
2009 8 0.242 0.235 0.248 0.406
2009 9 0.505 0.597 0.413 0.594
2009 10 0.362 0.332 0.393 0.383
2009 11 0.498 0.453 0.543 0.479
2009 12 0.284 0.358 0.211 0.506
2010 1 0.648 0.860 0.436 0.681
2010 2 0.603 0.720 0.486 0.791
2010 3 0.653 0.850 0.455 0.726
2010 4 0.501 0.799 0.203 0.633
2010 5 0.534 0.775 0.292 0.708
2010 6 0.436 0.552 0.321 0.475
“Get your facts right bellend”
You’re looking at the old version 5 data. I’d hate to think what June 2020 would be like using that obsolete version.
“June 2010 UAH Global Temperature Update”
I don’t know how to break this to you but we are now in 2020. Quite a lot’s happened since then.
Have you tried comparing the UAH v6 graph with v5? There’s no difference, which makes your claim of 0.57 for June 1998 suspect as well. You haven’t even provided a data source.
“You haven’t even provided a data source.”
Data source is in the article:
http://vortex.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt
and UAH5 and UAH6 are very different – as was discussed when version 6 was first released. For one thing it decreased the rate of warming from 0.14° to 0.114°C / decade.
OK, so they changed to UAH v6 in 2015, never changed the data and graphs in the archive, and put no disclaimer on the archive material. That’s misleading. I do stand corrected.
You’re looking at the old data
Translation: The old data didn’t show what we wanted to see, so we cooked it some more.
MarkW
“Translation: The old data didn’t show what we wanted to see, so we cooked it some more.”
So you’re accusing Dr Roy Spencer of committing fraud to get the results he wanted? I’m not a fan of Spencer, but I consider that an outrageous accusation.
No… the graph shows 2010 (not 2009) ties at +0.43.
Anyway… the point remains valid: June 2020 is NOT the hottest June evah.
When can we expect the headline retractions?
No it does not. Here’s the data from Roy’s archive:
June 2010 UAH Global Temperature Update: +0.44 deg. C, Thursday, July 1st, 2010
What can’t you understand about the above?
YR MON GLOBE NH SH TROPICS
2009 1 0.251 0.472 0.030 -0.068
2009 2 0.247 0.564 -0.071 -0.045
2009 3 0.191 0.324 0.058 -0.159
2009 4 0.162 0.316 0.008 0.012
2009 5 0.140 0.161 0.119 -0.059
2009 6 0.043 -0.017 0.103 0.110
2009 7 0.429 0.189 0.668 0.506
2009 8 0.242 0.235 0.248 0.406
2009 9 0.505 0.597 0.413 0.594
2009 10 0.362 0.332 0.393 0.383
2009 11 0.498 0.453 0.543 0.479
2009 12 0.284 0.358 0.211 0.506
2010 1 0.648 0.860 0.436 0.681
2010 2 0.603 0.720 0.486 0.791
2010 3 0.653 0.850 0.455 0.726
2010 4 0.501 0.799 0.203 0.633
2010 5 0.534 0.775 0.292 0.708
2010 6 0.436 0.552 0.321 0.475
Read the data 2009 6month 0.43
“When can we expect the headline retractions?”
What headlines? Which data set were they predicting would be the hottest evah? Why can’t they spell?
B.d,: I stand corrected. Didn’t have the data chart. Graph looked like 2010.
That’s ok, thanks.
Wait a minute: Chart data 2009 June shows 0.043 not 0.43. Back to you.
2009 6 0.043 -0.017 0.103 0.110
Not on the graph it doesn’t. I don’t know why they moved a decimal point over on the written data.
It does not show 0.043, it’s 0.43. Each date, e.g. 1 0, 2 0, etc., has a 0 after the month number.
So June reads 6 0. 043
B.d.: The graph and written data agree. Look to the right of 2009 and 2010 graph lines…. about 50% over for June. 2009 is close to zero and 2010 is a bit less than halfway to 1. So chart decimal point is OK.
Roy taught me how to chart graphs back in college. Before xls existed lol.
Well it’s not see my previous post
No… look closely at the graph. Halfway past the 2009 line there is a blue circle BELOW 0.1. That is the June 0.043° value. Many of the other 2009 data points are also quite low… in the 0.1 – 0.3 range. June is the coolest point.
YR MON GLOBE NH SH TROPICS
2009 1 0.251 0.472 0.030 -0.068
2009 2 0.247 0.564 -0.071 -0.045
2009 3 0.191 0.324 0.058 -0.159
2009 4 0.162 0.316 0.008 0.012
2009 5 0.140 0.161 0.119 -0.059
2009 6 0.043 -0.017 0.103 0.110
2009 7 0.429 0.189 0.668 0.506
2009 8 0.242 0.235 0.248 0.406
2009 9 0.505 0.597 0.413 0.594
2009 10 0.362 0.332 0.393 0.383
2009 11 0.498 0.453 0.543 0.479
2009 12 0.284 0.358 0.211 0.506
Strokes – The planet has been warming since the 1690s, so there will be many hottest ‘evah months until the warming trend ends.
Record high months exist because all global average temperature compilations were DURING a warming trend.
Based on ice core proxies, the warming trend will end and a cooling trend will begin someday.
Perhaps when the Holocene interglacial ends.
Then climate alarmists like you can warn of the coming global cooling crisis.
Earth’s climate is wonderful, and has been getting better for 325+ years — why don’t you find a real crisis to write about?
”Actually, at 0.43°C it is almost the hottest June in the UAH record”
Because it’s still on the way down from a 2019 high.
lol
It’s been warming for over 150 years coming out of the LIA. Tide gauge trends indicate a slow steady *climate level trend* (leaving out the +/- 0.35° “noise” that Alarmists use to create panic).
Having frequent new high readings IS THE EXPECTED result while long term trends continue. If you call out new highs during a trend, you look stupid. Then Alarmist propagandists purposefully confuse these new high readings with “all time” (meaning the last 100 years) local high temperatures, which are more significant… at least to the locals.
Speaking of long term trends… statistically, you don’t get to pick some point along the “trending” period and then assign a new cause to it… at least not without identifying the earlier cause (before ~1945… which has never been explained) and then showing how the original (still unidentified) cause ceased… and the new cause emerged AT A SINGLE POINT IN TIME. That’s especially difficult when all the tide gauges in the world indicate that the climate level “BEFORE” and “AFTER” trends are exactly the same.
The Null Hypothesis:
“The Modern Warming is the continuation of a longer term trend”.
THAT IS WHERE THE SCIENTIFIC METHOD DEMANDS the arguments start (or thereabouts). *Amazingly, that first critical step in the normal application of the scientific method has yet to be done. That’s because this isn’t a good and credible scientific investigation. It’s Political Advocacy using corrupted science.
Every Institution Leftists control has been corrupted. And that’s most of our institutions.
Out of interest, who has predicted June will be the hottest “evah”?
I’m just shocked they can’t generate data that maintains a trend line any better than that.
“I’m just shocked they can’t generate data”
They? Roy?
Not Roy. The UAH satellite. Roy just reports the results.
Don’t trust Roy? Then try appointing Algore ha ha.
“Roy just reports the results.”
I’m pretty sure he’s also responsible for all the calculations as well.
Come on, it’s a climate emergency. Didn’t you get the memo? /sarc
And now for the good news. The world is on track for global warming of just 1.4 C per century, meeting the IPCC’s target, so cancel the next boondoggle, and all the IPCC members and their “experts” can pack their bags and take a slow boat home.
“The world is on track for global warming of just 1.4 C per century, meeting the IPCC’s target”
I think the target was 2C… so beating instead of meeting me thinks… 🙂
We all know the response of the IPCC to insufficient warming:
Global population will peak in 40 years, then a steep decline. At 0.14 degrees C per decade, there is no problem.
mario lento
“I think the target was 2C… so beating instead of meeting me thinks…”
_____________________________________
I believe you may be referring to the 2007 IPCC projection, which stated:
“For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios.” https://archive.ipcc.ch/publications_and_data/ar4/wg1/en/spmsspm-projections-of.html
According to UAH, the rate of warming since 2007 is presently 0.3°C per decade: https://woodfortrees.org/plot/uah6/from:2007/plot/uah6/from:2007/trend
Warming will actually have to slow down over the next few years for the IPCC 2007 projection to be right!
Yes: They often write “…far overshooting a global target of limiting the increase to 2C (3.6F) or less, the U.N. World Meteorological…”
If the Greenies deny the life-giving forces of sunshine and carbon dioxide, why do they not extend their fears and condemnations to, say, seawater and ethyl alcohol: both of these, though products of nature, can kill one way or another?
Perhaps because their dangers cannot be analysed and timed by crude and usually misleading computer programs, as used to predict weather forecasts, whose unreliability is notorious.
The Greens’ confidence in computer programs and their interpretation and politicisation and moneymaking potential explains their usually erroneous predictions and unwillingness to accept alternative opinion.
Blizzard in China, https://youtu.be/0SAR0abWr1Y
June 29
400-odd sheep dead; they had just been sheared for the summer.
For the last 2 months China has had large streams of moisture moving across their nation. The flow is coming from this surface wind pattern, … https://earth.nullschool.net/#current/wind/surface/level/overlay=total_cloud_water/orthographic=-273.49,8.73,672/loc=66.365,9.187
The moisture stream then continues on into Canada/Alaska. In the several months prior to May the same stream carried persistent storm tracks into the Pacific Northwest states. It was an unusually wet spring around here. The blackberry bushes are filled with green berries that should bear abundant fruit into the early fall.
I don’t need to see any data to know that I’m freezing my ass off in Argentina.
Put some clothes on.
There’s no such a thing as bad weather, just bad clothing. (Norwegian saying, apparently).
Pretty cold in Perth Australia too, even with clothes on. But it is winter.
German too 😀
So do I, in Chile.
Hola, hermano. I spent two years living in and near Santiago many years ago.. I loved every minute of living in Chile.
Here’s another perspective:
I guess you could also make a similar looking chart of, say, human body temperature, stretching from lethally cold on the left to lethally hot on the right, yet make it look innocuous by stretching the y-axis sufficiently, as you have done here.
Yes, I could most certainly do that, if I were inclined to equate the human body to the Earth/Atmosphere system, losing all sense of context, neglecting the vast differences between bodily physiological processes and terrestrial physical processes, but, of course, that’s not what I am doing. (^_^)
I could also do it with plutonium exposure and not have to bother with the y axis at all, which, of course, would mean that I was now equating plutonium exposure to, say, carbon dioxide exposure, again losing all sense of context and the specific physical laws applying specifically to those specific contexts.
Within the context of temperature change in the Earth/atmosphere system, where human life is concerned, even a temperature anomaly that varies within one degree over decades is still extraordinarily stable, and that’s what I was getting at.
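A quick sketch of the axis-range point: the same invented anomaly series plotted twice, once on a narrow y-range and once on a range scaled like absolute temperatures, gives very different visual impressions.

```python
# The same (made-up) anomaly series plotted with two different y-axis ranges.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
years = np.arange(1979, 2021)
anom = 0.014 * (years - 1979) + rng.normal(0, 0.1, size=years.size)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3))
ax1.plot(years, anom)
ax1.set_ylim(-1, 1)            # narrow range: the trend dominates
ax2.plot(years, anom)
ax2.set_ylim(-20, 40)          # range scaled like absolute temps: looks flat
for ax in (ax1, ax2):
    ax.set_xlabel("year")
    ax.set_ylabel("anomaly (C)")
plt.tight_layout()
plt.show()
```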
On verge of a La Nina.
Surface measurement temps falling wildly.
Does the Version 6.0 global average lower tropospheric temperature (LT) anomaly have a lag that Roy is not mentioning?
Are the satellite drifts needing updating??
Bring on V7
Verge of La Nina? I noticed 2 days ago that the ENSO meter here had suddenly increased sharply. No idea what that implies, but I’m a bit surprised no one remarked on it. But of course the last few months have been rather distracting.
Keep in mind that satellite data usually lags 3-4 months behind when it comes to ENSO effects. June is affected by the ENSO conditions around March. We were still under the influence of El Nino at that time. We have to wait until August before we see post-El Nino data.
But, even then the effects of the El Nino often hang around. It took over a year for the 2016 El Nino effects to disappear. Now, if a La Nina does show up later this year that would accelerate the process
Meanwhile in the desert we about froze last winter, many things did poorly in the greenhouse. Had a cool spring and summer has been cooler than average most of the time. We may not even get over 112 deg this summer.
The Three Gorges dam is being threatened by the heavy continuous rains in southern China, … https://www.taiwannews.com.tw/en/news/3955518
Although this is a discussion about Covid 19 there is a salient point made at around 5 minutes into the discussion about the validity of models, modification of models to match observed data and their relative use or not for hindcasting.
If these temps are to be considered measurements there should be uncertainty associated with them. I was unable to find any info concerning uncertainty at UAH. If there is no uncertainty, then they cannot be considered to be absolute measurements. Even anomaly values would be questionable when compared to others. UAH could only be used by itself and not in conjunction with any other temperature database.
From looking at the global SST map, it seems very likely the Atlantic is entering its 30-year cool cycle , and a La Niña cycle is developing. Moreover, about 1/3rd of the South Indian, South Pacific and Southern Oceans are cooler than normal.
Therefore, global temps will likely be falling for the next 2 years from the La Niña cycle, and if the AMO does enter its 30-year cool cycle, we could have 30 years of global cooling.
https://www.ospo.noaa.gov/Products/ocean/sst/anomaly/
From June of this year, NOAA nefariously added a +-0.2C gray scale to the global SST map to “hide the decline“, and when I e-mailed NOAA asking them why they started this, they said they’ve always done this (a lie), and that “basically +-0.2 is the same as 0. so there is no reason to differentiate.” …. Oh, really? Try that “logic” with the IRS and see what happens…
“HO-HO HEY-HEY DEFUND THE NOAA!!!!”
if the AMO does enter its 30-year cool cycle, we could have 30 years of global cooling.
The AMO was in its cool cycle between 1967 & 1997. I’m not sure there was much global cooling during that period.
https://en.wikipedia.org/wiki/Atlantic_multidecadal_oscillation#/media/File:Atlantic_Multidecadal_Oscillation.svg
Do you just grab at any passing straw in the hope one might explain 50 years of warming? The net effect of ocean oscillations is zero. Air temperatures respond temporarily to these events. They have no long-term influence as they don’t add to or reduce the earth’s heat energy.