Anyone who follows the climate debate closely will have heard the recent announcement that July 3 and 4 were allegedly the ‘hottest days ever’. This claim, it turns out, is about as solid as quicksand, built on a shaky foundation of guesswork and political objectives.

The Untrustworthy Tapestry of Temperature Measurement
As Steve Milloy astutely highlights in his recent Wall Street Journal article, “Hottest Days Ever? Don’t Believe It,” the recorded average global temperature of 62.6 degrees Fahrenheit for these dates was derived from the University of Maine’s Climate Reanalyzer. As Milloy notes, this system “relies on a mix of satellite temperature data and computer-model guesstimation to calculate estimates of temperature.”
In the realm of scientific research, ‘guesstimation’ is hardly a term that inspires confidence. As Milloy succinctly puts it,
“there are no satellite data from 125,000 years ago. Calculated estimates of current temperatures can’t be fairly compared with guesses of global temperature from thousands of years ago.”
https://www.wsj.com/articles/hottest-days-ever-dont-believe-it-global-temperature-north-sole-poles-6e64a991?mod=opinion_major_pos4#comments_sector
The Misleading Mirage of ‘Average Global Temperature’
Furthermore, Milloy debunks the fallacy of ‘average global temperature’. He states,
“Average global temperature is a concept invented by and for the global-warming hypothesis. It is more a political concept than a scientific one.”
Seasonal changes and regional disparities, as Milloy points out, severely compromise the concept of a singular ‘global temperature.’ For instance, temperatures rise during the Northern Hemisphere’s summer due to increased land exposure to sunlight, a fact conveniently overlooked by the climate alarmist contingent.
The Blind Spots and Blur in Climate Data
The lack of precision in temperature data is another problem Milloy brings to the fore. An alarming 96% of U.S. temperature stations reportedly produce corrupted data, and about 92% of them have a margin of error of nearly 2 degrees Fahrenheit. So, next time you hear about a 1-degree rise in global temperatures, remember that the margin of error itself exceeds the supposed increase!
Our friends at the National Oceanic and Atmospheric Administration present global temperatures starting from 1880, but regular temperature collection in remote regions like the north and south poles came much later. Can we really make meaningful and accurate comparisons when we’ve been effectively blindfolded for a significant portion of our observational timeline?
The Dilemma of Characterizing Earth’s Warming
As Milloy so eloquently sums up,
“It isn’t plausible to characterize Earth’s warming in a single average number, especially when we don’t really know what that number is today, much less from 125,000 years ago.”
This, ladies and gentlemen, hits the nail right on the head.
So, the next time you hear the phrase ‘hottest day ever,’ remember to take it with a pinch of salt. Perhaps even a bucketful. After all, a touch of skepticism in the face of potentially misleading data can go a long way in keeping us grounded in the realities of our complex and ever-changing climate.
What’s your take on the ‘hottest days ever’ and the average global temperature controversy? I’d love to hear your thoughts in the comments below.
What’s really crazy is that people just believe this stuff without questioning it. They assume that NOAA and the WMO are completely unbiased and only want to ‘save the world.’
When NOAA published that one year was the warmest, and I saw that this was based on 0.01 °C more with an error of ±0.05 °C, they lost all credibility with me.
the error is more likely around the +/- 1 or 2 degrees C
The mathematical uncertainty might be +/- 1-2° C but the actual error range is likely far higher.
Yeah. Just think of a number and double it.
It certainly is not some 0.05C
That is a mathematical and measurement absurdity.
It’s less than the uncertainty on individual instruments.
It doesn’t cover problems with instrument siting.
It doesn’t cover changes in the micro and macro siting issues.
It doesn’t cover the pathetically inadequate number of sensors.
The uncertainty in terms of a 95% confidence range gets much smaller when lots of instruments are used. Most of the instruments at stations with siting issues did not have a change in those siting issues, especially from the 2016 El Niño global temperature spike onward.
Not true. The only way to assert this is to treat each station as a sample so that you can create a sampling distribution. But then you damn yourself to finding the variance of a sample-means distribution that includes both summer and winter temps due to NH and SH differences. If you try to claim one sample that includes all stations in order to increase the sample size, then you have no sample-means distribution. At best, you could say that the variance of the single sample gives the SEM, but then you are back to the hemisphere problem of a very large variance.
In any event, anomalies are calculated at each station before averaging those to a single global value. The problem with that is losing the ability to quote an actual global absolute temperature. Plus the fact that the “error” is calculated from the variance of the anomalies themselves rather than carrying the variance from the subtraction of the absolute monthly average temperature and a baseline temperature. Those must be treated as random variables and their variances added.
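The variance-addition point can be sketched with made-up numbers (the two uncertainties below are hypothetical, purely for illustration, not real station data):

```python
import math

# Hypothetical standard uncertainties (degrees C) -- illustrative only.
u_monthly = 0.6    # uncertainty of a station's monthly absolute mean
u_baseline = 0.3   # uncertainty of its 30-year baseline mean

# Treating the anomaly (monthly mean - baseline) as a difference of two
# independent random variables, the variances add:
u_anomaly = math.sqrt(u_monthly**2 + u_baseline**2)
print(round(u_anomaly, 3))  # 0.671
```

Note that the combined uncertainty is larger than either input, which is the commenter’s point: the spread of the anomalies alone understates it.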
I have been working on this for various locations using the method outlined in TN 1900 Example 2. You would be surprised at the large uncertainty intervals you get when following those methods for experimental uncertainty.
Specious sophistry.
Where do people get these ideas? You first have to assume that a Student’s t distribution is OK to use. A lot of sites recommend this for small numbers of experimental measurements. Then you have to determine the degrees of freedom and look that up in a t-table to get the correct multiplier. You will never reduce the SEM by doing this.
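The t-table procedure described here can be sketched in standard-library Python (the daily values are invented, and the t factor is hard-coded from a table rather than computed):

```python
import math
import statistics

# Invented daily Tmax readings (degrees C), standing in for the kind of
# data NIST TN 1900 Example 2 analyzes.
tmax = [25.1, 24.3, 26.8, 23.9, 25.5, 27.0, 24.8, 25.9, 26.2, 24.1]

n = len(tmax)
mean = statistics.mean(tmax)
s = statistics.stdev(tmax)      # sample standard deviation
sem = s / math.sqrt(n)          # standard error of the mean

# 97.5th percentile of Student's t with n - 1 = 9 degrees of freedom,
# taken from a t-table (wider than the normal distribution's 1.96).
t_factor = 2.262

half_width = t_factor * sem     # 95% expanded uncertainty
print(f"{mean:.2f} +/- {half_width:.2f} C")
```

As the comment says, the t multiplier widens the interval relative to the bare SEM; it never shrinks it.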
97
Talking about the climate models, Nick?
Is that the ECS today?
Why not? You do.
How about we start with your IQ, Nick – that should give us a nice low number to work with?
I’m no fan of Stokes, and he rarely replies to me even when I ask him a direct question. However, I think that insults such as about his IQ are inappropriate, contribute nothing to the discussion, and are almost certainly wrong. The problem lies elsewhere.
Sampling error reduces with increasing number of observations.
We didn’t have satellites 125,000 years ago, but we have ice cores, tree rings, and geological samples. Past climate history is far better understood than most commentators here realise.
For those who are genuinely interested in the derivation of an ‘average’ temperature, there is an explanation here:
https://www.realclimate.org/index.php/archives/2023/07/back-to-basics/
You will not get anything real from “realclimate”
It is the foremost climate propaganda site.
Gavin is a prime scammer, with his meaningless fabricated GISS surface data, created and fudged from surface sites that, in the majority, are totally unfit for studying global climate.
Why not cite a site that has some actual science?
RealClimate has always been a propaganda site.
Past climate history is far better understood at WUWT than just about anywhere else.
Go back to 2007 and start reading. Comment again when you get up-to-date.
“Past climate history is far better understood at WUWT than just about anywhere else.”
Yes, it is, and climate history is not kind to human-caused global warming/climate change advocates.
Climate history shows today’s warming is not unprecedented because it was warmer in the past than it is now.
Climate Alarmists want us to think we are living in the “hottest year/day Evah!”
We can’t grow barley in Greenland today. It could be grown in Greenland in the past. And then there are the tree stumps revealed as glaciers melt today, showing it was warmer in the past than it is now.
Here in the middle of Dust Bowl country in the U.S., we are having a fairly mild summer. Nothing like the hottest ever. Our hottest evah around here was in 1936 when it was about 120F at one point.
“Sampling error reduces with increasing number of observations” if and only if the “observations” are of the same thing, and if the observation errors are normally distributed. GAT fails on both counts.
The observations are indeed of the same thing – a temperature measurement is the same thing as a temperature measurement. Systematic biases are identified and removed from SAT products, though of course the possibility exists for unknown biases (but assuming the existence of such biases based on no evidence whatsoever is not scientific).
Prevarications up, down, and sideways.
You are joking, right? Tmax and Tmin are the same thing?
Read NIST TN 1900. NIST didn’t even try to call Tavg the same experiment under repeated conditions.
Read the GUM and the TN closely. “Experimental” uncertainty requires that measurements be taken under repeatable conditions. Something like ten runs of a chemical reaction, measuring the products each time. Every run will have a different value. The best that can be said is the mean with an accurate uncertainty interval. This will allow replicators to have a good idea where their values may lie.
This is what NIST did in TN 1900: they found the experimental mean value of Tmax and the resulting uncertainty interval for a month’s data.
ANYTIME you calculate a mean (average), you are doing it from a distribution of values. That distribution WILL have a variance. When was the last time you saw a variance quoted alongside a mean?
The June anomaly was calculated from an average of monthly absolute temps. Tell us what the variance of those average temperatures was.
You’ve spent way too much time clouded in your own fog. Bob and Tom are not the same thing, but I can calculate their average height. Bob, Tom, Judy, and Denise are not the same thing, but they are a sample of the human population, and I can take their average as an estimate of the average height of a human. I can do this even if I have to use different measuring tapes for each of them. If I can make my sample size much larger, I will have a better estimate of the average height of a human. If I measured every human on earth, I would have a perfect estimate because my sample mean would equal the population mean. This would be the case even if I used a different tape measure for each and every person. The confounding factors would be random measurement error and systematic bias. If I remove systematic bias I will be left with random error, which will tend to cancel.
If we identify some source of systematic bias we will have to revise our estimate, but it is not scientific to reject the estimate on the grounds that some hypothetical unknown systematic bias might be discovered at some future time.
“””””You’ve spent way too much time clouded in your own fog. Bob and Tom are not the same thing, but I can calculate their average height. Bob, Tom, Judy, and Denise are not the same thing, but they are a sample of the human population, and I can take their average as an estimate of the average height of a human.”””””
Yes, you can calculate an average. However, you seem to ignore the variance as an important statistical parameter.
Are you going to make clothes based on the average? What if Bob is 7′2″ and Denise is 5′? Will the clothes fit well? Maybe if you had known enough to look at the variance, you might have made two different sizes, or maybe four.
Read this to see the problems with only dealing with averages and ignoring variance.
“””””When U.S. air force discovered the flaw of averages”””””
“””””In the early 1950s, a young lieutenant realized the fatal flaw in the cockpit design of U.S. air force jets. Todd Rose explains in an excerpt from his book, The End of Average.”””””
If you are making clothes you probably want them to fit as many people as possible, so maybe you make your “Large” size fit the average male, your “Medium” to fit 1 standard deviation, and so forth.
But that isn’t the kind of question we are using the global mean temperature to determine. We aren’t trying to predict a temperature for a given location by using the mean, we are trying to determine whether the entire distribution of earth temperatures is shifting. For that, the mean is the perfect statistic to track, and the more observations we can use to calculate our mean the better.
Obviously there are other important questions surrounding this central one, such as “in which place(s) are the temperatures shifting the most?”, and we’ll need further analysis to explore those. But the central question, “Is a big change happening to Earth’s climate?”, is addressed very well by using the mean.
I see I forgot the link.
https://www.thestar.com/news/insight/2016/01/16/when-us-air-force-discovered-the-flaw-of-averages.html
A) typical alarmist resorting to ad hominems.
B) You cite a typical average that is utterly useless as most averages are.
C) Try averaging Bob and Jane! What is the result of that average, an unbalanced hermaphrodite?
An average that misinforms everyone involved just as global averages imperfectly accomplish!
Measuring a batch of air in the US and measuring a batch of air in Scotland, is not measuring the same thing.
To measure the same thing you need to measure the same batch of air using the same equipment.
Claiming that two batches of air, miles apart, using completely different instruments will improve the accuracy of both measurements is nonsense on stilts.
Politically useful nonsense, but still nonsense.
Would these folks fly in a plane or skydive if design measurements were taken this way? Would you trust parachutes whose average opening rate was trumpeted without knowing the variance? Would you trust a powerful medicine whose weight was determined by averaging a large quantity of pills without knowing the variance? Think fentanyl.
No one is claiming such a thing.
Just how do you think a Global AVERAGE Temperature is calculated?
Then tell us how a Global Average Variance is calculated.
“No man ever steps in the same river twice, for it’s not the same river and he’s not the same man.”
― Heraclitus
This summarizes well the issue of measuring something, and the distinction between stationarity and non-stationarity. If one wants to measure the diameter of a ball bearing (with negligible ellipticity) precisely, a high-quality micrometer and a temperature-controlled environment will suffice. The precision can be increased in proportion to the square-root of the number of measurements because the ball bearing has a unique diameter and the only variations encountered will be random and self-cancelling.
On the other hand, if one is measuring the temperature of the river of air passing a weather station, what one is doing is sampling air parcels that have no fixed temperature. The parcels tend to increase in temperature during the day and decrease at night, but also have unpredictable increases and decreases. That is, there are daily, seasonal, and annual trends in which not only the temperature changes, but also the mean and standard deviation change over time. A time-series of recorded temperatures is a classic example of non-stationarity.

One is not justified in claiming increased measurement precision simply by averaging many readings. What one obtains with many measurements is a distribution, with statistical parameters that vary with the starting and ending points of the sampling period. Notably, the standard deviation will increase if the time-series has a positive trend. The precision of the recorded temperatures is the inherent precision of the temperature-measuring device, because there is only one opportunity to measure a particular parcel of air with the same instrument. All subsequent measurements are of different parcels of air.

One can calculate the statistical parameters of a large number of samples, but presenting results with greater precision than the measuring device can support yields meaningless results. Indeed, the longer the time period, the less meaningful high-precision summary statistics become. What one obtains is the distribution of values over time, not a unique value.
If one has a large basket of fruit, and samples are taken to obtain an average weight, the best that one can legitimately claim is what the average is of all the samples of fruit. But, one cannot say anything meaningful about the weight of an unidentified fruit included in the sampling. Nor can one obtain any useful information by taking more samples because of the large number of different kinds of fruit. What is the point of claiming a high precision of things that vary in size between a watermelon and a kiwi?
Consider the case of measuring the height of a human. A useful approximation is obtained easily. However, an attempt at high precision is thwarted by the fact that one’s height changes through the course of the day, with how strongly the hair on the head is compressed, with posture, and with whether an instantaneous measurement falls during the systolic or diastolic phase of the blood pressure. That is, the boundary of the human body becomes fuzzy at smaller increments of length. The problem is further exacerbated in trying to obtain an average for a population of humans of different ages, ethnicities, and hairstyles. Trying to assign unwarranted precision to measurements is a fool’s errand.
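The stationary-versus-non-stationary distinction above can be sketched with a toy simulation (all numbers invented):

```python
import random
import statistics

random.seed(42)

# Case 1: a stationary measurand -- one ball bearing with a fixed diameter,
# measured repeatedly with random instrument noise (sigma = 0.01 mm).
true_diameter = 5.000
readings = [true_diameter + random.gauss(0, 0.01) for _ in range(10_000)]
# The error of the mean shrinks roughly as 1/sqrt(n):
print(abs(statistics.mean(readings) - true_diameter))

# Case 2: a non-stationary series -- readings with a warming trend, where
# each reading is a *different* parcel of air. Averaging more readings
# just averages over the trend; there is no single true value to recover.
temps = [15.0 + 0.001 * t + random.gauss(0, 0.5) for t in range(10_000)]
first_half = statistics.mean(temps[:5000])
second_half = statistics.mean(temps[5000:])
print(second_half - first_half)  # ~5: the mean depends on the window chosen
```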
Again, you are lost in the haze of your self-imposed confusion. This bizarre cult you’ve built around denying that things can be measured is bewildering to behold.
In your fruit example, you acknowledge that we can estimate the average size of the fruits in the basket with the sample mean. You also will concede (I dearly hope) that if we measure the size of more pieces of fruit, this quantity will be better known to us, because as the number of pieces measured grows it approaches the total number of pieces in the basket, and if we measured every piece we would know precisely what their average size is. Therefore we can use our estimate of the mean to make meaningful observations, such as whether the average size of fruit in the basket is changing (and we can then explore questions like whether some types of fruit are becoming more prevalent).
In regards to measuring air temperature, the thing we are attempting to quantify is not the temperature of a parcel of air that passes by the weather station as it moves through the atmosphere, but the mean temperature of the typical parcels of air around that weather station. More observations will certainly enable us to quantify this value better, and I think you will agree that we can quite precisely quantify a difference in these mean temperatures between, say, a weather station in Antarctica and a weather station in Arizona, US. Your (il)logic would force us into a situation in which we cannot claim to be able to distinguish between the climatology of these regions.
If the, as you say it, “distribution of [temperature] values” at a location shifts over many years, we properly define this as a change in the climatology of that location.
You and the other measurement deniers will forever have your position confounded by the fact that we meaningfully measure the things you claim we cannot, and we use these measurements for useful things.
So you didn’t grasp anything Clyde wrote, not much of a surprise.
I understood it quite well. That is why I’m able to dismantle his argument so easily. You should reread my comment above until you comprehend what I’m saying, or ask clarifying questions if it’s too much for you.
Only in the echoes of your fallacious and empty logic.
Not once have you mentioned measurement uncertainty or the variance of the distributions for which you calculate a mean. Hate to tell you, but every time you state a mean, you must also quote the variance or SD in order to properly characterize the distribution you are describing.
Your whole description did not mention any of this. Worse, it has no mention of calculating these. You can mention μ or x_bar, but without saying what σ² or u(x_bar)² is, you are not being scientific.
Only if you are measuring the same temps in the same spot with multiple instruments
Fail upon fail
If I want to measure the average height of people in a room, I can get a good estimate by measuring the heights of a bunch of the people in the room. I don’t need to measure the same person bunches of times (of course that might help, depending on how good I am at taking the measurements).
I see you’ve also swallowed the koolaid. Best of luck getting out of the cult of measurement denial.
The people don’t change, but air does.
Clown.
AJ doesn’t deserve the capitalization. small clown suffices.
That has no bearing on the instantaneous readings taken at numerous weather stations all around the world (which is analogous to measuring the heights of a bunch of people in the room).
🤡
I suppose every measurement is 100% accurate, i.e., ±0.0.
Almost definitely not, but the random measurement error will tend to cancel. Especially as your sample size grows. You need to be wary of systematic error, of course (say, your tape measure is a half inch off on every measurement), but systematic bias can be identified and addressed.
Sorry dude, measurement uncertainty doesn’t cancel. At best, the uncertainty may add in quadrature. You have to analyze the functional relationships to know.
Here is the big problem. With multiple measurements of the same measurand with the same device one can justify that the distribution surrounding the mean is modeled by: (μ ± ε). If ε is distributed normally you can argue for cancelation where μ is the true value.
When you have single temperature measurements, there is no distribution of multiple measurements surrounding each that you can say cancels. Additionally, if temperature readings differ by more than the uncertainty interval, no cancellation is possible. For example, if 70 ±0.5 is averaged with 71 ±0.5, the intervals don’t overlap, so no direct cancellation is possible and the uncertainties simply add to give 70.5 ±1.0.
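The quadrature rule mentioned earlier in this exchange can be shown with a two-reading example (values invented):

```python
import math

u1 = u2 = 0.5  # standard uncertainties of two readings (degrees C)

# Worst-case (straight) addition, divided by 2 for the mean of two readings:
u_linear = (u1 + u2) / 2                 # 0.5

# Root-sum-square (quadrature) addition, the GUM rule for independent errors:
u_quad = math.sqrt(u1**2 + u2**2) / 2    # ~0.354
print(u_linear, round(u_quad, 3))
```

Either way, the combined uncertainty of the average is not zero; the dispute in this thread is over which rule applies, not over whether an interval exists.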
You’ve changed the wording of my comment and are attempting to address a straw man argument. The uncertainty doesn’t cancel (that isn’t a syntactically meaningful statement), the errors tend to cancel, and this reduces the uncertainty. If the error is random, it is equally likely to be positive or negative. If I combine a very large number of measurements, there will be about as many negative errors as positive ones, in about the same proportion. Those errors are therefore not skewing my estimate. This is true even if the measurements are made with different instruments with different precision, provided the errors are not systematic.
It is incorrect to say that the errors do not cancel between two measurements of different magnitudes. If I take a measurement of 71 degrees and it is off by +0.5 degrees (the true value should be 70.5), then take a measurement of 70 degrees and it is off by -0.5 degrees (the true value should be 70.5), and I average those measurements together, my result will be 70.5, the true value. My estimate is not in error. The errors have canceled.
If I make a measurement of 75 that is off by +4.5 degrees, and a second measurement of 68 that is off by -2.5 degrees, my estimate will be 71.5, with an error of 1 degree. That is still a better estimate than either of the individual measurements.
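The cancellation argument being made here can be sketched numerically (all values invented; standard-library Python):

```python
import random
import statistics

random.seed(0)

# Many true temperatures, each measured once with a random error of
# +/-0.5 (uniform) and no systematic bias.
true_values = [random.uniform(60, 80) for _ in range(100_000)]
measured = [v + random.uniform(-0.5, 0.5) for v in true_values]

true_mean = statistics.mean(true_values)
measured_mean = statistics.mean(measured)
# Random errors largely cancel in the mean:
print(abs(measured_mean - true_mean))

# A systematic bias does NOT cancel -- the mean is off by the bias:
biased = [v + 0.3 + random.uniform(-0.5, 0.5) for v in true_values]
print(statistics.mean(biased) - true_mean)  # ~0.3
```

This illustrates the claim as stated; it does not settle the thread’s real dispute, which is whether field errors are in fact random and bias-free.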
You seriously need to read the “Guide to the expression of uncertainty in measurement”, JCGM 100:2008, and related documents.
Uncertainty adds and does not cancel directly. The method of adding uncertainty is similar to adding variances.
“True value” is no longer recognized, and there is a whole section devoted to determining the measurand. The document says, “The term true value (B.2.3) has traditionally been used in publications on uncertainty but not in this Guide for the reasons presented in this annex.”
One of the reasons for not using “true value” is because each measurement of the same thing has an uncertainty. One can never expect complete cancelation due to this uncertainty.
I’m not talking about the uncertainty, I am talking about the error. The random errors cancel, thus the uncertainty of the mean is smaller than an individual measurement. The more measurements you include in your estimate of the mean, the smaller the uncertainty becomes. As 1/sqrt(n).
Walk through my examples above again, slowly, and try to reflect on the implications logically.
More precisely, your “1/√n” relates the population Standard Deviation to the SEM via the equation:
SEM = σ / √n
Algebra lets one use the SEM, since the population statistical parameters are not known, to estimate the population Standard Deviation as follows:
σ = SEM • √n
The SEM also defines the interval within which the estimated sample mean lies. It is basically the uncertainty in the estimated mean.
“”””””The more measurements you include in your estimate of the mean, the smaller the uncertainty becomes. As 1/sqrt(n)”””””
Again, you refer to more measurements as “n”. It is not more measurements; it is increasing the size of the samples you obtain from the measurements you have.
Your comment leads one to believe that you are using one large sample with all the measurements in it.
This might reduce the standard deviation of the sample mean, but it would also increase the implied population Standard Deviation of the population of temperatures via the above equation, “σ = SEM • √n”.
It would be interesting to see how far this would exceed the temperature range of the globe from poles to equator.
Using different equipment, often different types of equipment, located in wildly different environments and still often badly sited, temperature stations with things living inside the station housings, etc., etc.
All of this makes your overly simplified “temperature measurement” a quagmire of unique datums.
Nor should one ignore that NOAA, MetO, BOM, and other government agencies frequently adjust both historical and current temperature measurements, destroying their value for weather or climate work.
Sampling error decreases with the number of elements in each sample taken from a population, not the number of samples taken.
In a monthly average of Tavg you basically have 30 samples of size n = 2.
And by the number of samples taken. If I have a room of 20 adults and I want to know the mean height, and I decide to average the average height of the men with the average height of the women to make my estimate, increasing the number of elements in the male sample will decrease the error, but so will increasing the number of people I draw for my total sample (say, 10 people versus 6).
There seems to be a micro-cult on this forum around denying this rather basic statistical concept, and it is fascinating to observe.
Show a reference for your assertion that the number of samples decreases the SEM.
The variance of the sample means distribution is dependent on the size of the samples. The number of samples only determines the accuracy of the SEM at the margin due to a larger number of sample means in the sample means distribution.
You all need to get your assumptions straight.
Just recently on another thread, we dealt with the idea that with ~9000 stations there was only one sample with n = ~9000. You seem to be implying there are 9000 samples of size n = ?.
Please provide a definition of the SEM.
As I just finished telling one of your fellow pseudoscientists, go read the GUM.
I am not asking because I don’t know the definition of the SEM, I am asking because I question whether Jim does.
OK, dude, I’ve given you two sites; now it’s your turn.
Give us two sites that show the SEM is NOT the standard deviation of the sample means distribution!
The standard deviation of the sample means distribution is equal to the population standard deviation divided by the square root of the sample size. You really have a pair of blinders on.
https://en.wikipedia.org/wiki/Standard_error
Backing up are we?
“””””And by the number of samples taken. “””””
“””””However the meaning of SEM includes statistical inference based on the sampling distribution. SEM is the SD of the theoretical distribution of the sample means (the sampling distribution). “””””
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4452664/
Use this site to verify.
https://onlinestatbook.com/stat_sim/sampling_dist/index.html
Please see equation 3 in your first link. Thank you for providing a succinct reference.
You didn’t read the document, did you?
“””””Therefore, the SEM is estimated using the SD and a sample size (Estimated SEM). “””””
“””””n: sample size”””””
“””””When the samples of the same sample size are repeatedly and randomly taken from the same population, they are different each other because of sampling variation as well as sample means (Fig. 2, Level B). The distribution of different sample means, which is achieved via repeated sampling processes, is referred to as the sampling distribution and it takes a normal distribution pattern (Fig. 2, Level C) [1,6,7]. Therefore, the SD of the sampling distribution can be computed; this value is referred to as the SEM [1,6,7]. “””””””
Read that last paragraph very, very carefully!
1) repeated samples from the same population
2) samples of the same size (not the # of samples, rather the SIZE)
3) distribution of the means of repeated samples
4) the sampling distribution
5) the SD of the sampling distribution is the SEM
Please note that the equation relating these is
SEM = σ / √n
Use the simulation link. Multiply the sample SD (SEM) by the √ of the sample size (n) you chose and you will see the result is very, very close to the population SD.
Don’t forget to post a link showing how the number of samples reduce the SEM in anything more than a small, small way.
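The numbered steps above can be run directly as a small simulation (population parameters invented, standard-library Python):

```python
import random
import statistics

random.seed(1)

# An invented population with sigma = 5.
population = [random.gauss(20, 5) for _ in range(100_000)]
pop_sd = statistics.pstdev(population)

n = 25                 # the sample SIZE (observations per sample)
num_samples = 10_000   # the NUMBER of repeated samples

# Steps 1-4: repeatedly draw samples of the same size, collect their means.
sample_means = [statistics.mean(random.sample(population, n))
                for _ in range(num_samples)]

# Step 5: the SD of the sampling distribution is the SEM, ~ sigma/sqrt(n).
sem_empirical = statistics.pstdev(sample_means)
sem_theory = pop_sd / n ** 0.5
print(round(sem_empirical, 2), round(sem_theory, 2))  # both close to 1.0
```

Multiplying sem_empirical by √n recovers the population SD, as the comment describes; raising num_samples only makes the estimate of the SEM steadier, it does not shrink it.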
The sample size is the number of observations included in the sample. Your point 2 is nonsensical, since the # of samples is the sample size. Your link shows that increasing this number (n) proportionately decreases the SEM. The part of the article you’ve quoted is saying that if you randomly draw the same number of observations from the population for your sample, they will be slightly different each time. That is, you’d get a sample with a slightly different mean each time. If you took the distribution of these sample means you would find it approaches normal.
I don’t need to provide a link to dismantle your nonsense since you’ve done it for me. You should try reading and rereading that document until you actually understand what it is saying.
You don’t appear to use precise language.
“””””The sample size is the number of observations included in the sample. Your point 2 is nonsensical, since the # of samples is the sample size. “””””
The first sentence is correct. The second is not. “”””the # of samples is the sample size””””” is not correct.
The number of individual samples of size “n” is not the same thing as the “sample size”.
If you use the simulation, it will become obvious that the “size” of a sample can be chosen as 2, 5, 10, 16, 20, or 25, and the number of samples as 5, 10,000, or 100,000.
The size of the sample is all that affects the SEM. The number of samples has little effect.
“n” is the sample size – it is the number of observations drawn from the total population. I agree that the size of the sample determines the SEM, in fact that is my entire point. I’m glad we’ve found agreement there.
You still haven’t provided a reference for your assertion that the number of samples decreases the SEM.
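The disagreement above is easy to settle with a quick simulation: hold the population SD fixed, then vary the sample size n and the number of samples drawn, and see which one actually moves the SEM. Here is a minimal sketch in Python (the population SD of 10 and the particular n values are illustrative assumptions, not from the simulation link):

```python
import random
import statistics

random.seed(42)

def sem_of_sample_means(pop_sd, n, num_samples):
    """Draw num_samples samples of size n from a normal population
    (mean 0, SD pop_sd) and return the SD of the sample means,
    i.e. the empirically observed SEM."""
    means = [
        statistics.fmean(random.gauss(0.0, pop_sd) for _ in range(n))
        for _ in range(num_samples)
    ]
    return statistics.stdev(means)

pop_sd = 10.0
# Increasing the sample size n shrinks the SEM toward sigma/sqrt(n)...
print(sem_of_sample_means(pop_sd, 4, 10000))    # ~ 10/sqrt(4)  = 5.0
print(sem_of_sample_means(pop_sd, 25, 10000))   # ~ 10/sqrt(25) = 2.0
# ...while increasing the number of samples only sharpens the estimate
print(sem_of_sample_means(pop_sd, 25, 100000))  # still ~ 2.0
```

Going from 10,000 to 100,000 samples leaves the SEM near 2.0; only changing n moves it, which is exactly what SEM = σ/√n says.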
Actually it doesn’t, but that lie has been convenient to the climate scammers.
As to understanding of past climates, it has been the climate scammers who have shown dramatic disconnects with the known data.
According to a new report, global warming will soon make the American South unlivable.
https://www.foxnews.com/media/nbc-meteorologist-cites-controversial-un-study-claiming-extreme-heat-make-us-south-unlivable-for-humans
Nobody seems to have told all the people who live in Central America, Southeast Asia and anywhere near the equator that they are already dead.
Story Tip
I sent that tip the other way, through the submit story link a couple hours ago!
The hype is so over the top now. There is another story out there about flooding in India, as if the concept of the Monsoon is alien to the “experts”.
Sorry, people have been questioning it for years, not very insightfully,
but finally Judith Curry decided to do it herself.
You can try to improve on that effort.
You know that the surface fabrications are totally unfit for anything but climate propaganda, don’t you Mosh..
And that the warming from urban development and expansion, airport exhausts, etc., could be anything up to several degrees Celsius.
And that the mathematical law of large numbers is not applicable to a sparse, constantly changing mess of tainted data!
The recently reported world record high temperatures are according to weather models that take in temperature measurements from weather stations, satellites, airplanes, ships and radiosondes aboard weather balloons. One source reporting a new record high attributes the determination to the ECMWF model, apparently a record since the start of 1979.
The truth may be too much for many to bear. It’s much easier to believe a comfortable lie, one that requires you to believe that the people in charge know what they are doing and that they are doing what is right.
If that does not hold, then the entire edifice crumbles. The supposed experts are, at best, incorrect and unjustifiably confident and, at worst, deeply malevolent.
People want to believe that someone else will take care of things, so they don’t need to worry about it. Even if the thing is utterly beyond anyone’s control.
At the end of the day, the rejection of religion in Western societies has a lot to answer for (not that organised religion doesn’t have significant problems). People’s lives have become meaningless and they yearn for some sense that their world is under control, so they happily cede control to people who promise to “save” them.
Most of the Holocene has been warmer than now. Earth is still recovering from the Little Ice Age Cool Period. Except for the brief meltwater pulse driven cold snap 8200 years ago, the long Holocene Climate Optimum, roughly 9500 to 5200 years ago, was warmer than now.
The early Holocene, 11.7 to 9.5 Ka, was warming up as ice sheets melted. The Egyptian, Minoan, Roman and Medieval Warm Periods were also balmier than now, as shown by global proxies. Only the five cool periods between the HCO and these WPs were more frigid than now.
That gives at most some 5000 years of the 11,700-year Holocene colder than now. Probably less. The cool periods were not happy times, although they drove advancements.
“Most of the Holocene has been warmer than now.”
Just repeating that in bold. 🙂
The slight warming we have been lucky enough to have had, has been from the coldest period in the Holocene
Thank goodness for that warming !
Here is incontrovertible proof of much warmer temperatures 5,000 years ago. This rooted dead white spruce is just outside the Arctic NW coastal Inuit village of Tuktoyaktuk. Today, the same species of living spruce is 100 km south at the treeline. One of this size is an additional 100-200 km south of that, where it is 6-8°C warmer (discount this by half because of Arctic Amplification for the delta T°C global temperature).
M. Mann and other doctrinaire tree-ring grifters know about this simple fact. Indeed, another white spruce stump of this age can be seen near Cambridge Bay on the coast of Victoria Island, an Arctic island. Tuk is west of this, about half way to the Alaskan border. There are lots of these ancient trees in Sweden and Russia. You don’t need to fiddle with questionable tree-ring ‘data’ if you really want to know what climate change has happened since the HCO.
Boreal forest takes time to adapt. It is currently moving northward at the rate of 40-50 metres a year.
It’ll get back to where it was in a couple millennia at that rate.
Historical Aspects of the Northern Canadian Treeline (ucalgary.ca)
Not relevant.
The point is that the world has seen such temperatures before and life not only survived, it thrived.
So the scare tactics being put out by the climate scammers are quite clearly wrong.
How much warmer must it have been when those trees grew?
There are no trees growing there now, and will not be for a very long time!
No, mid-Holocene Northern Hemisphere summers were warmer than they are today. Winters were cooler as were the monsoon regions of Africa and Asia. At 1.4C above the pre-industrial mean, we are probably currently warmer than 6000BC so we need to go back 125,000 years to find a climate as warm as today.
Wrong, you are regurgitating AGW mantra BS. !
There is plenty of evidence from all over the world that Holocene temperatures were far warmer than now for most of the last 10,000 years
This guy thinks RealScience is a science site.
Some of these guys have a hard time telling the difference between science and climate change propaganda.
Reading RealScience won’t help matters for them.
It’s believing realscience that destroys all logic..
A meaningless number because the thermometers of the 19th century were not up to the task.
The relevant papers are Osman et al. (2021) https://www.nature.com/articles/s41586-021-03984-4
and Thompson et al. (2022) https://www.science.org/doi/10.1126/sciadv.abj6535
Both of these use models. How anyone trusts models is beyond me. Also, everything in both (I can’t access the first one so going off past experience) deals solely with anomalies. Not one mention of a baseline temperature.
Consequently, there is no way to assess whether temperatures varied from warm to warmer or cold to warmer or any other combination. It would seem this is a very important point when comparing to present temperatures.
Griff?
“we are probably currently warmer than 6000BC”
“Probably”, huh? Not very definitive. Which means it’s not very scientific. You are guessing. Guessing is all climate alarmists have.
Not guessing, it’s all belief and biased opinions.
Climate alarmists tend to believe the fantasies and delusions programmed into their faulty models.
Who has to go back that far? How does anyone know what temperature extremes were two or more centuries ago when the planet was less densely populated? Who was keeping records in remote under- and un-populated areas? These proclamations of extreme weather and temperatures are just another example of the scare tactics so beloved by the climate extremists.
LOL! Tony Heller shows this today:
July 10, 1936 105F In Ontario | Real Climate Science
That is a point worth pondering.
We are told that Climate Change®, even a few degrees of additional warmth, will wipe out millions of people. You don’t have to hear this from Sam Kinison but “MOVE! GO TO WHERE THE FOOD IS!”. That is, without first world conveniences and agricultural techniques, people don’t settle in places that are inhospitable to human life; they go to where there is a combination of social safety and environmental compatibility – like reliable water supplies, tillable ground, timber, etc.
So places of extreme weather likely are not going to see much human flourishing or people keeping meticulous records in standardized units of thermal measurement.
We do have eye-witness accounts of currently desert-like areas that once abounded with vegetation and critters millennia ago. Apparently things have changed since Babylon, Nineveh and Damascus supported massive populations of wealthy people.
Thanks to Steve Milloy.
Two articles appeared in The Irish Times yesterday
Climate change ‘out of control’, UN says, after hottest week on record
The journalist writes:
Minister for Climate Eamon Ryan told the Mary Robinson Climate Conference in Ballina, Co Mayo, yesterday the temperature trends risked “unravelling of the natural systems on which our security depends”.
and
Protesters at Ballina climate conference repeatedly heckle Mary Robinson and Eamon Ryan
He writes not about protests against the alarmism but because these two have not done enough to push alarmist actions:
Oisín Coghlan, chief executive of Friends of the Earth, said he was increasingly scared about climate change and that “being moderate about the issue has not worked”.
Addressing the conference, he added: “Disruptive direct protests and actions clearly have a role to play. Actions need to be increasingly unreasonable in order to save the future for ourselves and our children.”
The journalist is Kevin O’Sullivan, Environment and Science Editor and former editor of The Irish Times.
He has swallowed the climate alarmism, hook, line and sinker. Unlike a good investigative journalist or true scientist he has not asked any hard questions of these alarmists and appears ignorant of the work of Clintel, who are supportive of and contributors to a monthly video conference in Dublin.
The 97% of scientists that rely on government funding for their studies all agree that every year brings proof of global warming, including the highest temperature ever here and there. That fits well with their all agreeing that low clouds currently cause cooling, but that as temperatures increase the clouds rise higher and move poleward, and those resulting higher clouds cause warming. But at what elevation and how far from the equator the transformation occurs “is difficult to establish” (not to mention the year; decade? century?)
Good for Steve Milloy to throw this idiocy on the Junk Science scrap heap, along with other invented data nonsense. CNN International is all in with the fake news of hottest ever, and, unfortunately, a lot of countries transmit CNN as the official news of the USA (and somewhere in their comments they make sure their audience knows it’s Trump’s fault).
And WUWT has our very own CNN Simon who dutifully regurgitates all the standard talking points, blaming everything on DJT (who is “going down” realsoonnow), and telling anyone who disputes the standard lines they are stoopid.
“Seasonal changes and regional disparities, as Milloy points out, severely compromise the concept of a singular ‘global temperature.’”
Indeed so. Which is why I thought it was strange when WUWT decided to promote the idea of a real world global temperature
https://wattsupwiththat.com/2023/03/12/new-wuwt-global-temperature-feature-anomaly-vs-real-world-temperature/
with results that are still prominently displayed on the front page.
That is why scientists generally focus on the global average anomaly. But that is getting high as well. June, at 0.95C (base 1961-90) was by a long way the warmest June in the record, ahead of 0.801 in 2020.
And to ‘fix” this terrible problem, instead of just producing more reliable electricity and HVAC systems for ever more poor people every year in every country worldwide, we MUST build useless unreliable “renewable” electricity generators and massively expensive storage capacity that can’t store excess electrons from the high production times to the low or NO production times.
All of this costing massive amounts to transfer the 10% to 20% “off the top” that goes to the super wealthy and their well bought politicians.
And I again ask you Nick, why do you hate the poorest among us, and love the richest (oligarchs, you know, Gates, Buffet, the google and Facebook scum, Amazon cronies, etc.) and most corrupt (politicians) among us as by supporting this unscientific fraud of Anthropogenic Global Warming?
No, you are NOT allowed to call it climate change, that is not how they started they whole scam, that is just moving the goalposts.
And finally, how much does government funded research on the settled science of AGW pay you every year? All of what you earn? Just asking for a friend.
What problem? No climate emergency. Maybe it’s to our credit that the climate is improving but probably we’re just lucky.
“But that is getting high as well. June, at 0.95C (base 1961-90) was by a long way the warmest June in the record,”
You betray your lack of training in physical measurements. You indicate that scientists focus on the global anomaly.
An anomaly is not a temperature; it is a change in temperature. To call it hot is incorrect. You imply that anything above the baseline is hot, and by deduction one can say that the baseline is the best temperature for the globe and humans. Not aware anyone has actually said or even implied that.
Your statement also fails to address that people are calling it the hottest month EVER. You should point out to readers the time frame that actually has monthly temperatures for the globe.
“Not aware anyone has actually said or even implied that.”
Nor am I. You made it up. The relevant figure is that June 2023 was 0.15C or more warmer than any previous June since 1900.
In UAH there are 21 months with an anomaly more than 0.38C
Why are you SO, SO PETRIFIED of 0.38C warming out of a very cold period around 1979 ??
If my memory isn’t mistaken, it was cold in 1979, but I may be confused because this year was just as cold.
1976-79 had several years of deep snow in the winter. Something maybe of a transition period that saw warm spikes in the summer and cool spikes in the winter.
I believe that Science News frontpage with “Ice Age Cometh?” on it appeared about 1974 or 1975.
Yes, it was cool in the 1970’s, but not unpleasant, other than maybe a few larger snow storms, but that was also the time of the Human-caused Global Cooling narrative where some climate scientists were speculating that humans were causing the Earth’s climate to cool.
When the Earth started warming in the 1980’s, some of those climate scientists who were on the Human-caused Global Cooling bandwagon, got off, and got on the Human-caused Global Warming bandwagon. I guess they were just following the trend. 🙂
Among the months where the UAH v6 TLT record has an anomaly exceeding .38 C, only one of these was a past June, June 1988 during one of the two greatest El Nino global temperature spikes since the one of 1878. And, ENSO spikes are spikier in the satellite-measured troposphere than at the surface.
Also, ENSO spikes have a tendency to affect global temperature more earlier in the year than June. And, absolute global average temperature has a seasonal component, with warmer when the northern hemisphere (which has most of the land) is having summer. Although global temperature recently set a new instrumental-era record during northern hemisphere summer, I expect global surface temperature datasets to show new record high anomaly early in 2024.
Make that 1998, not 1988, oops typo!
Nick if the global average temperature eventually rises another 2.5°C, CO2 reaches 1120ppm, and we finish transitioning to thorium reactors before we run out of economically extractable fossil fuels, what will be the harm in your view?
Sorry dude, if you are going to imply that an anomaly of 0.95° C is an indication of a temperature that is too hot, then the baseline is the appropriate temperature for the globe.
I don’t imply that.
Then 0.95 above the baseline is an entirely normal, nothing-to-worry-about temperature?
Who cares? Would anybody want to continue the devastating cold of the Little Ice Age? About 1+℃ above the coldest period of the Holocene has been shown to be very beneficial to plants and animals alike. The current (justifiable?) projection of another degree or so by the end of the 21st Century should, likewise, prove to be beneficial. That, and the benefits of additional fertilization by increasing atmospheric CO2 concentrations, should make anybody concerned about the welfare of humanity very happy, indeed.
The apparent millennial-scale cyclic oscillations of global warming and cooling periods are being totally ignored by the socialist UN IPCC and all the Leftist governments around the world. UN IPCC CliSciFi climate models cannot duplicate the cyclical millennial-scale temperature profile of the Holocene. Yet they are telling us unequivocally that all warming since the Little Ice Age has been caused by mankind’s emission of GHGs and land use change. All with the goal of replacing free-market capitalism with command and control economies.
Annual average temperatures
Denver 9.0°C
Boston 10.1°C
New York 11.9°C
Atlanta 16.1°C
Miami 24.6°C
If Denver were warmed up to New York that would be a 2.9°C rise.
If Boston were warmed up to Atlanta, that would be a 6°C rise.
And the alarmists want us to panic over 1.5-2°C above pre-industrial times? Seriously?
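For what it’s worth, the spreads quoted above check out arithmetically; a trivial sketch (city values copied from the comment):

```python
# Annual average temperatures (deg C) quoted in the comment above.
temps = {"Denver": 9.0, "Boston": 10.1, "New York": 11.9,
         "Atlanta": 16.1, "Miami": 24.6}

# Warming Denver to New York's average is a 2.9 C rise;
# warming Boston to Atlanta's average is a 6.0 C rise.
print(round(temps["New York"] - temps["Denver"], 1))  # 2.9
print(round(temps["Atlanta"] - temps["Boston"], 1))   # 6.0
```

Both everyday differences comfortably exceed the 1.5-2°C that is supposed to be catastrophic.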
I wouldn’t mind it if Greenland could grow barley and brew beer again.
Because getting global temperature more than 2 degrees C warmer than the 1850-1900 baseline would cause a problematic sea level rise.
“The relevant figure is that June 2023 was 0.15C or more warmer than any previous June since 1900.”
Not here in Oklahoma. This June isn’t even close to being the hottest recorded.
GHCN.. all the worst surface sites, combined in the worst possible way.
It is meaningless twaddle except for propaganda purposes.
And I suspect that Nick is well aware of that fact.. shill and all.
“GHCN.. all the worst surface sites, combined in the worst possible way.”
Pat Frank has pretty much rendered that temperature record to be meaningless.
Nick, A little perspective … The oldest person alive today was born in 1916. According to NOAA the world has warmed 1.2 °C since then. If all that warming occurred in the room you were sitting in, over the time it took you to write your post, you probably could not have detected a change in temperature.
Correction. The oldest person alive today was born in 1907. She is 116 years old this year. The world warmed 1 °C since 1907.
A little more perspective. The last glaciation was about 6 °C cooler than 1916. That might make you put on a jacket. But it made a big difference to the world.
Whereabouts? Wouldn’t have made such a huge difference in the Galapagos Islands for example.
In the present climate, we have grown to 7 billion. We can’t all move to the Galapagos.
But we sure the hell can move South of the last Glacial Maximum in the Northern Hemisphere. Anyway, who cares? We won’t be around. Why does the world need busybodies to worry about everybody else?
No, we need some warming so that the populations can expand to areas that are currently too cold for most human habitation and crops.
Mankind thrived and prospered during the last glacial maximum. Man managed to migrate throughout the world in spite of the glacial maximum.
Mankind can deal with the next ice age by adaptation. Even 7+ billion can adapt if they have sources of high quality reliable energy, unlike that produced by alleged renewables.
No need at all to move to Galapagos!
If you are still pushing the false mantra of global warming, Earth’s temperatures have far to go to match the early and mid-Holocene temperatures.
To your original point, that is a good demonstration of the absurdity of a global average temperature. Only six degrees is enough to turn the North into barren tundra with miles-thick ice caps. But the 1 °C of warming over the past 100 years had no significant impact on the state of the global climate, and it’s very unlikely that another 1 °C would either. The only notable difference in the past twenty or thirty years of intense monitoring, aside from a fraction of a degree of warming, is a reduction in cloud cover, which could turn out to be the primary cause of the recent warming, with CO2 playing only a minor role.
Nick says ‘The last glaciation was about 6 °C cooler than 1916. That might make you put on a jacket. But it made a big difference to the world.’
The temperature of the Earth varies by 4 degrees up and down each year without killing us.
This year the temperature of the Earth was 13.14 degrees on 1 Jan 2023 and went up to literally “the hottest day ever in 125,000 years” at 17.2 degrees in July 2023
A 4 degree increase.
And we still survived!
https://climatereanalyzer.org/clim/t2_daily/
You pulled this number out of your hindside.
I wondered where he got that.
Well! Isn’t that specious!?
Somehow calculate a global anomaly for the last ice age. Then try to use that falsehood to prove global warming…
There is the matter that significantly exceeding 2 degrees C of global warming above the 1850-1900 average (averaged over at least 30 years) would cause sea level rise, which is a big problem now that we have coastal communities built on year-round sea level being stable over multiple decades to even centuries.
Sea level rise is around a pretty constant 1.7-1.8mm/year.
Grow up, and avoid the chicken-little, headless-chook type PANIC!..
Its laughable.
Anyone who uses the surface data when debating climate is a clown. Its flaws are well known at this point. The fact is that the best dataset to humanity shows that June 2023 was colder than June 1998.
“The fact is that the best dataset to humanity”
UAH may be the best for very high flying birds, though RSS tells a very different story. But it isn’t much use to humanity, which lives down here.
Of course RSS shows a different story,
It has DELIBERATELY been mal-adjusted, using climate models, to try to fit the climate scam.
The poor guy in charge couldn’t hold onto his scientific integrity and honesty against the ranting and yelling of his fellow alarmists. Sad.
UAH is VALIDATED against balloon and other data, and sample-validated against the only pristine surface data, USCRN.
Anyway.. I thought CO2 was meant to cause atmospheric warming… yet the fabricated and mal-adjusted surface, warms faster.
You can’t get a thing correct, can you…
Can’t keep the lies and deceits in line..
As usual, you have nothing except your mantra as a climate agenda shill.
Yeah, and you don’t hear Nick disparaging RSS.
The UAH satellite measures from the surface to the upper atmosphere, just like the weather balloon data, with which it correlates at about 97 percent (that’s not a joke number).
Does RSS or any of the surface “temperature” data sets correlate as closely to the weather balloon data? Answer: No, they don’t, otherwise they would be UAH.
Perhaps you have not heard yet, but NOAA has recently completed a reanalysis of its STAR temperatures and it now pretty much agrees with UAH. That makes RSS the odd man out.
As to satellites not being representative of surface temps, maybe you don’t understand increasing temperatures will show throughout the lower part of the atmosphere.
Well, the upper troposphere is where a faster rate of warming should occur. And since we don’t have any reliable data for “down here”, we’re stuck with only UAH.
Exactly. The tropospheric hotspot.
Are you now being paid to claim that there is no correlation between temperatures up there and temperatures down here?
I do notice that you still ignore the many, well documented, problems with the ground based temperature network?
N. S.: I mostly agree with you, but I have found UAH v6 reasonable and better than RSS v4, especially in terms of how their authors argued over who has better correlation with radiosondes aboard weather balloons, and which of these datasets has better correlation with the JRA-55 and ERA5 reanalyses. I noticed Mears of RSS arguing in favor of RSS v4 including an older-tech satellite with an outlier warming trend on the basis of improving correlation with most climate models, which have lots of problems, including a majority overpredicting global warming. Radiosonde data that I looked at seems to indicate “troposphere truth” is at least 75% UAH v6, at most 25% RSS v4.
Good comment.
Really!?
What story is that!?
Then, there are the reasons RSS tells a different story, all alarmist model fantasies and global delusions.
Changes from RSS v3 to RSS v4
Nah, not at all agenda driven 😉
By any chance is your “best dataset to humanity” USCRN? That actually has impressive agreement with USHCN-adjusted. Meanwhile, June in the 48-states US was not unusually warm the way this June was globally. The 48-states US easily has temperature diverging from global temperature even for a whole month sometimes more, for example in June 2023 according to

No, the USCRN is adjusted to match the other set. Given that at least 96% of US weather stations are artificially corrupted, it’s pretty much impossible for those two to have such close agreement. Those who argue for pairwise homogenization either don’t realize that unless you are working with weather stations free of homogeneous bias you are adjusting bad data to look like other bad data, or they deliberately ignore it and just say it’s good data.
Can you cite USCRN being adjusted to match “the other set”? Proponents of USCRN say it’s unadjusted because it doesn’t need adjustments.
“The 48-states US easily has temperature diverging from global temperature even for a whole month sometimes more,”
The U.S. has been cooling since the 1930’s, according to James Hansen (before he became a temperature data manipulator and climate change liar).
Hansen 1999:
“has impressive agreement with USHCN-adjusted”
ROFLMAO.
Yes, USCRN is now controlling the US temperature.. so they have levelled off.
How stupid would the scammers look if the fabrications of USHCN kept getting warmer and warmer, while USCRN, the pristine sites, went nowhere.
WAKE UP !
Which number above did you apply the “think of it and double it” rool ( i.e. the Molesworth Institute of Applied Science Guesswork )?
Why do you read WUWT, Stokes?
For a laugh, I guess. That’s why I do it.
Your comments are certainly only ever worth ridicule.
You get totally trashed on any aspect of science… every time.
So you run off giggling to yourself like a 5 year old girl…
Me, too! I laugh every time I read an article at WUWT, for one reason or another. There are some really humorous people here, for one reason or another. Some of them don’t intend to be humorous, but they are. I’m laughing now, writing this comment.
And I come to learn something, too.
He mostly doesn’t read. It’s evident in his comments above and in other threads that NS avoids much of the reading and mostly focuses on commenting.
We once learned in English Literature class that many poets were paid by the word. One wonders about how trollop salaries are calculated.
How exactly is the above chart promoting the notion of an accurate, world wide temperature average?
Nick, are you now being paid by the post? Because the quality of your lies has been going down.
“Average global temperature is a concept invented by and for the global-warming hypothesis. It is more a political concept than a scientific one.” – That!
“Average global temperature is a concept invented by and for the global-warming hypothesis. It is a political concept.”
There, fixed it for both you and him.
First, the temperature change was statistically insignificant. The previous record of 16.92°C (62.45°F), set in August 2016, was exceeded by just 0.09°C (0.17°F), reaching 17.01°C (62.6°F). An insignificant temperature change, but hyped in click-bait media as sizzling heat.
Also, understand how the natural climate oscillation between a La Niña-like ocean and an El Niño-like ocean affects global temperatures.
NOAA’s 3.4 index shows that, due to that oscillation, La Niña-like conditions caused tropical Pacific temperatures in June 2022 to be -1.0°C (-1.8°F) below average; by June 2023 ocean temperatures had risen to +0.5°C (+0.9°F) above average. A difference of +1.5°C (+2.7°F).
This doesn’t mean the world got hotter. It means cold sub-surface water that typically upwells during La Nina-like conditions was no longer reaching the surface and lowering the average.
Jim, doesn’t the Pacific warm pool vent heat into the atmosphere during an El Niño? Doesn’t that mean the atmosphere/ocean system is cooling, as that heat radiates away to space? The atmosphere warms when the heat moves through it, but the overall system cooled. If all that is true, it’s another indication of the absurdity of a global atmospheric temperature. It goes up when the system cools!
The further north the warming occurs, the more effective the radiating to space is.
Global temperature recently slightly exceeded the previous record in 2016 (according to some global temperature datasets) that depended on one of the two greatest El Nino global temperature spikes after the one of 1878.
Not much to add to Milloy’s takedown.
A few supporting observations.
Confounding weather with climate is about all alarmists have got left. Nothing else they predicted about climate change in the past 35 years has come true.
Great stuff as usual, Rud. And we should also consider: 16.92°C plus 273.15 is 290.07K, and 0.09 divided by 290.07 gives us 0.00031, i.e. a 0.031% change, or 31 hundred-thousandths. Big Whoop-De-Doo.
Likewise, 1.2 divided by 290.07 is 0.0041, i.e. 0.41% for the temperature increase since the Little Ice Age. Using ℃ or ℉ gives bogus percentages only fit for propaganda.
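The arithmetic above (a Celsius change expressed as a fraction of the absolute Kelvin baseline) can be sketched as follows; the 16.92°C baseline is simply the August 2016 record figure quoted in the thread, used here as an illustrative reference point:

```python
def pct_change_vs_kelvin(delta_c, baseline_c):
    """Express a temperature change (deg C) as a percentage of the
    absolute (Kelvin) baseline, rather than a bogus Celsius percentage."""
    return 100.0 * delta_c / (baseline_c + 273.15)

# The 0.09 C record margin against a 16.92 C (290.07 K) baseline:
print(round(pct_change_vs_kelvin(0.09, 16.92), 3))  # 0.031
# The ~1.2 C warming since the Little Ice Age:
print(round(pct_change_vs_kelvin(1.2, 16.92), 2))   # 0.41
```

Dividing by the Kelvin value matters because percentages against a Celsius baseline depend on the arbitrary zero point of the scale.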
The recent global temperature record happened according to multiple sources, including the ERA5 reanalysis that uses the ECMWF weather model. https://twitter.com/CopernicusECMWF/status/1676934454877495296
July 124977BC was a particularly cool month, globally, partly because of exceptionally high volcanic activity over the preceding 17.5 months. It is absurd to claim that 3 July 2023 being 0.01 deg C warmer than 3 July 124977BC is in any way significant.
Very funny!
But did you account for the fact that we using a different calendar in 124,977 BC?
“Hottest Day Ever” clearly means ‘Hottest Day in the Relevant Period’. That is relevant to us, not any other species.
What is ‘the Relevant Period’?
One of these perhaps;
A) Since hominids evolved.
B) Since modern humans evolved.
C) Since trade was invented.
D) Since agriculture was developed.
E) Since navigation beyond sight of the coasts was common.
F) Since the nation state was invented.
G) Since the agricultural revolution.
H) Since the industrial revolution.
I) Since the mastery of electricity.
J) Since the standardisation of global trade (the packaging container).
K) Since the invention of the internet.
L) Since the world’s population reached 7 billion.
M) Since the world’s population reached its peak (not yet…)
N) Since some other starting point that you can define and justify.
In conclusion, the definition of the “Hottest Day Ever” is subjective, not objective.
It sounds like an objective measurement, but what that measurement is compared to… is subjective.
No. The surface of Earth was hotter on July 3rd than it was when the surface consisted of magma oceans. Stop denying science!
https://en.wikipedia.org/wiki/Magma_ocean
OK. Mankind was wiped out when the surface consisted of magma oceans.
Thank God the Elves and Pixies recreated humankind after that.
And thank you for defending science.
There’s no such thing as Elves and Pixies.
Tony Stark must have snapped his fingers again.
Can we have an average temperature for an ocean, a continent or an island? How about for just a town, or just at a single measuring station? At what physical dimension, or what number of measuring stations, can we no longer calculate an average?
Unless the sites remain exactly the same, you certainly cannot compare temperatures over time.
And we know that a very large proportion of sites have been changed, both physically and in their surroundings.
The surface sites are totally unsuitable for determining any global temperature change in any meaningful way.
Yeah, but the corrections the keepers of the data make always lead to a warmer result as demanded by their paymasters. In a couple-three decades it would be fun to read the memoirs of some of the data manipulators and CliSciFi climate model jockeys.
This statement is demonstrably false:
Are you honest enough to admit that?
Are you honest enough to admit it made the warming steeper from 1960…
or can’t you understand your own charts.
And that they got rid of the 1940s peak, which existed once but had to disappear.
You have NOTHING, as usual, gormless oaf !
And that black line is NOT raw data, it is already mal-adjusted many times.
And nothing from Zeke can be taken as reliable or trustworthy or truthful… he is a low-level climate con-artist of the worst type.
“And that black line is NOT raw data, it is already mal-adjusted many times.”
Good point.
Is this the best response you can mount? I think you might benefit from doing a little self reflection into why you cling to ideas you cannot defend.
Here is the code to generate the above analysis yourself:
https://goonlinetools.com/snapshot/code/#yjdt53cu83c0vfuucnrz4ag
LOL, they really have you CONNED don’t they
They are all based on the same surface data, which is TOTALLY UNFIT for any sort of reliable climate measurement… and apart from that, there is massive COLLUSION to fake the data.
We know that from the ClimateGate emails.
The fabricated data sets are an agenda-driven load of mal-adjusted, manipulated, tainted data.
It is JUNK that bears basically zero resemblance to any real temperatures.
I suspect you are well aware of that, but your pay doesn’t allow you to say the truth.
I’ve shown you analysis of the raw data that I compiled myself. It is beyond absurd to try and suggest that the data have been manipulated improperly. Review my computer code. Point out the manipulation. Or you can save yourself the embarrassment and admit that you are wildly out of your depth in this discussion, but I have a feeling you won’t spare yourself that.
And, again, you are completely ignoring the fact that the raw data show more global warming than the adjusted data, and ignoring the fact that the adjustments are equally as likely to lower the trend as to increase it, both points that completely and irrevocably destroy your position.
Raw data from most of the NH and many places in the SH show 1940s temperatures similar to those around 2000.
The graphs are FAKED to remove that warm period.
We know that for a FACT.
Pity you don’t pay more attention to the scam the climate glitterati are perpetrating.
Or are you PAID not to notice !
I am showing you the raw data for the entire globe. It does not show similar warmth in the 1940s compared to today. If you think the analysis I’ve provided above has been faked please prove it. I’ll eagerly await the comedy that is sure to result.
And even a blind monkey can see that after 1979, they bear no resemblance whatsoever to real temperatures, anywhere.
They are totally FAKE. !
Do you think any of them are going to come clean?
I have my doubts.
I’ve yet to see a skeptic anywhere properly understand what a global “average” is.
Hint: it’s not an average, but rather an expectation, or prediction.
When I tell you the average 2-metre air temp is 16C, that is operationally defined as the following:
if you pick any number you like, X, and you pick any x,y location you like, and you use a perfect instrument, you will record a temperature Y such that 16 – Y is less than X – Y. In other words, 16 is the best prediction we have of the temperature at a randomly selected location.
THAT is the precise operational MEANING of average global temp, and there is only one way to challenge it.
You cannot criticize what you misunderstand.
X = 15, Y = 14, C = 16
16 – 14 < 15 – 14
2 < 1 (false)
____________________
X = 15, Y = 14, C = 18
18 – 14 < 15 – 14
4 < 1 (false)
____________________
X = 20, Y = 18, C = 16
16 – 18 < 20 – 18
-2 < 2 (true)
____________________
X = 20, Y = 18, C = 18
18 – 18 < 20 – 18
0 < 2 (true)
____________________
Guess what, your little math trick works for any global temperature below X that is chosen. It certainly doesn't prove "16" is the only choice!
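A quick numerical check makes that point concrete (a sketch in Python, not from the thread): taken literally, without absolute values, the stated inequality (C − Y) < (X − Y) reduces to C < X, so any candidate "average" below X passes it.

```python
# Sketch: the stated inequality (C - Y) < (X - Y), taken literally
# (no absolute values), reduces to C < X, so it cannot single out 16.
def satisfies(c, x, y):
    """True when (c - y) < (x - y) holds as written."""
    return (c - y) < (x - y)

print(satisfies(16, 20, 18))   # True: 16 < 20
print(satisfies(-40, 20, 18))  # True: even -40 "passes" the test
print(satisfies(16, 15, 14))   # False: fails whenever C >= X
```

Whatever operational definition was intended, as written the condition constrains C only to lie below X, which is why it cannot identify a unique global average.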
Yes, Steven. Sure, Steven. (backs away slowly)
“such that 16-Y is less than X-Y” only means that X is greater than 16. Nick, you’re going to have to describe the derivation of globally averaged temperature better than that.
Nick probably can, but Mosh can’t.
I’m waiting for his buddy bignotingtechnician to come along and tell us why he’s right.
Why don’t you and Stokes get a room?
The trans-child produced would inherit some serious mental issues.
Oh look, Mosh, a total non-mathematician, pretending he understands mathematics by inventing spurious nonsense.
The fact you even think a realistic or meaningful “global average” can ever be calculated from a sparse ever-changing set of surface site readings, is…
… to put it mildly…. TOTALLY HILARIOUS !
I don’t think one can measure the temperature of a planet.
To quote —
But temperature averages are not meaningful, except as a hand-waving proxy for heat (Joules). That’s because temperature is an intrinsic property of matter and the world is composed of different materials, in different phases, is heated on one side and rotating.
Q. What is the average temperature of an iceberg, a lake, a building, a mountain, some water vapor and a large rock?
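A toy calculation (hypothetical numbers, not from the comment) makes the intensive-vs-extensive point concrete: a plain average of two temperatures says nothing about the heat the two bodies actually store, because their heat capacities differ.

```python
# Sketch: averaging temperatures of different materials is not a
# proxy for heat content. Temperature is intensive; heat (Joules)
# is extensive. All numbers below are invented for illustration.
water = {"mass_kg": 1000.0, "c_j_per_kg_k": 4186.0, "temp_c": 10.0}
rock  = {"mass_kg": 1000.0, "c_j_per_kg_k": 800.0,  "temp_c": 30.0}

naive_avg = (water["temp_c"] + rock["temp_c"]) / 2  # plain average: 20.0

def heat(body):
    # Joules relative to 0 C, for illustration only
    return body["mass_kg"] * body["c_j_per_kg_k"] * body["temp_c"]

# Heat-capacity-weighted temperature, proportional to stored heat:
weighted = (heat(water) + heat(rock)) / (
    water["mass_kg"] * water["c_j_per_kg_k"]
    + rock["mass_kg"] * rock["c_j_per_kg_k"]
)
print(naive_avg, round(weighted, 2))  # the two "averages" disagree
```

The naive average (20.0) sits well above the heat-weighted figure (about 13.2), because the water dominates the heat budget while counting equally in the plain mean.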
What is the smallest particle known to humans?
Years ago I had an equation that proved that 1=2.
The trick was that buried in the processing, there was a divide by zero that was hidden by replacing real numbers with letters.
This is easily disproved using the logic taught in high school geometry.
Assertion:
(16 – Y) < (X – Y)
1) Add “+Y” to both sides
2) (16 < X)
3) Let X = 15
4) (16 < 15) [false, since 16 is greater than 15]
A more general statement is:
(C – Y) < (X – Y) when (X > C)
There is no proof here as to what an absolute global temperature is.
There appear to be some formatting problems 🙁
I tried this twice and it looked OK last night. I tried it again and am having to edit it. Greater-than and less-than signs seem to mess the editor up.
Basically, Item 4 should show “16 less than 15”.
This counter example proves the statement FALSE.
It is necessary to place a restriction on values in order to make the equation true in all cases. That restriction would be “X greater than 16”.
A more general equation would be:
(C – Y) is less than (X – Y) when X is greater than C.
There is no proof here as to what an absolute global temperature is. As long as X is greater than C, temperatures could be anything.
Use the unicode characters and the WUWT test page under the ‘About’ menu drop down item in the menu.
There you can test the formulae before placing them into a comment thread.
“Special characters in comments: Those of us who remember acceptance of ASCII-68 (a specification released in 1968) are often not clever enough to figure out all the nuances of today’s international character sets. Besides, most keyboards lack the keys for those characters, and that’s the real problem. Even if you use a non-ASCII but useful character like ° (as in 23°C) some optical character recognition software or cut and paste operation is likely to change it to 23oC or worse, 230C.”
There are numerous unicode sources, https://www.webnots.com/alt-code-shortcuts-for-superscript-and-subscript/
My PC blew up and it will be a while before I can afford a new MB, Memory, and CPU. I am using an Android tablet and something in it just doesn’t handle the less than and greater than signs correctly. I have a bunch of math characters stored in my Onenote app so I’ll have to put those in it and use copy and paste rather than the characters on the keyboard.
“…and it will be a while before I can afford a new MB, Memory, and CPU”
Admittedly, we all have our travails and unexpected circumstances. But as with your blown PC, this just does not compute. Between you and Tim, at least one of you claims engineering cred, and at least one of you lives in or comes from a low cost, midwestern state. From previous comments, I presume that you are mature, and perhaps retired. So, WTF do you not have $ saved for a relatively modest purchase? Certainly one that you depend on for your quite frequent contributions here. Apparently Willis, from his comment about being economically stuck in the Cal sticks, is in the same leaky boat.
Your Honor, the question goes to general credibility and judgement….
Nice ad hom dude.
AGAIN, bad luck happens. But WUWT posters sure seem to have more than their fair share.
Not ad hom by any measure. After an “engineering career” in a low cost state, you’re still in that group that can’t come up with $400. Willis is economically stuck in the sticks in Cal, and can’t get up/out. David Middleton is on the chopping block at his failing firm. It all goes to the fact that many of the self anointed experts here don’t seem to be able to manage their personal affairs. It taints their overall cred. Cred assessment of posters anywhere is key, and lots of fails in WUWT.
Again great ad homs. You do know that is an argumentative fallacy don’t you?
You have no idea about any of the circumstances in any of these people’s lives, including mine.
You are making assumptions based upon nothing. Same thing you do with climate temperature measurements
So the average is not a real number, it’s just your best guess as to what the number should have been.
All the more reason to ignore it.
Why should we bother, when you have been spanked on this topic many times? Obviously, you misunderstand what you are trying to promote.
your sons and your daughters are beyond your command
So, the new “hottest temperature ever” is a SWAG (scientific wild ass guess).
Yup but not so much of the ‘scientific’.
Let’s call it P-SWAG: Pseudo-Scientific Wild Assed Guess.
I like that!
Thank you, it only took 75 years of development.
Agenda driven, most certainly
I suspect someone new has taken over the data fabrication done at U. Maine, and is a rabid PAID alarmist.
People keep telling us it’s “the hottest evah!” as though it’s a bad thing. If FF have helped warm the world a little bit, that’s fine and dandy with me.
How hot is it really?
I heard the prick being depicted every morning I was near somebody’s radio, which wasn’t often given our combat missions.
Love that “Martha and the Vandellas” song!
Have you all noticed the incessant barrage coming from every direction in the past week about this nonsense? It’s coming from all the usual suspects, and they are piling and pouring it on. It’s ridiculous and nonsensical, but so many of their yup-yup acolytes guzzle it up and call anyone who is incredulous names.
Leftwing billionaires are paying a lot of money to news organizations to propagandize the public over human-caused climate change.
If there were a good spread of good station data, averaging a continuous temperature measurement, then that would be an indicator of global temperature change. I suspect, though, that even a station for every 10,000 km² would not be enough for 0.1°C precision.
The average might not be something with physical meaning but it would be safe to assume that with an almost infinite number of stations evenly spaced, an increase is because of increasing heat energy in the lower troposphere rather than a change in how the energy is distributed.
Even the satellite measurements are not quite good enough that the final results aren’t affected by subjective opinions on how to make the most out of data not quite fit for purpose.
But when you use the average of just minimum and maximum temperatures, which can vary by degrees over short distances, and then you need to fabricate a well-spread record from stations that have a poor record and are concentrated in high-population areas, it becomes a sad joke. You can’t treat it as an intensive property. The temperature of a uniform sample is an intensive property, but this average isn’t. So not only is the average meaningless, any method to infill for missing data is highly dubious.
I looked at my local stations. The daily maximum temperatures can vary randomly between two stations only 8 km apart by a degree C in the summer months. Each one usually has a maximum over half a degree more than the highest half-hour reading given. The difference between the two would be due to things like different cloud cover at that particular point in time.
I live only 2 km from one and despite frost being every where one morning, covering the roofs of cars, the station only recorded one half hour reading below 0, and that was half a degree.
The differences are not purely random error, and the average of many samples does not reduce such errors.
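The statistical point can be sketched with invented numbers: averaging many readings shrinks independent random noise, but a shared systematic bias (siting, shading, instrument drift) passes straight through to the mean.

```python
# Sketch: random noise averages away (~1/sqrt(n)); a shared
# systematic bias does not. All numbers here are invented.
import random
import statistics

random.seed(42)
true_temp = 15.0
bias = 0.5  # the same siting bias applied to every reading
readings = [true_temp + bias + random.gauss(0.0, 1.0)
            for _ in range(10_000)]
mean = statistics.fmean(readings)
print(round(mean - true_temp, 1))  # ~0.5: the bias survives averaging
```

After ten thousand readings the random scatter has largely cancelled, yet the mean still sits roughly half a degree above the true value: no amount of averaging removes an error that every reading shares.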
And then you have the poor record, at the present time even! I remember checking out a map of temperature anomalies for Australia for a record breaking month. There were two large areas of well above average temperatures that significantly increased Australia’s whole average. One had no station in it. The other had one station with not even one month of complete data in the base period for the anomalies, and most with no data at all. The past record is just not good enough for it to be meaningful.
Things are better with modern automated stations, but still it’s cocked up by humans. There was a story the other day of the coldest morning recorded in Richmond NSW Aus, but BOM has no minimum recorded for that day, and a few others in that cold period. I had my own revelation when I noticed that my hometown had three days in a row at 0.1°C above the record coldest temperature since 1945. Not even the claim that more heat records are broken than cold records is meaningful when extremely cold temperatures are treated as dubious just because it should be warming, while fighter jets landing in succession, switching from LIG to electronic or reflecting sunlight on to a station with stupidly placed solar panels is ignored.
Then there is the comparison with something from proxies for when there wasn’t even a crappy record! How stupid that is is best exemplified by a recent story on receding glaciers leading to the discovery of ancient weapons and an ancient highway. They first appeared in 2013 but researchers were unable to go back until 2018. I doubt that there was an ancient highway that was only used every five years, so at least that part of Europe was once much warmer than now, and less than 1000 years ago.
The NOAA polar-orbiting satellites have very poor coverage of the globe outside of the arctic/antarctic and high latitudes. Sampling at 30° latitudes can be as sparse as once every three days. It is worse at the equator.
Dr. RG Brown?
If so, we’ve missed you tremendously!
Adjusted T data can have, does have extreme variability compared to unadjusted.
Example: A random year, 1953, of the Tmin data from Alice Springs, in the centre of Australia and a key station because of a long record in a remote location.
The graph shows how much various adjusted ACORN-SAT versions differ from raw and from each other.
We are looking at DEGREES C of difference from one version to another.
The scare mongers are talking tenths or hundredths of a degree C involved in setting an alleged new hottest evah!
It makes no mathematical, logical or scientific sense. It is simply wrong.
It is going on everywhere.
Geoff S
https://www.geoffstuff.com/aliceadjust.docx
I have nothing but respect for Steve Milloy; I have been following him for decades. Steve is fearless, he will appear with and confront anyone. I only wish he would carry charts and graphs showing that weather isn’t more extreme, or that extreme weather isn’t more frequent, or showing fewer people dying from extremes. Another thing he needs is to put together a short, clear example of how useless averages are.
I like to check the references in articles such as these, currently splattered all over the media within hours of each other. Typical for climate alarmist articles, the references are poor. But I found the source article –
https://climatereanalyzer.org/clim/t2_daily/
Take note of the special notice – this data was never intended to be an ” official” observational record. So the record hot wind is coming from the usual corner – the propagandists.
The website which showed the ‘hottest day ever’
‘ The increase in mean global temperature since the start of July, estimated from the Climate Forecast System, should not be taken as an “official” observational record. It is important to note that much of the elevated global mean temperature signal in recent days can be attributed to weather patterns in the Southern Hemisphere that have brought warmer-than-usual air over portions of the Antarctic.’
Their map for the Antarctic shows that the Antarctic was -24 degrees, when normally it is -29 degrees at this time of year.
This is rather like saying my kitchen is the hottest it has ever been, because I have defrosted the freezer and the freezer is now at 0 degrees when it used to be minus 30 degrees inside the freezer.
No one would say that the property where they live has an average temperature: a freezer at -18 Celsius, a refrigerator at 3 Celsius, a greenhouse at 35 Celsius, a south-facing garden at 28 Celsius, 25 degrees in the shade and 20 Celsius in the house.
I can imagine it appearing on an estate agent/realtor listing as “this property has an average temperature of 22 degrees in the summer but only 10 degrees in winter.”
I am replying without reading the WSJ piece. Leaving aside the discussion of the appropriateness of a mean global temperature, the issue here is pretty straightforward. You cannot, under any circumstance, provide an estimate of the global daily average temperature to two decimals. It is simply not possible, unless of course you believe in Excel or some programming language.
If you are competent in measurements you know this.
A mean of measured values where all measurements have errors and uncertainty shall always be accompanied by an error and uncertainty range. The uncertainty in the mean cannot be less than the uncertainty in the individual measurements unless you fulfill the strict requirement of having multiple measurements. For temperature in open air this needs to be multiple measurements at the same time. This does not exist at any meteorological station on this planet. Multiple in this case is not 2 or 3. It is at least 10, and to achieve two decimals it is more than 1000. Simultaneous measurements, at all stations used.
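For reference, the textbook scaling behind that claim can be sketched (it presumes the measurement errors are independent, which is precisely what is disputed for field readings):

```python
# Sketch: standard error of a mean of n independent readings,
# sigma / sqrt(n). Correlated or systematic errors do NOT shrink
# this way, which is the commenter's objection for open-air stations.
import math

sigma = 2.0  # per-reading uncertainty in degrees F (illustrative)
for n in (1, 10, 100, 1000):
    print(n, round(sigma / math.sqrt(n), 3))
```

Even granting full independence, with a per-reading uncertainty near 2 degrees it takes on the order of a thousand truly independent simultaneous readings before the second decimal place of the mean carries any meaning, which is roughly the figure the comment gives.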
But the more important part here is that you cannot, under any circumstance, provide quality assured measurements from the entire globe in less than 24 hours. No meteorological service provides that. And that is why the error bars would render the “mean” useless on this point alone.
Apart from that, the actual global annual mean temperature itself is estimated by several groups and they cannot seem to get it closer than a couple of degrees between them. That is why they turned to anomalies, since they seem to get those closer. But even there they disagree by the several tenths of degrees.
I think Jones et al. around 2012 said that the global annual mean temperature is somewhere around 13.5–14 degrees Celsius. And they thus did not bother figuring it out, but tossed it overboard and went with the anomalies instead.
Not in climate “science”, of course, where any and all values are assumed to be 100% error-free.
I’ve developed a temperature reading about temperatures. Must be running a fever..
The Reanalyzer website that I’ve viewed (NZ 10/7/23 2145h) has on this link https://climatereanalyzer.org/clim/t2_daily/ comments as follows: “Special Notice, 8 July 2023
Climate Reanalyzer is a data visualization website for climate and weather models and gridded datasets. Climate Reanalyzer is NOT a model.”
This is at variance with Milloy’s statement about the Reanalyzer using models.
Any comments?
It would be useful to point out anomalies in the relevant data sets
Not sure where the 125,000-years-ago data is in this particular link.
‘Climate Reanalyzer is a data visualization website for climate and weather models’
The word ‘model’ is in the quote
The saga of my tree vs the washing-line pole continues and deepens in mystery
(All the while trashing the notion that temperature controls climate)
I have 2 identical dataloggers:
Accurate to 0.1°C in resolution, and they agree with each other
One is affixed under a stand-alone (specimen) coniferous tree about 10 m tall
The other is affixed in wide open air/space off a wooden pole meant for the washing line
They are about 30 metres apart and the whole area is (not really short) mown grass lawn
Very rural location about 2 miles north of a small town, 3 metres above Mean Sea Level
They record air temps at 3-minute intervals
What you see in the attached picture is a plot of the difference in temps that they recorded in the week starting 3rd July.
The numbers on the x-axis are ‘Time’, where 09 = 09:00 BST (UTC+1) and 21 = 21:00 BST (9 in the evening, UTC+1).
Vertical axis is degrees Celsius.
Straight off you see that it is cooler under the tree during daytime and warmer under the tree at night, by quite some good amount.
OK. Statistics
There are over 3,000 data points from each thermometer from the week’s exertions…
For the tree:
Add them all up and divide by ‘n’ gives an average of 17.8°C
Find the Max and the Min; the average of those 2 numbers = 19.5°C
The thermometer itself calculated an ‘MKT’ of 17.86°C
For the wooden pole:
Add them all up and divide by ‘n’ gives an average of 18.0°C
Find the Max and the Min; the average of those 2 numbers = 20.4°C
The thermometer itself calculated an ‘MKT’ of 18.06°C
MKT = Mean Kinetic Temperature (look it up and tell us what you find)
Most immediately notable is the difference in the Max/Min average vs the ‘all data’ average = nearly one whole degree Celsius.
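That gap has a simple origin, which can be sketched with synthetic data (an invented diurnal shape, not the logger readings): whenever the daily temperature curve is asymmetric, (Tmax + Tmin)/2 sits above the mean of all readings, and the size of the gap depends on the shape of the curve.

```python
# Sketch: (Tmax + Tmin)/2, the historical "daily average", versus
# the mean of all readings, for an invented asymmetric diurnal curve.
import math

readings = []
for i in range(480):                 # 480 x 3-minute readings = 24 h
    t = i / 480.0
    # short warm afternoon spike, long cool night (synthetic shape)
    warm = max(0.0, math.sin(2 * math.pi * (t - 0.25))) ** 3
    readings.append(15.0 + 8.0 * warm)

full_mean = sum(readings) / len(readings)
minmax_mean = (max(readings) + min(readings)) / 2
# the min/max average overstates the all-readings mean here:
print(round(full_mean, 2), round(minmax_mean, 2))
```

With this shape the min/max average overshoots the true mean by over 2°C; a flatter or more symmetric day would give a smaller gap, which is why the discrepancy varies from site to site and season to season.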
Is Jennifer in the house?
MKT is an interesting concept – I ain’t quite got my head around it yet – help me out.
It is very interesting how despite the large day/night differences, the (add them all and divide) averages came out soooo close
But anyway – usual query = What was the temperature of my garden during the week of July 3rd through to 01:00 July 10th?
How might that mish mash of different numbers affect how anyone else measures and records temperature?
edit to add:
I chose those 9/21 times as tick-marks because of El Sol.
Not that he’s got anything to do with climate, but you’ve gotta have some sort of anchor/reference point.
Especially, sunset in this part of the world is around 21:30 (UTC+1) these days.
Nature rules, models drool.
My take on the “hottest day ever” controversy is that it’s mostly distraction – the warming trend is continuing its steady march. Skeptics are using every obfuscation trick in their book to try and keep people from noticing this. Milloy is a master at obfuscation and misinformation. That’s why fossil fuel interests and think tanks pay him so much to influence public opinion. That’s why WUWT readers continually fall for his nonsense.
“Milloy is a master at obfuscation and misinformation. That’s why fossil fuel interests and think tanks pay him so much to influence public opinion.”
Attack the messenger. Standard Operating Procedure. Got any evidence that Milloy is selling his influence to oil companies?
I’m not attacking Milloy. On the contrary, I’m complimenting him – he’s very good at his job of being an energy industry lobbyist. It’s rather odd that this person whose words you uncritically swallow is someone about whom you know apparently literally nothing. Does it ever concern you that protecting your belief system requires you to remain willfully ignorant of the people you view as thought leaders?
I don’t view people as thought leaders, I just view their thoughts. That’s how I judge things: What they say, not who they are. I don’t care who they are, all I care about is if they make sense or not.
I think it is you who is the one mesmerized by thought leaders.