Guest essay by Larry Hamlin
The year-end 2023 global average temperature anomaly data have climate alarmist media falsely claiming that these results establish that “2023 Was the Hottest Year on Record” since records began in the mid-1800s.

These “hottest year on record” claims rest on misrepresenting the obscure year 2023 “global average temperature anomaly” outcome, a statistic that is not applicable to any specific location or region on earth.
NOAA’s Global Time Series temperature anomaly data are available for the 16 global regional areas listed below.

Table 1 below shows the NOAA Global Time Series average temperature anomaly data for all 16 regions for year 2023 and identifies whether each region’s 2023 measurement was a record high anomaly value.
For global regions that set a record high average temperature anomaly in 2023, the prior highest anomaly value and its year are shown. For global regions that did not set a record high in 2023 (noted in all capital letters in Table 1), the region’s highest anomaly value and its year are provided.

Table 1 shows that 7 of NOAA’s 16 global regions did not have a record high average temperature anomaly in 2023: Asia, Europe, Oceania, the East N. Pacific Region, the Hawaiian Region, the Arctic and the Antarctic (the U.S. is addressed separately later).
NOAA’s year 2023 results across its 16 selected regions demonstrate the significant variation in average temperature anomalies around the globe, driven by the disparate climate behaviors associated with each region. These varying regional outcomes are concisely displayed in Table 1.
The globe-wide average temperature anomaly conceals these significant regional climate differences, with the alarmist claim of the “highest ever measured global average temperature anomaly” masking the more complex and complete picture displayed by NOAA’s data in Table 1.
Most significantly, NOAA’s regional temperature anomaly data establish that the claimed “highest ever measured average temperature anomaly” outcome did not occur across the globally dominant regional land areas, including Asia, Europe, the U.S. (addressed later), Oceania, the Hawaiian Region, the Arctic and the Antarctic.
The Table 1 data establish that the highest regional anomaly values are spread across many different years, including 2007 (Antarctic Region), 2015 (East N. Pacific and Hawaiian Regions), 2016 (Arctic Region), 2019 (Oceania Region) and 2020 (Asia and Europe Regions).
Note the large range in year 2023 average anomaly values between the polar regions: the Arctic at 2.55 degrees C versus the Antarctic at 0.15 degrees C, a difference of a factor of 17.
As shown in Table 1, NOAA’s year 2023 global average temperature anomaly of 1.18 degrees C is only 0.15 degrees C (0.27 degrees F) higher than the prior highest anomaly of 1.03 degrees C set in 2016. These anomalies are measured against NOAA’s year 1901 to 2000 global average temperature baseline of 13.9 degrees C, so they correspond to absolute average temperatures of 15.08 degrees C for 2023 and 14.93 degrees C for 2016.
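To make that arithmetic explicit, here is a minimal sketch (Python) of the anomaly-to-absolute conversion just described, assuming only NOAA’s stated 1901 to 2000 baseline of 13.9 degrees C:

```python
# Sketch of the anomaly-to-absolute arithmetic described above,
# assuming NOAA's 1901-2000 global baseline of 13.9 degrees C.
BASELINE_C = 13.9  # NOAA 20th-century global average, degrees C

def absolute_from_anomaly(anomaly_c: float) -> float:
    """Convert a global average temperature anomaly to an absolute value."""
    return BASELINE_C + anomaly_c

print(round(absolute_from_anomaly(1.18), 2))  # 2023 -> 15.08 degrees C
print(round(absolute_from_anomaly(1.03), 2))  # 2016 -> 14.93 degrees C
print(round(1.18 - 1.03, 2))                  # difference -> 0.15 degrees C
```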

Yet this small difference of 0.15 degrees C from the prior highest year, 2016 (an El Niño year, as is 2023), is hyped by climate alarmist media as representing dangerous “record heat” and the “Earth’s hottest year on record.” The accurate description would be “highest average temperature anomaly on record,” since an average temperature outcome is not the maximum temperature outcome that would be required to support a valid “hottest year on record” claim.
Alarmists further exaggerate global “average temperature anomaly” outcomes by claiming they represent a “global climate emergency” in which “temperatures during 2023 likely exceed those of any period in the last 100,000 years,” with all of this ridiculous climate alarmist propaganda resting on a 2023 global anomaly just 0.15 degrees C above the 2016 level.
The flawed year 2023 “hottest year on record” claims are in the same vein as the flawed alarmist claims that the summer of 2023 was “the U.S. hottest summer ever,” which were addressed here and are shown below.

The article notes the failure of climate alarmists to evaluate maximum summer temperatures (Tmax) rather than average summer temperatures (Tavg); the latter are influenced more by increases in average minimum temperatures (Tmin), as shown below for U.S. June through August summer 2023 temperatures, which are driven by Urban Heat Island impacts from U.S. population growth since 1895, as addressed in the article.

This huge growth in U.S. population is reflected, even more strongly, in world population growth over the same period.
The huge growth in global population density over NOAA’s temperature anomaly measurement period, from the mid-1800s to 2023, and the impact of that growth on global surface temperatures, goes unaddressed in NOAA’s year 2023 report.
This important area of climate science is addressed by climate scientists including Dr. Roy Spencer, who has performed analyses evaluating the impact of increased urban population density on surface air temperature warming, defined as the Urban Heat Island (UHI) effect.
Dr. Spencer notes the following regarding the impacts of UHI on warming in U.S. population centers:
“As I previously announced, our paper submitted for publication on the method showed that UHI warming in the U.S. since 1895 is 57% of the GHCN warming trend averaged over all suburban and urban stations.”
This analysis shows that summer warming in U.S. cities is exaggerated by 100% over the period from 1895 to 2023.

Long-term increases in population density produce UHI effects on surface air temperatures at locations across the world, resulting in urban areas that are warmer than suburban locations, as presented in Dr. Roy Spencer’s analysis here.
“The quantitative relationships between temperature and population are almost the same whether I use GHCN raw or adjusted (homogenized) data, with the homogenized data producing a somewhat stronger UHI signal. They are also roughly the same whether I used data from 1880-1920, or 1960-1980; for this global dataset, all years (1880 through 2023) are used together to derive the quantitative relationships.”
“Here are some examples of the UHI dataset for several regions, showing the estimated total UHI effect on air temperature in the years 1850 and 2023 (I have files every 10 years from 1800 to 1950, then yearly thereafter). By “total UHI effect” I mean how much warmer the locations are compared to wilderness (zero population density) conditions. I emphasize the warm season months, which is when the UHI effect is strongest.”

The UHI effect of population density growth is reflected in color-coded world maps whose regions display increasing UHI temperature impacts from 0.01 degrees C to 2.7 degrees C, as noted in the legend. As the years progress and population density increases around the world, regional color patterns change from yellows, to greens, to reds, to purples, etc., denoting ever-increasing UHI temperature impacts over time.
These increasing UHI temperature impacts are clearly significant compared with NOAA’s modest global average temperature anomaly changes, which typically vary by tenths of a degree C over multiyear periods.
Furthermore, Dr. Spencer notes that the world’s population is increasingly moving to urban centers, a reality that brings higher UHI temperature impacts which are erroneously hyped by climate alarmists and their media advocates as being driven by “global warming.” He concludes:
“Over 50% of the population now lives in urban areas, and that fraction is supposed to approach 70% by 2045. This summer we have seen how the media reports on temperature records being broken for various cities and they usually conflate urban warmth with global warming even though such record-breaking warmth would increasingly occur even with no global warming.”
Asia’s highest average temperature anomaly was 2.21 degrees C, which occurred in year 2020, as shown below from NOAA’s Global Time Series data; its year 2023 anomaly was 2.09 degrees C. Asia has by far the largest land area of any region and dominates global population growth, as addressed in Table 2.

Asia’s year 2023 average temperature anomaly was 0.12 degrees C below its prior highest value from year 2020, despite the increasing UHI temperature impacts present in the Asia region, as shown in Dr. Spencer’s UHI diagram below for India and China, which portrays the UHI effect on air temperatures between 1850 and 2023 (large areas with between 0.2 and 1.6 degrees C of UHI temperature increase) in this huge global region.

Europe’s highest average temperature anomaly was 2.16 degrees C, which occurred in year 2020, as shown below from NOAA’s Global Time Series data; its year 2023 anomaly was 2.15 degrees C.

Europe’s year 2023 average temperature anomaly was 0.01 degrees C below its prior highest anomaly year of 2020, again despite the UHI temperature impacts present in the Europe region, as shown in Dr. Spencer’s UHI diagram below (large areas with between 0.1 and 0.8 degrees C of UHI temperature increase) between 1850 and 2023.

North America’s highest average temperature anomaly was 2.01 degrees C, which occurred in year 2023, as shown below from NOAA’s Global Time Series data; the prior high was 1.99 degrees C in 2016.

North America’s year 2023 average temperature anomaly was 0.02 degrees C above its prior highest anomaly year of 2016, again despite the UHI temperature impacts present in the United States, as shown in Dr. Spencer’s UHI diagram below (large areas with between 0.1 and 0.8 degrees C of UHI temperature increase) between 1850 and 2023.

Oceania’s highest average temperature anomaly was 1.34 degrees C, which occurred in year 2019, as shown below from NOAA’s Global Time Series data; its year 2023 anomaly was 1.29 degrees C.

Oceania’s year 2023 average temperature anomaly was 0.05 degrees C below its prior highest anomaly year of 2019, again despite the UHI temperature impacts shown for Australia in Dr. Spencer’s UHI diagram below (developed areas with between 0.1 and 0.8 degrees C of UHI temperature increase) between 1850 and 2023.

The Hawaiian Region’s highest average temperature anomaly was 1.16 degrees C, which occurred in year 2015, as shown below from NOAA’s Global Time Series data, with a clear downward anomaly trend since 2015 (ignored and concealed by alarmists). The Hawaiian Region’s year 2023 average temperature anomaly was 0.66 degrees C below its prior highest anomaly year of 2015.

The Arctic Region’s highest average temperature anomaly was 3.00 degrees C, which occurred in year 2016, as shown below from NOAA’s Global Time Series data, with a clear downward anomaly trend since 2016 to 2.55 degrees C in 2023 (ignored and concealed by alarmists). The Arctic’s year 2023 average temperature anomaly was 0.45 degrees C below its prior highest anomaly year of 2016.

The Antarctic Region’s highest average temperature anomaly was 0.65 degrees C, which occurred in 2007, as shown below from NOAA’s Global Time Series data, which clearly shows a 16-year-long downward anomaly trend since 2007 (ignored and concealed by alarmists). The Antarctic’s year 2023 average temperature anomaly of 0.15 degrees C was 0.50 degrees C below its prior highest anomaly year of 2007.

Table 2 below provides data for NOAA’s global climate regions with the largest land areas and populations. These data establish that at least 58% of the earth’s land surface (Asia, Europe, the U.S., Oceania and the Antarctic, with a total land area of 33.69 million square miles out of the global total of 57.80 million square miles) did not experience the hyped “highest ever recorded” average temperature anomaly outcome in 2023, and that the population of these huge global regions (nearly 5.86 billion people) represents over 73% of the 8 billion people now living on earth.
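The percentages quoted above follow directly from the Table 2 figures; a quick sketch of the check:

```python
# Checking the Table 2 shares quoted above (figures from the article).
land_regions_mi2 = 33.69e6  # Asia, Europe, U.S., Oceania, Antarctic (sq mi)
land_global_mi2 = 57.80e6   # total global land area (sq mi)
pop_regions = 5.86e9        # population of those regions
pop_global = 8.0e9          # world population

print(f"land share: {land_regions_mi2 / land_global_mi2:.1%}")  # ~58.3%
print(f"population share: {pop_regions / pop_global:.1%}")      # ~73.3%
```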

NOAA average temperature anomaly data for the Contiguous U.S. (shown below) clearly indicate there is no increasing average temperature anomaly trend in the U.S., with the January to December 2023 anomaly outcome exceeded by numerous years, including 2016.

Furthermore, NOAA’s Contiguous U.S. maximum temperature data for year-end 2023 (provided below) show no “highest ever recorded” maximum temperature for the U.S. in 2023, which ranked only 8th highest.

Additionally, temperature data for NOAA’s 9 U.S. Climate Regions shown below establishes that none of these 9 climate regions experienced a “highest ever recorded maximum temperature” during 2023 (NOAA data link same as above with “Region” selection option).

Furthermore, NOAA’s year 2023 temperature data for U.S. states (with California shown below, whose 2023 maximum temperature ranked only 71st highest in the 129-year record) establish that 48 of the 50 states did not experience record high maximum temperatures in year 2023 (only Maryland and Louisiana are exceptions).

NOAA’s average temperature anomaly data for the Contiguous U.S. show no increasing anomaly trend and no record maximum absolute temperature in year 2023; the highest ever maximum Contiguous U.S. temperature occurred in 2012, more than a decade ago. These outcomes are concealed and ignored by climate alarmist media.
NOAA’s characterization of the year 2023 global average temperature anomaly as the “highest ever recorded global average temperature anomaly” misrepresents the global reality of widely varying anomaly results across the many disparate global climate regions. As detailed in Tables 1 and 2 above using NOAA’s extensive and readily available Global Time Series regional data, 7 of NOAA’s global climate regions did not experience a “highest ever average temperature anomaly” outcome in year 2023.
These data refute the climate alarmist media’s grossly distorted and erroneous claims that the world experienced “Earth’s hottest year on record.” That continued deception casts the “highest ever average temperature anomaly” as the “hottest year on record” without evaluating any maximum temperature anomaly or absolute temperature data, which would be required to make a “hottest year on record” claim. NOAA’s climate data show that over 58% of all global land regions, with populations representing over 73% of the earth’s total 8 billion people, did not experience the erroneously claimed “highest ever recorded average temperature anomaly” or “hottest year on record” in year 2023.
Additionally, these global climate data assessments fail completely to address the known impacts of increasing population density over time, which cause increasing UHI regional temperature impacts that are unrelated to exaggerated claims of CO2-driven “global warming” built upon decades of flawed computer model hype.
_____________________________________________________________
Bingo! If you want to measure how hot it got, you have to measure how hot it got. Averages are misleading. After all:
“Beware of averages. The average person has one breast and one testicle.” – Dixie Lee Ray
Dixie Lee Ray was wrong!
.
.
.
The average person has less than one breast and one testicle.
& less than 2 eyes & less than 4 limbs
fewer
fewer
Way ahead of her time, considering all the transpeople fighting for attention and special treatment.
So what was the hottest year, if not 2023?
It’s not clear how you think any year could ever qualify. Is it that all regions have to simultaneously achieve a max? That is never going to happen.
“Is it that all regions have to simultaneously achieve a max? That is never going to happen.”
Exactly, Nick. I’m pleased that you’re recognizing the impracticality of averaging noise together, especially when considering the entire globe, and expecting a meaningful result.
Averaging to discern what lies below noise is a major scientific and engineering activity.
Can you discern a point being made by Larry’s diatribe that you could explain briefly?
We understand that a temperature measurement at a specific time, at a particular station, on a given day, is influenced by multiple variables, each with distinct impacts and interactive dynamics that are not fully understood. Can you effectively average two separate temperature measurements with such complexity? It seems challenging to overcome the noise in the first place.
Yes. Almost all temperature observations today are actually averaged at their core. For example, ASOS stations average 30 instantaneous temperature measurements internally to produce a single 5-minute METAR value.
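As a rough illustration of that kind of internal averaging (a minimal sketch; the 10-second cadence and the noise level are illustrative assumptions, not ASOS specifications):

```python
import random

def five_minute_average(samples: list[float]) -> float:
    """Average 30 instantaneous readings into one 5-minute value."""
    assert len(samples) == 30, "5 minutes at one reading every 10 s"
    return sum(samples) / len(samples)

random.seed(0)
# Simulated 10-second readings around 20.0 C with sensor noise.
readings = [20.0 + random.gauss(0, 0.2) for _ in range(30)]
print(round(five_minute_average(readings), 2))
```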
You continue to miss the main point… again.
All the 5-minute average approach does is offer a more enhanced temporal resolution. That’s right – enhances, not reduces. Hz is a unit of frequency. 0.1Hz implies that, on average, there are 0.1 data points per second, whereas 0.003Hz corresponds to 0.003 cycles per second – a coarser temporal resolution.
That is absurd. Values taken every 10 s have 30x the temporal resolution of an average reported every 300 s. Literally…10 s < 300 s.
None of which has anything to do with the accuracy of the measurement. You continue to confuse precision with accuracy! Why is that? How many times must it be explained to you before you understand it?
Exactly. Whether the approach reduces or improves temporal resolution is beside the main point.
It was literally your main point. And no, it does not improve temporal resolution. It reduces it.
No, my main point was:
“We understand that a temperature measurement at a specific time, at a particular station, on a given day, is influenced by multiple variables, each with distinct impacts and interactive dynamics that are not fully understood. Can you effectively average two separate temperature measurements with such complexity? It seems challenging to overcome the noise in the first place.”
What I’m conveying here is that 5-minute averaging does not alleviate that problem. No averaging interval, no matter how granular or wide, will resolve the issue.
All temperature values involve some kind of averaging. Yet no one else seems to have a problem with the concept of a temperature.
What makes you think all temperature values involve some kind of average? What do you think a thermometer reads in a calibration lab cold bath? A hot bath? You wait until the unit under test has reached equilibrium with the bath and read that temperature. That is *NOT* an average of any kind.
It’s not even obvious that *YOU* understand what temperature is. It is a factor in the functional relationship determining enthalpy, i.e. heat content. Two different substances can have the same temperature and vastly different enthalpy.
Tell us exactly what good a metric is that can have the same value for different heat contents, i.e. different climates – e.g. the median value of the daily temperature curve for Las Vegas and for Miami. Does that median value tell you *anything* about the different climates at each location? If it can’t tell you the difference between those two climates then how can it tell you *anything* about the global climate?
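To put numbers on that point, here is a minimal psychrometric sketch using the standard moist-air enthalpy approximation; the humidity ratios for the dry and humid cases are illustrative guesses, not measured values for either city:

```python
# Same temperature, very different enthalpy (kJ per kg of dry air).
CP_DRY = 1.006  # kJ/(kg.K), specific heat of dry air
CP_VAP = 1.86   # kJ/(kg.K), specific heat of water vapor
H_FG = 2501.0   # kJ/kg, latent heat of vaporization at 0 C

def moist_air_enthalpy(t_c: float, w: float) -> float:
    """Specific enthalpy of moist air; w is the humidity ratio (kg/kg)."""
    return CP_DRY * t_c + w * (H_FG + CP_VAP * t_c)

t = 35.0  # identical temperature in both cases, degrees C
print(round(moist_air_enthalpy(t, 0.005), 1))  # dry desert air   -> ~48.0
print(round(moist_air_enthalpy(t, 0.020), 1))  # humid coastal air -> ~86.5
```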
Why can’t you separate the measurement from the temperature?
I don’t know that I can’t. But regardless of whether I could or couldn’t it doesn’t change the fact that measuring temperature involves averaging often in multiple different ways. Even the concept of a temperature itself is based on average kinetic energy.
Kinetic energy is an extrinsic property. You *can* average extrinsic properties.
Temperature is an intrinsic property. You can *NOT* average intrinsic properties.
Think of clouds floating by on a warm summer day. Being shaded *does* impact the readings of even the newest, most modern measurement stations.
This one factor alone introduces measurement uncertainty in every piece of data collected from that station.
Then consider the humidity near a large body of water where you have fluctuating wind, both in speed and direction, over time. That causes changes in humidity which, in turn, impacts the temperature measured by the station over any interval of time. This factor alone adds measurement uncertainty in every piece of data collected from that station.
Think of two measuring stations, one on the east side of a mountain and one on the west side. The temperature on the east side will be impacted by the sun earlier than the one on the west side. The one on the west side will be impacted by the sun later than the one on the east side. Thus the temperature curves will be offset in both time and quantity. When you introduce the two into an overall average this factor alone will introduce measurement uncertainty into the data collected from the stations.
I could go on and on and on.
bdgwx will not understand *any* of this. It’s part of what explains the differences in temperatures seen in the attached graphic captured at 2:20pm on 1/17. These temps are *NOT* measurements of the same measurand. Their average will *NOT* give an accurate mean value for the area – a “true” value as bdgwx calls it. You can sample the measurements from as many NE Kansas stations as you want, all you will accomplish is lowering the standard error of the mean, i.e. how precisely you have located the mean of the temperature data set. That is *NOT* the accuracy of any of the individual measurements and it is *NOT* the accuracy of the calculated mean. How accurate that mean is depends totally on how accurate the individual temperatures are.
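A minimal simulation of that distinction (the +0.5 C shared bias and the noise level are made-up values): averaging more stations shrinks the standard error of the mean, but the systematic error never budges.

```python
import random, statistics

TRUE_TEMP = 10.0
BIAS = 0.5  # systematic error shared by every station (illustrative)
random.seed(1)

for n in (5, 50, 500):
    readings = [TRUE_TEMP + BIAS + random.gauss(0, 1.0) for _ in range(n)]
    mean = statistics.fmean(readings)
    sem = statistics.stdev(readings) / n ** 0.5
    print(f"n={n:3d}  SEM={sem:.3f}  error vs truth={mean - TRUE_TEMP:+.3f}")
```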
Bdgwx appears to perceive the weather as one-dimensional and static. This could explain his endorsement of adjustments, and consequently, his belief in fraudulent surface temperature records from Berkeley, GISS, NOAA, etc.
On the contrary, weather is a four-dimensional phenomenon that is dynamic. That’s how I perceive it anyway.
My endorsement of adjustments has nothing to do with the behavior of weather. It has everything to do with discerning the truth and on a more personal level doing the right thing. Addressing errors and mistakes is a step towards satisfying both. To not address errors and mistakes would be unethical at best and fraudulent at worst.
Weather is FAR more than a 4-dimensional phenomenon!
I could probably list out more but this is enough. Climate science ignores everything but time.
“Addressing errors and mistakes is a step towards satisfying both.”
If you can’t measure the adjustment needed then GUESSING at one does nothing but add more uncertainty to the value being adjusted. No one measures the adjustment needed in field temperature measuring stations. Adjusting the readings therefore just makes the data have more uncertainty.
It’s why Hubbard and Lin said any adjustments have to be done on a station-by-station basis and needs to be *measured* in order to determine the impact that microclimate has on the station. They did their measuring at the experimental site using a freshly calibrated instrument.
How does terrain differ from geography?
geography – coastal vs inland, forest vs grassland, arctic vs equatorial
terrain – mountainous vs plains, river valley vs high plateau, east/west side of a mountain
He even believes that temperatures at different locations are correlated, i.e. one affects the other, a causal relationship.
Temperatures are correlated to a confounding variable, THE SUN’S PATH, not to each other. It’s like saying the divorce rate in Maine is correlated to the per capita consumption of butter. It’s a spurious correlation.
It’s why the temperature profile on the west side of a mountain can be negatively correlated to one on the east side of the mountain. They aren’t correlated to each other, they are correlated to the path of the sun and its local insolation value. The west side temp can still be going down while the east side is warming. And vice versa.
Temperature and weather are a multi-factor, multi-dimensional functional relationship.
T = f(pressure, humidity, geography, terrain, elevation, latitude, …)
Climate science, driven by non-science statisticians, ignores all of these factors. To them the weather and temp in Colorado Springs drives the temperature on top of Pikes Peak. They don’t even understand that temp is an intrinsic property and can’t be averaged!
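A small synthetic illustration of the sun-path point above: two stations driven by the same sun, merely offset in phase, can be negatively correlated with each other. The curves and the 8-hour offset are exaggerated for clarity, not real station data.

```python
import math

def corr(x, y):
    """Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

hours = [h / 4 for h in range(96)]  # one full day in 15-minute steps
east = [10 + 8 * math.sin(2 * math.pi * (h - 6) / 24) for h in hours]
west = [10 + 8 * math.sin(2 * math.pi * (h - 14) / 24) for h in hours]
print(round(corr(east, west), 2))  # ~ -0.5: same driver, negative correlation
```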
Precision of the individual measurement has no effect on the accuracy of an average. If one takes TMIN and TMAX measurements with an instrument precise to four decimal points, and hourly measurements with the same instrument, the average of the 24 measurements will be more accurate than the average of two measurements.
“ the average of the 24 measurements will be more accurate than the average of two measurements.”
No, the average will be more PRECISE, not more accurate. Precision is not accuracy.
“Accuracy and precision are two measures of observational error. Accuracy is how close a given set of measurements are to their true value, while precision is how close the measurements are to each other.”
An average temperature derived from a high and low for a 24-hr period is not as good an estimate of the true average temperature for the day as would be 24 measurements, one taken every hour.
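A quick synthetic check of that claim (the diurnal curve is invented for illustration): a skewed daily temperature curve makes the (Tmin+Tmax)/2 midrange overshoot the mean of 24 hourly readings.

```python
import math

# Skewed diurnal curve: flat cool night, sine-shaped daytime bump.
hourly = [10 + 10 * max(0.0, math.sin(math.pi * (h - 6) / 12)) for h in range(24)]

midrange = (min(hourly) + max(hourly)) / 2  # the (Tmin+Tmax)/2 estimate
true_mean = sum(hourly) / len(hourly)       # mean of 24 hourly readings
print(round(midrange, 2), round(true_mean, 2))  # 15.0 vs ~13.16
```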
Do you have even a GLIMMER of understanding as to why this is done?
Do you have even a GLIMMER of who the main user of METAR data is?
Averaging loses the variation of the measurements. I used the TMAX data from one USCRN station to determine the least-squares trend for all the individual months across its operational life — for all the Januarys between 2007 and 2023, all the Februarys, etc.
Turned out that five of the months of the year had negative trends, and seven had positive trends. When they were all averaged, the result was positive. But isn’t it significant that February, March, April, October, and November had negative trends, while the rest of the months had positive values?
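For anyone wanting to reproduce that kind of month-by-month breakdown, here is a minimal sketch; the (year, month, tmax) records are random stand-ins, not the actual USCRN data, and the real exercise would read the station file instead.

```python
import random
from collections import defaultdict

def ols_slope(xs, ys):
    """Least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

random.seed(0)
# Random stand-in for one station's monthly TMAX values, 2007-2023.
records = [(y, m, random.gauss(15.0, 2.0))
           for y in range(2007, 2024) for m in range(1, 13)]

by_month = defaultdict(list)
for year, month, tmax in records:
    by_month[month].append((year, tmax))

for month in sorted(by_month):
    years, tmaxes = zip(*by_month[month])
    print(f"month {month:2d}: trend {ols_slope(years, tmaxes):+.3f} C/yr")
```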
Of course it is significant. It’s an indication that CO2 is not *the* driver of temperature.
The first thing I would look at is humidity, the amount of water vapor in the air. Humidity is typically lower in cooler months, meaning more heat will reach space in cooler months since even during the day the earth is radiating away heat. I’m a little surprised that January didn’t also show a negative trend.
This is why climate science NEEDS to convert to using enthalpy instead of temperature. Enthalpy is the correct measure of heat, not temperature. Climate science has had the information needed to use enthalpy for 40 years but they absolutely refuse to start using it. Ask yourself why.
Or, if they insist on temperature, it should not be this fraudulent “global average anomaly”. Why not issue an “index” of the individual stations’ anomalies, so that it could be said so many stations had a cooler anomaly, so many were higher, and so many were unchanged.
It would give a better idea of what the planet was doing than this simple average.
I’ve long advocated for this. Because of measurement uncertainty it’s impossible to know a “true trend” for most stations. The best you can do is hope that you can tell positive trends from negative trends and from no-change.
Assign each station a +, -, or 0. Total the number of pluses, minuses, and zeros. You’ll actually get more useful info from this than from the hokey “GAT” anomaly.
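A minimal sketch of such an index (the threshold and the example trends are illustrative):

```python
THRESHOLD = 0.02  # C/yr; trends within +/- this band count as "no change"

def classify(trend_c_per_yr: float) -> str:
    """Assign +, -, or 0 based on the sign of a station's trend."""
    if trend_c_per_yr > THRESHOLD:
        return "+"
    if trend_c_per_yr < -THRESHOLD:
        return "-"
    return "0"

station_trends = {"A": 0.031, "B": -0.008, "C": -0.045, "D": 0.002}
counts = {"+": 0, "-": 0, "0": 0}
for trend in station_trends.values():
    counts[classify(trend)] += 1
print(counts)  # -> {'+': 1, '-': 1, '0': 2}
```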
“Can you discern a point being made by Larry’s diatribe that you could explain briefly?”
-“These “hottest year on record” claims are based on misrepresenting the year 2023 obscure “global average temperature anomaly” outcome that is not applicable to any specific location or region on earth.”
Mr. Hogle: You discerned it quite well, so well it makes one wonder why Mr. Stokes didn’t.
Holocene average was FAR higher than current temperatures.
Averaging is the most simplistic engineering activity, so about your level.
Sorry if you need someone to explain averages to you, Nickie….
Do you have a 5-year-old neighbour who could help you ??
“Holocene average was FAR higher than current temperatures.”
There may have been six month periods from 5000 to 9000 years ago that were warmer than the last half of 2023.
So what
Your “far higher” claim is BS
If you average all the available local climate reconstructions in that period, and create a fake global average, it could be as little as +1 degree C. warmer than 2023. Or not. No accurate global average data exist to prove any precise average temperature for that optimum.
What about the Roman and Medieval warm periods? How come grapes cannot be grown in Iceland today? How come crops cannot be grown on the southern coast of Greenland? How come citrus fruits cannot be grown in the UK today? All of that was not only possible but done (and not just one summer) during those historical periods.
Lee, regional average temperatures are different than global average temperatures. It is even possible for a region to cool (like the North Atlantic) while the globe warms.
When you average local proxies the variations are reduced to the point that those periods are claimed to be about +0.5 degrees C. warmer than the past 10 years.
The accuracy of the proxy estimates cannot confirm that they were warmer than 2023.
There were warmer and cooler centuries but no six month period is likely to have been warmer than the last half of 2023 in the past 5000 years.
We will never know for sure
And who cares anyway?
You’re not embarrassed about your nonsensical dribblings?
Richard Greene,
Can you please provide evidence for:
“There may have been six month periods from 5000 to 9000 years ago that were warmer than the last half of 2023.”
I recall reading that 5,000 to 9,000 years ago, the vegetation line was much higher in the mountains in the Northern Hemisphere and in the Arctic Circle.
Provide evidence? You have to be kidding. Seat-of-the-pants comments are not based on evidence.
What was happening over the rest of the planet during this time?
I’ve heard that good lawyers never ask a question they don’t know the answer to. Maybe you, or NASA GISS, can fill Walter and the rest of us in.
The vegetation line was higher in the Andes, Himalayas, the Mongolian plateau was practically ice free. These have all been established. If you want to dismiss these examples from all over the world for the same period as ‘regional’ anomalies then that is entirely your business, but where do you stop and realise you are in an untenable position?
Was the vegetation line higher in the Andes at the exact same time it was higher in whatever mountains Walter is talking about?
What was happening over the rest of the planet during this time?
Exact.. You mean over the whole period from 9000 to 5000 years ago?
Or are you moronic enough to think tree lines change overnight?
Masses of evidence that the whole planet was warmer during that period has been posted for many years.
You have obviously blocked it out of your little mind.
It never even entered his mind nor that of Richard Greene. These are the kinds of minds which have brought us the “horror of future global warming”.
Mr. letter-salad-for-name:
Thank you for proving my comment above. IF it’s out there, you are willfully blind to it.
”What was happening over the rest of the planet during this time.”
Really??? Do you have to be that obtuse? A major climatic event just happening in one place?… spare me.
Yes. Really. The global average temperature is in reference to the whole Earth. It’s not just the Andes, Arctic, Himalayas, or even the Northern Hemisphere. It includes all of the other areas…all 510e12 m2 of the planet.
Yes. Events can happen in just one place.
Did dinosaurs live at different times on each continent?
Trees growing 100 kilometers further north in the arctic for multiple centuries as an isolated event?
Maybe. Maybe not. I will say there is evidence to support the hypothesis that the NH and SH often see-saw in regards to temperature.
But all of that is moot since the question was not whether trees in the Arctic resulted from isolated event. The question was whether isolated events were possible. The answer to that question is a profound yes. I point to the cooling of the North Atlantic simultaneous with the warming of the globe in the present era as an example.
EVERYWHERE. There are references from all around the globe.
Only those determined to REMAIN IGNORANT so as to keep their little AGW scam intact, ignore that fact.
Please define ‘exact’ – on a geological timescale they were but on a stopwatch that would be unlikely. Given the uncertainties and the fact that the Southern Hemisphere is out of phase with the Northern Hemisphere then yes they happened at the same time.
What do you mean when you say the Southern Hemisphere is out of phase with the Northern Hemisphere?
Can you post links to temperature reconstructions showing that when a specific region was warm/cool that it necessarily followed that the globe was also warm/cool?
How do you reconcile the situation we observe today where the North Atlantic experienced cooling simultaneous with the Arctic experiencing warming?
I’ll say it really, really slowly so that you might, possibly, just get it. The Northern Hemisphere is out of phase with the Southern Hemisphere; the Northern Hemisphere has its summer when the Southern Hemisphere has its winter, this has a knock-on effect that is noticeable when comparing data from both hemispheres.
You have to spoon feed some people absolutely everything these days don’t you?
They don’t even realize that the variance of temperature data is different in each hemisphere. Yet they insist on jamming non-identical distributions together and calculating the average without considering what happens to the variance; they ignore the variance, which is the measure of the uncertainty of the average they calculate. Variance grows, just like measurement uncertainty.
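A quick numerical check of that variance point (the hemispheric means and spreads are illustrative, not measured): pooling two distributions with different means adds a between-group term, so the combined variance exceeds either group’s own.

```python
import random, statistics

random.seed(2)
nh = [random.gauss(15.0, 3.0) for _ in range(10_000)]  # "summer" hemisphere
sh = [random.gauss(5.0, 5.0) for _ in range(10_000)]   # "winter" hemisphere

print(round(statistics.pvariance(nh), 1))       # ~9
print(round(statistics.pvariance(sh), 1))       # ~25
print(round(statistics.pvariance(nh + sh), 1))  # ~42: within + between-group
```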
Thank you for the response Richard. I have a follow up question.
Is your argument that, because the seasons are opposite between hemispheres, changes in the temperature of the NH are representative of those in the SH, and as such knowledge of only one allows us to extrapolate the other and ultimately the global average?
My other two questions are still open as well.
You didn’t get the point at all, did you?
If the variance of the temps in one is different than the variance in the temps in the other (i.e. summer vs winter), then how can one be representative of the other? That’s like saying a skewed distribution is representative of a Gaussian distribution. The only similarity is that they are both distributions!
Variance differs in all seasons.
How about the dinosaurs? It took a LOT of vegetation to feed those large prey dinosaurs being fed upon by the predator dinosaurs.
Two main factors determine the amount of vegetation – CO2 and temperature.
Do you think all the dinosaurs lived in different geological time on each of the continents in which their fossils have been found?
The lack of rational thought is breathtaking.
Pay attention.. Trees which grew there at that time have been aged at more than 300 YEARS.
A tree that grew from say 8000 ya to 7700 ya in the Andes indicates that it was warm enough for growth between 8000 ya and 7700 ya in the Andes. That doesn’t tell us anything about what was happening in the other 99% of Earth’s surface between 8000 ya and 7700 ya.
The fossil record tells us that! Find some place where the fossil record is different from the rest of the world due to temperature – and then tell us where that is.
I’m simply referring to what I call reading bdgwx. I’m not endorsing it. No need to bite my ankle so hard on this one.
what I recall*
Same everywhere else.
Large amounts of evidence from around the globe, of a much warmer Holocene
Don’t be a climate change DENIER like dickie-boy
Don’t DENY the science like he does.
Mr. letter-salad-for-name: A lot was happening, and there may be some evidence out there, but we know you are not looking for it.
First…I’m sorry my name offends you. It is never my intention to offend others. Please understand that while I did choose wx, which is shorthand for “weather”, I didn’t choose my initials.
Anyway, no doubt a lot happened in the past. There is plenty of evidence of it in the paleoclimate record. But I am puzzled as to how Walter knows it was warmer in the past if he does not even accept that we can know the global average temperature today.
Mr. letter-salad: So, your name is weather? Mighty bold claim. Offended? Why do you project that on me?? I made fun of you, that doesn’t indicate I took offense.
Anyway, Walter is debunking the point you try to make with GAT. The burden is on your side to show GAT means something, not on him. But he could say that evidence of paleo temps, as thin as it may be, debunks the GAT hoax, because we can compare growth lines from today to back then. You try to debunk that with “but what was the temp elsewhere”, so you should recognize debunking. Your problem is, there is no evidence anywhere, anytime, that when trees grew above today’s tree line, it was cold in the tropics. You’re not puzzled, just willfully blind.
Biology. Temperatures were determined by observing biological systems.
Walter doesn’t even accept temperatures from modern instruments so I think we’re going to have a hard time convincing him that a biological proxy from 5000-9000 years ago is going to be adequate for determining the global average temperature.
I can’t be certain here because I’ve never seriously studied paleoclimatology. The other commentators here have, and I think understand my criticisms of the GAT index, so maybe there are more things of interest to learn in the future.
We know that back in the late 1990s, Mann’s tree ring series were showing a warming trend, while Briffa’s were showing a cooling trend. It doesn’t surprise me one bit.
However, if the tree line was high in the mountains and the Arctic 9,000 years ago, those are truly remarkable isolated events, as you suggest.
they understand*
It’s ok bd-eyes… things you say are so inane and gormless, they are not going to offend anyone.
“But I am puzzled …… blah, blah…
You are puzzled about most things…
Don’t be embarrassed. !
Just fill in the blanks, budgiewax
Walter Hogle: I recall reading that 5,000 to 9,000 years ago, the vegetation line was much higher in the mountains in the Northern Hemisphere and in the Arctic Circle.
London to a brick it was warmer than it is now. What possible situation could you conceive where it would have been cooler?
Basically THE WHOLE of that period was significantly warmer than the somewhat tepid temperatures we are lucky enough to be experiencing now.
Dickie-boy is a climate change DENIER.
There are LOCAL proxies that show considerably warmer areas than the past 10 years during the Holocene Climate Optimum.
When you average the local estimates together the variations are reduced with that fake global average
The correct science is to never compare local proxies with real-time measurements used for a global average statistic.
Wow. When we measure the ”global average temperature”, do we use a giant thermometer dipped into the atmosphere from space?
Or do we perhaps use LOCAL measurements?
I am seriously starting to question your state of mind. You seem to be arguing just for the sake of it. That’s called a troll isn’t it?
Greene,
You say this and then contradict yourself by arguing that no period in the past 5,000 to 9,000 years has been warmer than the latter half of 2023, or so. Can you explain your self-contradiction?
Doesn’t need to be accurate when you are talking 3 or 4 degrees vs fractions of a degree.
Wake up, bozo !!
No evidence the global average temperature was 3 to 4 degrees C. warmer
The AVERAGE was probably warmer.
That’s all the LOCAL data reveal.
Plenty of evidence.
You just choose to IGNORE it.. because you like being ignorant.
Mr. Greene: I agree with you that “global avg temp” is bull, 9000 yrs ago or 90 seconds ago. But we’re fighting propaganda about today being “hottest ever” based on GAT, surely we can fight that with even slim evidence of past temps?
Mr. Greene’s reply- “Don’t call me surely!”
No surprise the most stupid comment so far is under your name.
That is a really big call……
Dicki-boy has made so, so, so many very stupid comments to choose from !
There is no way of overcoming YOUR ignorance , is there, dicki-boy.
Your brain-washed lukewarmerism is immovable.
So all the Holocene proxies that show it being 3 to 5C warmer than today are bogus? Did you get that from your mythical Ford engineer as well?
If the temperatures were vastly different around the globe then the fossil records would show that. The fossil records around the world are remarkably the same everywhere. When dinosaurs roamed the earth they roamed THE EARTH, not just a few places that were warmer than everyplace else!
Simples.
Temperature is an intrinsic property.
It is not valid to average intrinsic properties.
Thermodynamics 101.
I believe the correct formulation is that temperature is an intensive variable (an intensity) whereas enthalpy is an extensive variable (a quantity dependent on the amount of matter present).
You are correct to say that averaging temperatures is utter nonsense.
Exactly.
To exaggerate to make the point:
What is the meaning of the average of temperature at the tip of the flame of your BIC lighter and the temperature of a lukewarm bathtub?
Mr. Daddis: It obviously means you’re smoking in the bath again!
I think you have me confused with Hunter Biden.
No, I think he’d try to smoke the entire bathtub.
Mr. Daddis: Very good!
If I stand in a bucket of liquid nitrogen (-321 deg F) while at the same time sticking my head in a 518.2 deg F oven, the average temp is 98.6 deg F, which is identical to my internal average body temperature that my life depends on.
Really? Could you give a concrete example?
Your comment is typical of climate science – most of it makes no physical sense whatsoever.
As to Larry’s points. You NEVER, EVER do anything with the variances of the data you use. As he points out the regional anomalies, when put together, represent a data set with a huge variance. Variance is an indicator of the “uncertainty” of the average value. The larger your variance the higher the uncertainty of the average value.
If you are going to claim some kind of confluence of climate science with “science” and “engineering” then you need to follow the methodology they use. No self-respecting scientist or engineer would “average” multi-modal distributions and ignore the variance of the data. They would also specify statistical descriptors like quartiles, minimum values, and maximum values.
The only physically meaningful way to average the temperature over time at a particular location is to integrate the temperature/time function, in other words to calculate the area beneath the curve.
Averaging spatially-separated temperatures is an absurdity entertained only by Nick Stokes and his supporters, who are unable to understand the difference between Enthalpy and Temperature.
Somehow climate science can’t seem to understand why agricultural science makes such use of degree-days – and has moved to using the integral-based degree-day. Same for HVAC engineering. Climate science lives in the 17th century.
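For comparison, here is a minimal sketch of an integral-based degree-day computation: trapezoidal integration of an hourly curve against a base temperature. The curve and the 10 C base are illustrative.

```python
import math

BASE_C = 10.0  # degree-day base temperature (illustrative)

# Synthetic hourly temperatures for one day (25 points closes the day).
hourly = [12 + 8 * math.sin(2 * math.pi * (h - 9) / 24) for h in range(25)]

def degree_days(temps_hourly, base):
    """Trapezoidal integral of max(T - base, 0) over one day, in degree-days."""
    excess = [max(t - base, 0.0) for t in temps_hourly]
    degree_hours = sum((a + b) / 2 for a, b in zip(excess, excess[1:]))
    return degree_hours / 24  # convert degree-hours to degree-days

print(round(degree_days(hourly, BASE_C), 2))
```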
I suspect the reason Climate “Scientists” average temperatures in defiance of physical reality is that they obtain the results they desire. Or am I being too cynical?
Their MISUSE of statistical analysis of uncertain data gives them the answer they want.
Climate science is stuck in the 17th century, using median temperatures as if they were an actual average daily temperature.
Climate science should start handing out lapel pins with the image of Tevye from “Fiddler on the Roof”. “TRADITION”!
It is a pointless and meaningless exercise carried out by pointless and meaningless charlatans who once abused themselves with hockey sticks and now have it so far up their asses that they think it’s carbon dioxide they can smell. And you know that only too well, don’t you, Mr Stokes?
It isn’t science – it is propaganda and you should be ashamed of yourself and your fellow alarmists for daring to claim otherwise.
“And you know that only too well, don’t you, Mr Stokes?”
IF Nick is as intelligent as he likes to pretend to be.. YES, HE KNOWS. !
Either way he is a total charlatan. !
And there is no way he feels any shame for his actions.
MANY hotter years .. Basically all but a short period (LIA) of the last 10,000 years.
Total BS as expected from bNasty2000, the Forrest Gump of climate science
Still trying to compete with Billy Madison I see.
EVERYONE is dumber from reading your posts. !!
What part exactly?
Whatever part goes against what RG wants to believe.
Weren’t you whining about insults, just a few hours ago?
“Is it that all regions have to simultaneously achieve a max?”
But basically EVERY region of the Earth was significantly warmer than now during the MWP and certainly warmer for the several thousand years before that.
Red thumbed can’t face facts ???
So sad.
bnice,
Off topic, but you and I were mentioned in a new blog post.
https://andthentheresphysics.wordpress.com/2024/01/16/how-to-cavil-like-cranks/
Otters? What do otters have to do with it? Are they the new WWF poster-species of climate change?
They don’t like facing the REALITY of the situation, do they.
Poor little muppets.
The guy that runs that blog is a low-end con-man pretending he has a vague knowledge of science and physics.
Just like Dickie-Boy.
And the post is by the Chief Dullard ,..
… who keeps getting his face slapped with facts he cannot dispute..
Just petulant sour grapes.
Pat Frank told me over email that his real name is actually Ken Rice. He is a numerical methods physicist at the University of Edinburgh. Why hide behind a pseudonym when you have such rigorous credentials?
So he plays with numbers… no wonder he comes across as a fraudster.
Dumb leftists invent the future climate
Dumb conservatives invent the past climate
You are telling fairy tales.
And you are off with the fairies…. and loving their company. !
Scientific evidence is an anathema to you, isn’t it dickie-boy !!
You cannot allow yourself to accept reality, it would hurt your brain-washed lukewarmerism too much.
Comparing real data, to the output of broken computer models. Only left wing climate scientists would do something that dumb.
The point was clear but you are willfully obtuse, Nick. The anomaly was higher in 2023, mostly because low temperatures were milder, not because peak temperatures were hotter.
That is an important distinction. It makes the environment more livable, which is another way of saying that there is NO CLIMATE EMERGENCY!
That the slightly milder average temperature has been mostly the result of an enhanced greenhouse effect is an unproven assertion, though one I am willing to entertain. There is an enhanced greenhouse effect, albeit minor. It is wholly beneficial in its limited effect.
The question for you is why you imagine that a humanly-imperceptible 0.15°C milder anomaly is justification for dismantling western civilization, collapsing modern agriculture, and allowing the death and/or continued poverty of billions of people?
You’re like a strangely obsessive person who so fears all change that he worries about the budget imbalance when his paycheck increases by fifteen dollars a week. There’s no excess cash crisis in that, just as there is NO CLIMATE EMERGENCY!
“The point was clear but you are willfully obtuse, Nick. The anomaly was higher in 2023, mostly because low temperatures were milder, not because peak temperatures were hotter.”
Maybe they were (no evidence given), but that isn’t his point. He says that “The anomaly was higher in 2023” is not true. He says your bolded claim fails. You are just describing how it got to be the hottest year.
Fractions of a degree warming out of the COLDEST period in 10,000 years
COOLER than nearly all the last 10,000 years.
Don’t continue to be a climate change DENIER , Nick. !
Sigh!
Nick, in your dotage or your blind faith you have lost your grasp of the English language. The point made was that it was not “hotter” just because the anomaly was slightly, imperceptibly higher. Hotter implies an increase in the maximum temperatures not a less cold minimum temperature.
If the bottom 20% of the population increased their incomes such that the average per capita income rose, does that mean the rich got richer?
“So what was the hottest year, if not 2023?”
Well, in the United States, the hottest year was 1934, which was 0.5C warmer than 1998, and 0.4C warmer than 2016, and 0.2C warmer than Hunga Tonga.
Here’s your reference chart, Hansen 1999 and the UAH satellite chart:
And while I’m at it, I should tell people how to recognize whether they are looking at a legitimate temperature chart or a bogus, bastardized, instrument-era Hockey Stick chart, several of which are included in this article.
Here’s how to spot a bogus, bastardized, instrument-era Hockey Stick chart: If the chart does not show the Early Twentieth Century to be as warm as today (like the U.S. chart above), then you are looking at a bogus, bastardized Hockey Stick chart created in the computers of Climate Change Alarmists Propagandists to use to try to sell the CO2-is-Dangerous Hoax.
Bogus, bastardized, instrument-era Hockey Stick charts allow the Climate Alarmists to falsely claim that we are living in the hottest times in human history and it’s all because we are producing CO2.
The facts are we are not living in the hottest times in human history. This is the BIG LIE that Climate Alarmists propagate and it is demonstrably false going by the historical, written temperature records from all around the world that show it was just as warm in the recent past as it is today, and this demonstrates that CO2 has had little or no effect on the Earth’s weather or temperatures. Certainly not enough to be detectable.
Without the Bogus, Bastardized, Instrument-era Hockey Stick chart, the Climate Alarmists would have nothing to point to as evidence that CO2 is overheating the world.
The Climate Alarmists made all this s$%^ up. And look at the misery their lies and distortions of the temperature record are causing. It’s almost to the point of destroying Western civilization with these crazy efforts to eliminate CO2.
CO2 does not need to be eliminated from our lives. There is NO evidence CO2 is anything other than a benign gas, essential for life on Earth.
Bogus, bastardized, instrument-era Hockey Stick charts are not evidence of anything other than fraud. Fraud on the whole world, committed by climate alarmist zealots. An ongoing fraud.
The 1930s US hottest year was “adjusted away”. Tony Heller caught that.
The US 48 states are only 1.5% of Earth’s total surface area so are only relevant for people who live there … although I would say only local climates are important.
You’re fond of saying that CO2 always causes warming in lab environments. That means it should cause uniform warming everywhere, including the US 48 states. It hasn’t and doesn’t.
Climate changes from a variety of causes. CO2 is one of them.
The change of the local climate is the NET effect of all global, regional and local causes of climate change.
Therefore, the global average temperature can go down when CO2 is rising (1940 to 1975), go up (1975 to 2015), or stay about the same (2015 to mid-2023).
No matter what the average temperature is doing, or local temperatures, adding CO2 to the troposphere inhibits cooling and that makes the average temperature rise when all other climate change variables are held constant.
“CO2 is one of them.”
Making scientifically unsupportable claims yet again. !
Lukewarmer non-science. !!
“adding CO2 to the troposphere inhibits cooling “
More scientifically unsupportable BS from a brain-washed lukewarmer.
Proven that any increased absorption in the tiny thin weak CO2 band is translated to the atmospheric window.
Don’t be a scientific denier/ignoramus, all your life, dickie. !!
If CO2 inhibits cooling then it raises earth’s temperature. What they forget is that the earth radiates heat based on T^4. An increase of 1C causes heat radiation from the earth to go up by a factor of 4. Yet all we ever see from climate science is some kind of an “average” radiation to space, never an integral of the radiation based on the earth’s temperature. And the average radiation to space is always assumed to be constant – thus the warming doesn’t cause anything as a reaction.
Supposedly the increase in temperature is in Tmin but the increase is always within the measurement uncertainty interval. Therefore they can’t *know* that Tmin is actually going up. They just see it in the cloudy crystal ball the carnival fortune teller loaned them.
T is in Kelvin. An increase of 1C increases the Earth’s temperature from something like 300K to 301K. That’s an increase in radiation of about 1.5%.
I got burned on this sometime last year.
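For anyone checking the arithmetic:

```python
# Stefan-Boltzmann scaling: radiated power goes as T^4, T in kelvin.
t1, t2 = 300.0, 301.0
print(f"{(t2 / t1) ** 4 - 1:.2%}")  # -> 1.34%, roughly the figure above
```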
It’s *still* enough to keep Tmax from increasing. And it is *still* enough to make the theory of “trapping heat” garbage!
Think about it for a second. Before extra energy can be emitted from the planet, the planet HAS to warm up. Where does that warming come from? CO2.
It doesn’t prevent Tmax from increasing, all it does is limit how much Tmax can increase.
No, if it were CO2 then Tmax would go up as much as Tmin since CO2 doesn’t change from day to night. CO2 would inhibit daytime heat radiation to space just as much as it does nighttime radiation.
What you need to look at is the integral of the temperature curve for the whole day to estimate the total heat radiated from the earth.
‘Climate changes from a variety of caises. CO2 is one of them.’
There’s pretty consistent evidence from various ice cores that CO2 levels lag temperature changes by hundreds to thousands of years. If you have any evidence to the contrary, or more specifically, that human emissions of CO2 have had any impact on climate, I’d love to see it.
1.5% of the total surface *IS* a significant amount. Arguing that it is not representative of the global “anything” is arguing that it somehow has special significance compared to the rest of the globe.
If 1.5% of the globe is cooling or stagnant while the other 98.5% is warming then a believable theory as to why that is happening to that 1.5% needs to be put forth.
I haven’t seen such a theory from climate science.
You don’t need a theory.
CO2 is not the only climate change variable. Here is my list.
The following variables are likely to influence Earth’s climate:
(variables affected by humans underlined)
1) Earth’s orbital and orientation variations (aka planetary geometry)
2) Changes in ocean circulation, including ENSO and others
3) Solar energy and irradiance, including clouds, surface albedo, volcanic and manmade aerosols, and effects of cosmic rays and extraterrestrial dust
4) Greenhouse gas emissions
5) Land use changes (cities growing, logging, clear-cutting trees for crops, etc.)
6) Unknown causes of variations of a complex, non-linear system
7) Unpredictable natural and manmade catastrophes
8) Climate measurement / statistics errors (unintentional errors or deliberate science fraud)
9) Interactions and feedbacks involving two or more variables
The other day you aid the sun ad no influence on modern warming
**had no influence**. I need a new keypad!! My ss”s gggg’ss and h’s are playing up.
The US has by far the best temperature station recording setup on the planet; that is why, despite being only 2% of the surface area, it is a valid sample.
The US stations are managed by NOAA
96% claimed to be improperly sited per this website.
I don’t trust NOAA
“Best” is a fig newton of your imagination.
It is the best, which means that the rest of the world is even worse.
In other words the ground based sensor network is simply invalid for trying to determine the temperature of the earth.
Been adjusted away over the whole planet dickie-boy.
Plenty of evidence of raw data from around the world with the 1930s/40s peak still intact.
Again.. that has been posted multiple times ..
But you are a data denying nutter.
Adding “data denying nutter”
who likes peanut butter
to my resume
bNasty2000 Rap
Insults you Utter
Mind in the Gutter
Your brain full of Clutter
I’m sure you Stutter
Take your meds,
and we hope you Recover
But if you keep this up,
I’ll tell your Mother
Poor dickie-boy
Love your tantrums….
A veritable clown feast !
The US 48 States are getting blasted by an arctic cold front at the present time. Does this mean that the cold weather is restricted to the North American continent? I think not. Weather travels around the world from west to east along the jet streams. What happens in the west is heading east.
What goes on in the hemisphere affects the entire hemisphere. It was hot in the United States in the 1930’s and it was also hot in China and India and Norway. I can post the Tmax evidence if necessary.
I’m skeptical of the implications of AGW’s influence, but there is no doubt that 2023 was the hottest year this ball has seen since the thermometer was invented. And the plot presented of increasing min temperatures is what we have experienced this century, which is also what AGW predicts.
You do know that back in the 1800s the only locations with decent coverage of reliable thermometer temperature data were N. America (mostly the U.S.), most of Europe (most extensive coverage in the British Isles and Germany), and Australia?
“So what was the hottest year, if not 2023?”
Well only 3000 years ago in Iceland there was a forest of trees, but now there is a glacier on top of it.
Are we hotter or colder than then?
The claim that he says he is refuting is that 2023 was globally the hottest year on record.
So now you ADMIT that it is actually MUCH COOLER than most of the last 10,000 years, including the MWP.
And that the 1930’s, 40s could easily have been warmer, especially in the NH
Lots of raw data shows that to be the case.
And of course, there is absolutely no evidence of any causation from human CO2….
… just LOTS of urban and airport warming, and adjustment based pseudo-warming.
So based on a measly 45 years of data… and driven by a very strong El Nino
Why such a manic panic about it, and all the clown-like bluster from the MSM.. !!
Trying desperately to get their marxist totalitarian control agenda in place.
That agenda that communist fools like you are supporting…
… have you asked yourself…. “WHY am I supporting this ???”
Do you realise just how mentally stupid and anti-human that is !!
If that’s true, then the claim is both meaningless and deceptive.
Much like the rest of climate science.
The real question is why do some people freak out about “hottest year ever”? Why do we even care?
We’re only talking about 2° over a half century on top of a range -20 to 35°, say, at least where I live in Southern Ontario. And knowing that the ice ages are a geologically recent phenomenon – and that the cold is a great part of the reason there’s less biodiversity compared with the Cretaceous and other later periods until the ice came.
It would take approximately 10°C of increase just to return to the normal range the earth had for roughly 200 million years.
“We’re only talking about 2° over a half century on top of a range -20 to 35°, say, at least where I live in Southern Ontario. “
That is not the range of the global average. It took only a dip of 6°C to put us into deep glaciation.
“just to return to the normal range the earth had for roughly 200 million years.”
The number is exaggerated, but the real point is that for most of that time, the world had dinosaurs, not mammals. And certainly not a population of 7 billion humans to be fed.
Do you have any notion of how much vegetation it takes to feed a brontosaurus?
Regardless, a little bit warmer is quite clearly very good for plants.
More CO2 in the air also means more plants.
What was the hottest year?
Any year during the Medieval, Roman, Egyptian or Minoan warm periods, not to mention any year during the Holocene optimum.
Regardless, the very idea that we can measure the temperature of the planet to within a few hundredths of a degree is so ludicrous that only a climate scientist could come up with it.
Which year? You haven’t answered. And of course the topic here is hottest year on record. You can’t say which year because those periods are not on the record. And of course, you don’t have any proof that they were hotter. You exaggerate, not with evidence, but because you like to dream up numbers.
Why are you obsessed with GAT? It has been explained to you many times that temperatures cannot be averaged.
That is a crank notion. Major institutions average temperatures. WUWT is full of averaged temperatures, featured even on the front page. Larry’s post here is also full of them, global, regional. If you took them all out, the post might be only slightly too long.
Another argumentative fallacy. The fact that someone does something doesn’t make it right! It’s a False Appeal to Authority!
If you think you *can* average temperatures then prove it. Show how intrinsic properties can be averaged. You’ve been given lots of references and examples showing that they can’t. You haven’t actually refuted any of them. You’ve basically just used the argumentative fallacy of Argument by Dismissal to avoid having to actually address the issue.
Go look up how to use Steam Tables sometime. Temperature is very dependent on pressure and humidity as the Steam Tables show. This makes the actual heat content at different locations DIFFERENT. How do you average something like that? It’s why when you try to average temperatures in San Diego with those in Ramona (30 miles away) you can’t find that average temperature anywhere along the path between them! This means that the temperature between the two is *NOT* a linear gradient you can find a mid-point for.
Temperature is not a usable proxy for climate. Two different climates can have the same median temperature. A metric that can’t identify what you are looking for is not a metric at all! A functional relationship should give one answer for each set of inputs, not multiple answers! Yet temperature will give multiple climates for the same input – that being the median temperature.
People are starting to realize that climate science is built upon 17th century methodology. It starts off assuming the daily temperature profile is Gaussian and the median temp has the same value as the average temp. It’s a crumbling, bad foundation. And the edifice built upon that foundation will, at some point, come tumbling down.
‘Go look up how to use Steam Tables sometime.’
OMG! Long repressed memories of ChE Thermo class…!
nightmares!
Your logic is atrocious, as usual.
Absence of evidence is *NOT* evidence of absence.
If there is absence of evidence then you don’t have any proof that they were not hotter either.
The pot calling the kettle black. Typical for a CAGW advocate.
For climate alarmists, as for leftists in general, whether an argument is or isn’t valid depends on whether that argument supports their narrative or not.
All of them.
“…establishes that 50 of the 52 states…”
You surely mean 48 of the 50 States.
According to Obama there are 57 states and Dementia Joe said 54. The 54 figure probably includes DC, Puerto Rico, the US Virgin Islands, and Guam, which of course are not states. I’m not sure where Barry got his extra three.
The fascist totalitarians like to conflate sovereign states with administrative jurisdictions because they crave centralized tyranny. To them, the US states are (should be) mere subsidiary jurisdictions of the central government. Rogue ‘districts’ like Florida and Texas need to be compelled to submission.
“I’m not sure where Barry got his extra three.”
I bet Iran was one of them. Obama has a special affinity for the Mad Mullahs of Iran, for some reason. His minion, Joe Biden, has the same affinity.
I think both are determined to see that the Mad Mullahs are able to develop nuclear weapons. It doesn’t make sense to me, but that’s what their actions are leading to. And of course, that will lead to a very big war in the Middle East.
Democrats are dangerous when they have the levers of power. They do stupid things that put all of us in danger, and they do so consistently.
50 states plus the state of the nation and the state of mind.
Who was compiling the surface temperatures over the continent of Antarctica (14,200,000 km2) in 1850? Surface temperature records are derived via sorcery.
Well said, Chris.
Proxy data shows the Antarctic has been warmer than now for most of the last 2000 years.
…. and has actually been cooling.
“surface temperature records are derived via sorcery”
They were created/made up in a smoke-filled room at the Climategate Hotel.
Ever notice how alarmists claim that sea ice couldn’t be accurately mapped until the satellite era but they know exactly what the “global temperature” was way back to the “mid 1800s” ?
It’s all a fraud. Climate Alarmists are in charge of the temperature data, and they have misused/misrepresented it to the detriment of the rest of us.
Our politicians are now in the process of destroying Western civilization based on the lies they have been told by the Temperature Data Mannipulators about CO2 and the Earth’s temperatures.
There’s all kinds of things like that – it’s the warmest ever in human history or 100K years or some other nonsense number that the media never fact-checks – yet the people in past ages were growing grapes and other crops further north or at higher elevations with primitive technology, than we can do today with all our magic.
Any historical average should be restricted to at least the general region that was measured back then. So, if various European and American navies and militaries and so on were measuring in their respective regions, then the figures should only be compared with the same region today.
Adding in Africa or the polar regions into anomaly data is polluting the data with regions that have no history and more likelihood of increasing UHI.
Oh yeah, they accept the limited thermometer data as the gospel, but claim that in more recent times the people reading the thermometers didn’t know what they were doing, so their readings must be adjusted up!
Despite NOAA’s 2023 Global average temperature anomaly of 1.18 °C…
…unsurprisingly life just carried on as normal as if this imperceptible, miniscule change in temperature didn’t even matter.
Which it doesn’t.
Yeah, it wasn’t even close to being the hottest year ever in my neck of the woods. Nothing to see here.
Summer weather here (SE Wyoming) was stunningly moderate; in fact cooler than I recall in a decade. Lovely is the word.
Haven’t hit 100 F at my central Indiana home in nearly a decade. And this winter is the first time we’ve had subzero temps in 5 years! And I’m supposed to be unhappy with that?
Same for Kansas.
That is the real conundrum climate science faces. If CO2 was actually “trapping” heat, i.e. enthalpy going up, then maximum temps would be going up as well.
enthalpy = mass x specific heat x temperature
If enthalpy is going up then the only way maximum temps could be stagnant or going down is for the mass of the earth to be going down, for the specific heat to be going down, or a combination of the two.
I have my doubts that the mass of the earth and atmosphere is changing significantly. Same for the specific heat capacity of the oceans, the land, or the atmosphere. Leaving temperature to be going up as a driver for higher enthalpy.
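As a worked example of that formula (a sketch with illustrative round numbers): holding mass and specific heat fixed, any change in heat content maps directly onto a change in temperature, which is the point being made.

```python
# heat content Q = mass * specific_heat * temperature
c_water = 4186.0   # J kg^-1 K^-1, approximate specific heat of water
mass = 1000.0      # kg, about one cubic metre of water (illustrative)

dT = 1.0                  # a 1 K temperature rise
dQ = mass * c_water * dT  # change in heat content for fixed mass and specific heat
print(f"{dQ:.0f} J")      # ~4.2 MJ per cubic metre of water per 1 K
```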
If minimum temps are going up then that also means that the earth is radiating heat away at a higher rate (S-B). That higher rate occurs during the day as well as at night. Thus it follows that the heat that wasn’t radiated away at night due to CO2 is being radiated away during the day due to higher starting temps at sunrise. Enough is radiated away during the day to keep max temps down.
None of the climate science theories seem to hold water when critically examined. It’s an edifice built upon averages that aren’t averages but median values of multi-modal distributions. When the unstated assumptions made by climate science are actually stated and examined, they turn out to be fantasies – like all measurement uncertainty is random, Gaussian, and cancels.
The claim the media made is that it was EARTH’S hottest year, not any particular place. I counted at least three times in the article where it admitted it was the highest global temperature. So the media accurately reported that fact. What the media didn’t do was go on a fishing expedition into temperature data to try and find something to refute it.
The most common accusation against climate “skeptics” is that they cherry pick data. I would use this article as exhibit A, in the prosecution.
Cherry-picking a strong El Nino year, in a slight warming out of the coldest period in 10,000 years….
When that temperature is WELL BELOW nearly all the last 10,000 years, and probably similar to the peak of the 1930s,40s if they hadn’t mal-adjusted the data….
Is that the cherry-picking you are referring to ??
The media claim: It was the hottest year globally. Fact: It was the hottest year globally. Perhaps you could explain how that is cherry picking.
Hottest year since when?
A piddlingly short period of time.
It is a nonsense meaningless statement.
Why be ignorant of the FACT that the Earth is currently in a COOL period of the Holocene.
Are you a climate change DENIER ??
“the Earth is currently in a COOL period of the Holocene.”
Do you pull these false claims out of a hat?
What kind of hat?
A dunce cap?
Provable if you ever paid any attention to the hundreds of scientific studies done around the world.
Everywhere was much warmer during the Holocene optimum.
Everywhere was warmer during the MWP.
Plenty of evidence has been posted in the past, but you have deliberately chosen to remain ignorant and in denial.
That sort of climate change DENIAL is what makes you look like a rabid lukewarmer/AGW cultist.
The Holocene is almost 12000 years old
There is rough LOCAL evidence that periods from 5000 to 9000 years ago may have been warmer than the past decade
That might be 2000 years of the past 10,000 years, or 4000 years, or zero
The only reason you claim today is a cool period of the Holocene is because you are not very bright. Or you enjoy lying. Or both.
Still totally IGNORANT of all the data that shows the Holocene was a lot warmer.
MWP was warmer than now, RWP was warmer again.
LOCAL evidence from all around the globe.
Sorry that the reality can’t be allowed into your tiny little lukewarmer brain-washed mind.
Don’t be a CLIMATE CHANGE DENYING NUTTER all your pitiful life, dickie-boy..
Reality has not been good to you.
It’s the hottest year since the bottom of the Little Ice Age.
To which I say, thank God.
“Fact: It was the hottest year globally.”
In the satellite era (1979 to the present). There is no “global temperature” measurement before the satellite era. What we have before that is deliberate fraud with regard to the temperatures.
There’s no way anyone can accurately claim it wasn’t as warm in the past as it is today.
It’s “hottest year evah!” in the satellite record ONLY.
Why use yearly data? Why not monthly and/or 2-year data? The temp anomaly record is “spikey” … a record/near-record year is almost always preceded and followed by a big recovery year. There will be silence this year 2024 when the anomaly is way down from last year.
How do we know it was the hottest year ever? What is the variance of the data set used to determine that global average? What is the measurement uncertainty interval for that average?
If all the thermometers have a measurement uncertainty of +/- 0.5C then how can a difference (anomaly) of even 2C be discerned? If each measurement station is a random variable then when you add them the variances add. Meaning the uncertainty would be much greater than +/- 2C.
Why does climate science never calculate the variances of their data?
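For what it’s worth, textbook propagation for independent errors distinguishes a sum from a mean: variances add for a sum, but the resulting uncertainty of a mean is divided by N. A sketch, taking the ±0.5 C figure above as an assumed per-station standard uncertainty (whether station errors really are independent is the contested point):

```python
import math

u_station = 0.5   # assumed standard uncertainty per station, deg C
N = 1000          # illustrative number of stations

# For a SUM of N independent readings, variances add:
u_sum = math.sqrt(N) * u_station   # ~15.8 C

# For a MEAN of N independent readings, divide the sum's uncertainty by N:
u_mean = u_sum / N                 # ~0.016 C

print(u_sum, u_mean)
# Caveat: the 1/sqrt(N) reduction applies only to independent, random errors;
# a shared systematic bias does not average away.
```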
We don’t. No one is saying it is. It is the hottest year in the instrumental record according to every producer that publishes global temperature data derived from instruments, whether surface or satellite.
A strong El Nino, in a tiny, short period (45 years) at the end of a period of highly beneficial warming, out of the COLDEST period in 10,000 years.
Information before that is so agenda-tainted as to be utterly meaningless.
Highly affected by urban, airport, El Ninos, and manic data adjustments.
No evidence of any human CO2 warming though.
You have already shown that conclusively.
Deaths by cold still massively outweigh deaths by heat.
Stop your incredibly petty, pathetic and anti-science carrying-on.
You sound like a low-IQ, brain-washed 10-year-old.
The Earth’s temperatures are cyclical at least since the end of the Little Ice Age in 1850. This means the temperatures warmed out of the Little Ice Age for a few decades to a peak in the 1880’s, then the temperatures cooled for a few decades to the 1910’s, then the temperatures warmed for a few decades to the 1930’s, then the temperatures cooled to the 1970’s, then the temperatures warmed to the present day. The high temperature peaks and the low temperature lows are equivalent and span a range of about 2.0C from warmest to coolest. See the U.S. chart below as an example.
What this means is that the current warming is right in line with past warmings and is no warmer than past warmings.
What has happened in the past after one of these cyclical warmings? Cooling is what happens. It has happened twice since the end of the Little Ice Age. Why shouldn’t it happen this time? Because CO2? That’s what some people say. We shall see which way the temperatures go. Does the Natural Cycle or CO2 dominate? I vote for the Natural Cycle being in control.
The U.S. chart (Hansen 1999):
It was the hottest year ever globally on average; some places were not the hottest ever and some places were hotter to a minuscule degree. You cannot claim that the globe was the hottest ever when some of it was not. Ever is a long time, and the caveat “on record” just shows how meaningless that measure is.
No-one except those living in big urban areas would have noticed any warming in the last 50 or so years..
…. because there hasn’t really been any noticeable warming.
The media is claiming that it was the hottest year in the last 100,000 years.
If you are going to repeat lies, at least try to get it right.
Earth does not have a “temperature”, you utter muppet!
The highest number of a meaningless average is still meaningless. Put your head in an oven and your feet in a freezer, then measure the average of the two and see how meaningful it is.
+100
That’s because temperature is an intrinsic value. Put two items, one at 0C and one at 100C, on a thermometer and what does it read?
That’s why the “average” temperature is meaningless.
The measurement uncertainty interval for that “hottest evah!” value is wider than the difference being identified. Meaning no one really knows what year was the “hottest evah!”.
Why is measurement uncertainty never mentioned in climate science? No variances, no ranges, no quartile values, no “nuthin”.
It’s because the common meme in climate science is that all measurement uncertainty is random, Gaussian, and cancels. Therefore the averages are the “true value”. No uncertainty at all.
THAT is true cherry-picking!
Tony, I submit that the entire average temperatures construct is cherry picking –
arbitrary periods
arbitrary seasonal determinations
arbitrary adjustments to everything
This is the longest, least competent article here in a long time
2023 was the warmest year in the instrument record, even if you do not trust any statistics but those from UAH
That is good news
Mainly caused by a big El Nino in the second half of 2023
The author seems determined to use mathematical mas-turbation to prove that reality is not true. A record warm year denier.
He fails.
He tries to divert attention to the maximum temperatures, obviously not realizing greenhouse gas warming mainly increases night TMIN temperatures, mainly in the colder months of the year and mainly in colder nations
He is also a UHI Nut, thinking that explains most of the post-1975 warming simply because one scientist made a questionable estimate. Let’s not forget that oceans are 71% of Earth’s surface area and they do not have any UHI
Also, UHI does not cause climate change. Only an increase in UHI would do that, and such an increase might be quite small in a period under 10 years
2023 was a warm year, especially the second half with the El Nino. The last six months of 2023 could have been the warmest six months in 5000 years. But we do not have accurate measurements of the global average temperature before 1979.
Earth has been warming since the late 1600s. That warming trend is still in progress. Until it ends, there will be irregular warmest years in that uptrend. Especially years with strong El Ninos. We should celebrate the improving climate — the 1690s were too cold.
Celebrating the warming in the past 325 years is rational. Trying to prove it did not happen is irrational. The author is irrational.
“Also, UHI does not cause climate change.”
Only a complete ANTI-SCIENCE NUTTER like dickie-boy thinks UHI doesn’t grossly affect surface measurements !
True, surface fabrications are NOT an indicator of climate change, because they are basically meaningless.
Just like your comments.
The fact that the UAH dataset can pick up the UHI signal in temperatures at an average height of 5000 metres is hugely significant – what on earth must the readings be like at ground level if it’s still noticeable at that height?
I mention irrational and of course bNasty shows up to EXPLODE with yet another BURST of verbal flatulence
UHI makes any land station have a warming trend ONLY if it increases over time. That assumes the station is not moved.
I don’t trust the surface statistics but I do trust the UAH statistics. Not that I care about a global average.
Here in SE Michigan our winters have been warmer than in the 1970s with a lot less snow. It happens to be very cold this week, but not as cold as some weeks in the late 1970s and early 1980s. We love our warming and want a lot more. Our plants love more CO2 and want a lot more.
We do have a warmer average temperature but from winter warming. Not summer warming. All hidden by a single average.
This is the chart I prefer for the global average — the absolute average temperature by year, not a scary-looking anomaly chart. The UAH anomaly chart is visually deceptive, so I stopped adding it to my climate and energy blog.
The Honest Climate Science and Energy Blog: Global warming and CO2 level since 1880 in one simple chart
UAH is affected by UHI just like the surface record is. So what do you think you are showing?
How does UHI affect the average temperature of the lower troposphere above the mid-Atlantic or Pacific oceans?
Show your workings.
UAH shows a difference of 0.07C between land and oceans.
Data is always your enema, isn’t it, fungal.
The global “average” is an average of all the stations. If some of those stations are showing warmer due to UHI, then the average will be warmer as well.
Regardless, for the vast majority of that time period, there were few to no readings taken in the middle of the Atlantic and Pacific.
Tell me, do you enjoy making a fool out of yourself?
That’s not how it is done. The global average is not the average of all stations. It is the average of all cells in the grid mesh. It is an area weighted spatial average; not a station average.
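A minimal sketch of that kind of area-weighted spatial average (an illustrative 5°×5° grid with a made-up anomaly field; weights are proportional to the cosine of latitude, since grid cells shrink toward the poles):

```python
import numpy as np

rng = np.random.default_rng(0)
anom = rng.normal(1.0, 0.5, size=(36, 72))   # made-up 5x5-degree anomaly field, deg C

lat_centers = np.arange(-87.5, 90.0, 5.0)    # cell-center latitudes for 36 bands
weights = np.cos(np.deg2rad(lat_centers))    # cell area scales with cos(latitude)
w2d = np.broadcast_to(weights[:, None], anom.shape)

global_mean = np.average(anom, weights=w2d)  # area-weighted, not a plain station average
print(global_mean)
```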
Correct. The average will be warmer because UHI is a real effect. It also happens to be an anthropogenic effect. It’s not a very big effect. Dr. Spencer’s analysis shows about a 0.03 C effect globally.
Expanding the temperature of a single station and pretending it represents the entirety of a cell is even more bogus than just averaging each station.
You are the one who has proclaimed that since UHI doesn’t contaminate oceanic readings, it can’t be significant.
Have you rejected your earlier idiocy, or are you just going to keep pretending that you know what you are talking about?
Careful…that line of thinking was the impetus that got Tony Heller banned from this site.
Correct.
It’s not my research. It’s Dr. Spencer’s. I’m going to let you pick that fight with him alone.
UHI does not affect the 71% oceans and affects land warming ONLY if it increases year over year.
The UHI increase over a 10 year period for the surface grids that could include UHI increases, may be too small to influence the global average temperature over a ten year period.
All land weather stations would have to average +0.3 degrees C INCREASED UHI over a 10 year period to increase the global average temperature by +0.1 degree C.
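The arithmetic behind that figure (a back-of-envelope check, taking land as roughly 29% of Earth’s surface):

```python
land_fraction = 0.29      # approximate land share of Earth's surface
uhi_increase_land = 0.3   # assumed average UHI increase across land stations, deg C

global_effect = land_fraction * uhi_increase_land
print(f"{global_effect:.2f} deg C")   # ~0.09, i.e. roughly +0.1 deg C globally
```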
Argument from ignorance, yet again
And DENIAL of Urban warming effects.
You truly are a brain-washed lukewarmer, aren’t you dickie-bot.
So sad.
“affects land warming ONLY if it increases year over year.”
roflmao.
dickie boy thinks UHI is constant.
Waiting for next idiotic statement from him. !!
Again, dickie-boy shows he doesn’t understand how surface data is fabricated.
Ignorance is bliss to him .. and he is blissfully ignorant.
Richard,
What were Southeast Michigan winters like in the 1920s, 1930s, and 1940s? In my area, SLC, I looked at absolute values for many Decembers in the 1930s. In December of 1939, there were afternoons where the temperature readings went above 60F.
Also, you just said you didn’t trust the near-surface statistics, yet here you are referencing GISS.
I am referencing my own experiences living in SE Michigan since 1977, in the same home since 1987 and 4 miles south from 1977 to 1987.
Your experience…. oh such science…. !!
Do air-conditioned basements change much anyway ??
Heck , you can’t remember what you said last week !!
1977.. the very depths of the “New Ice Age” scare.
COLDEST period in the US since the much warmer 1930s,40s
Bless your little brain , Billy !!
The late 1970s featured longer and harsher extreme cold episodes in the United States. It’s not really a fair reference point, Richard.
https://en.wikipedia.org/wiki/Cold_wave_of_1978#:~:text=The%20cold%20wave%20of%201978,the%20Rocky%20Mountains%2C%20except%20Maine.
It supports the narrative, therefore it is valid. By order of the central committee.
WOW, you don’t often see a post where every sentence is complete garbage.. But dickie-boy has managed it.
Thinks UHI doesn’t increase over time.. lol.. denial of reality, yet again.
UAH shows warming only at El Nino events, because it is the atmosphere, and any urban growth effect is highly diluted hence impossible to distinguish, just like the non-warming by CO2 cannot be found in the atmospheric data.
…whereas urban warming effect is massively amplified in the surface data by infilling and homogenisation process which spread relatively small urban areas over much larger rural areas.
Yes, we know the 1970s in the US was the period of the “New Ice Age” scare.
Unadjusted US data show it was the coldest period since the much warmer 1940s, 1979 being the lowest point
A great period to reference if you are a lukewarmer AGW apostle.
“The UAH anomaly chart is visually deceptive”
ie , you don’t like the data.. so you hid it from your mindless flock !
No deception in the UAH charts.. They show no warming apart from at El Nino events.
Still using charts with GISS mutilations… not real, not science.!
Thank you for showing how meaningless the claims of global warming are.
“The last six months of 2023 could have been the warmest six months in 5000 years.”
This is just pure speculation.
“But we do not have accurate measurements of the global average temperature before 1979.”
Exactly right. A very important point.
And there is no evidence that 2023 is hotter than periods before the beginning of the satellite era in 1979. There *is* evidence the United States was hotter in the 1930’s than it is in 2023. It’s not a global temperature, but it’s hotter, and it is a written record.
The fact is, the United States has been in a temperature downtrend since the 1930’s. All that additional CO2 added to the atmosphere in the intervening years has made no difference in the temperatures in the United States. That should tell us we have nothing to fear from CO2.
And of course, that’s why the Temperature Data Mannipulators decided to bastardize the “global” temperatures and cool the past to make it appear that we are living in the hottest times in human history. The Climategate Data Mannipulators have fooled a lot of people with the fraudulent Hockey Stick charts they produced.
They had to bastardize the temperature records, otherwise, as in the case of the United States, it would be seen that the temperatures today are no warmer than in the recent, recorded past, and if that is the case, then CO2 has no discernable effect on the temperatures, and the Climate Alarmist Data Mannipulators did not want that message getting out there, so they changed the message to one favorable to their Climate Alarmist views.
It’s all a fraud. A very expensive, very destructive fraud on humanity.
RG, as usual, completely ignores the point. The point being that UHI contamination is making the ground based network show more warming than has actually occurred.
It is this supposed warming that is being used to support the claim that more CO2 endangers life on this planet.
What a Gish-gallop of nonsense!
Globally averaged and spatially distributed, 2023 was the warmest year on record according to every global temperature data producer, surface or satellite.
Just take a look at the side-panel here; at the beloved UAH data. That’s monthly averaged, below is the UAH annual averaged data. It fully supports the NOAA statement that 2023 is the warmest year on record globally, but it’s not even mentioned in the above article!
You can see why.
Tony Heller does a great job of deconstructing such charts- showing how the data has been “adjusted”. Go watch, say, the last 100 of his videos- then come back here.
Climate Alarmists don’t like Tony Heller, because he makes them look bad. Tony shows the temperature data fraud for what it is.
for what it is: A Pack of Lies.
I don’t like Tony Heller because he is extremely dishonest. Isn’t that why Anthony Watts banned him from this very platform?
Yes. It was a combination of his technical incompetence and dishonestly that got him banned from this site.
https://rankexploits.com/musings/2014/how-not-to-calculate-temperature/#comment-130003
Tech incompetence? His resume is very long indeed- I bet far longer than yours. Try deconstructing one of his videos- do it yourself, instead of relying on what somebody else says- show you own a brain. In fact, I suggest you do a major deconstructing of Tony and write a full essay and post it here.
Tony Heller says a lot of things, and has been doing so for more than a decade. A “complete deconstruction” of his nonsense would be a career. He repeats the same falsehoods and errors over and over again, and builds new arguments on top of those falsehoods and errors, and then new arguments on top of the arguments built on the falsehoods and errors.
One of his major errors is a complete failure to understand how to aggregate geospatial data, which has been explained to him many times, and which he has roundly ignored for years. If you have a collection of points unevenly distributed across a geographic area, you cannot simply slap them into an average, as Heller does, and claim to have represented the entire region. This is even more critical if you’re presenting those averages as a time series analysis, where the lengths of the records being combined are unequal. School children can understand this concept, so it beggars belief to suppose that Heller is a big enough buffoon that he genuinely doesn’t understand. Willful deceit is the only explanation.
“He repeats the same falsehoods and errors over and over again”
Mostly what Tony Heller does is compare the raw (unmodified) recorded temperature data to the same data after it has been adjusted by dishonest climate alarmist temperature data mannipulators.
Tony overlays the raw data chart with the adjusted data chart and the fraud is evident to anyone with a brain and no agenda: The raw temperature data always shows it was just as warm in the recent past as it is today, and the adjusted temperatures always show it was not as warm in the recent past as it is today.
Tony didn’t do any adjustments to either set of data, he just shows how different they are after adjustments. They are as different as night and day. You can’t say they are not. All you can do is try to justify the temperature adjustments and changes to the original temperature readings.
I don’t buy it. I’ll take the original temperature readings (the raw data) anytime over the adjusted data. Using the raw data eliminates the climate alarmist fraud from the data and eliminates the need to reduce or control CO2.
Down below you told me the raw data wasn’t actually raw. Now you’re saying it is. Which is it? Is the raw data really raw or not?
That’s what he is purporting to be doing, but he kludges the fundamental analysis so badly that his results don’t actually say much about the underlying data. Others have created accurate comparisons between raw and adjusted datasets (e.g. here), and while there are differences, they are mostly around the edges. In fact they go the wrong way round for the fraud you claim – the adjusted data show less historic warming than the raw data.
What Heller does is to naively slap together station records without considering changes in station density in time and space. This isn’t an “adjustment,” it’s a basic element of statistical sampling. If I have region A with 500 stations and region B with 12 stations, and I just plop all of the station records into an average, my result will be overwhelmingly weighted to region A, it won’t actually reflect the average of A and B. But that’s what Heller does.
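A toy numeric version of that example (made-up anomalies, assuming for simplicity that the two regions cover equal areas):

```python
# Region A: 500 stations averaging a +2.0 C anomaly; region B: 12 stations at 0.0 C.
anom_A, n_A = 2.0, 500
anom_B, n_B = 0.0, 12

naive = (n_A * anom_A + n_B * anom_B) / (n_A + n_B)  # plain station average: ~1.95 C
equal_area = 0.5 * anom_A + 0.5 * anom_B             # equal-area average: 1.0 C

print(naive, equal_area)  # the naive average is dominated by the station-dense region
```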
Some of his deception is more subtle, like how he cherry picks data to show what he wants. You will notice he focuses almost exclusively on US temperature data (ask him for a comparison of raw vs adjusted global temperatures and you will hear crickets), because the US station network is extremely dense and consequently has more network-wide biases that are “adjusted.”
I’ve done a dead simple analysis of raw vs. adjusted global land-only temp data myself:
https://imgur.com/TbtHeLB
The only processing in my temp estimate (black line) is to create area-weighted 5×5 gridded averages of the raw station records to account for the uneven distribution of stations noted above. No adjustments whatsoever, just a simple average of all GHCN-raw records.
Is it any better to divide a land area into a grid, then take whatever sensors fall inside a grid cell and use them to represent the entire cell, as climate models do?
Since the planet is not completely blanketed over every square nanometer with temperature sensors, you will always face the problem of using a discrete number of point measurements as an estimate of a continuous surface field. There are good ways to do this, and bad ways. Heller picks the worst possible way.
“If I have region A with 500 stations and region B with 12 stations, and I just plop all of the station records into an average, my result will be overwhelmingly weighted to region A, it won’t actually reflect the average of A and B. But that’s what Heller does.”
OMG! How is that any different than what climate science does with infilling and homogenization? They equalize the numbers by just creating stuff out of thin air!
How is that any different than what UAH does? It doesn’t equally sample every measurement grid either!
Infilling is simply acknowledging that the grid spaces are arbitrary delimiters. If you have stations adjacent to the boundary of a grid square with no sensors in it, is it correct to say you have no information about the area inside of the empty grid cell? Probably not – the stations around the grid cell are better estimates of the values inside the grid cell than the assumption that the grid cell should take on the value of the global mean.
You can choose not to do this (HadCRUT did no infilling for many versions, only in the latest have they included infilled values for grid cells with no stations), and you can justify your choice. What Heller is doing is completely unjustifiable, and your feigned incredulity is wearisome.
“ If you have stations adjacent to the boundary of a grid square with no sensors in it, is it correct to say you have no information about the area inside of the empty grid cell?”
YES! If San Diego and Ramona are in different grid cells it would be wrong to say the temp in San Diego can be assigned to Ramona, or vice versa.
If Pikes Peak and Colorado Springs are in different grid cells it would be incorrect to assign the temp on Pikes Peak to Colorado Springs.
If you are in Meriden, KS on the north side of the Kansas River valley and in a different grid cell from Berryton, KS which is on the south side of the river valley then it would be incorrect to assign the temp in one location to the other.
“Probably not – the stations around the grid cell are better estimates of the values inside the grid cell than the assumption that the grid cell should take on the value of the global mean.”
How do you justify this? When terrain, geography, elevation, humidity, microclimate, etc all have an impact on the unknown temp all you are doing is *guessing* at what the value should be. See the attached jpg. If you didn’t have a temp measurement for Holton, KS how correct would it be to substitute the temp at Hiawatha, just a few miles away but possibly in a different grid cell. Or Kansas City temp for a missing Topeka temp?
You might as well just substitute the global mean, it would be just as much of a guess as anything else.
Do you really understand what you are saying here? You are saying the global average temp should *NOT* be used as the temperature for ANYPLACE! If that is truly the case then the global average temp is meaningless for that ANYPLACE. It describes the climate no where, let alone for the globe!
The grid cell is not a physical boundary, it is an arbitrary delimiter. You could shift all the grid cells over, make them bigger or smaller, or even use something like a triangular grid and then the station-less cell might suddenly have the adjacent station inside it. By not doing any infilling you are simply creating an unnecessary restriction that arbitrarily restricts the coverage of your dataset.
We are assuming that the temperature anomaly in Marysville represents the entire region between Marysville and Hiawatha (with overlap with the Hiawatha station). We don’t have stations covering every nanometer of land. What you’re suggesting is akin to saying that if the county between these two cities has no stations in it, we have to pretend that we know nothing about the temperature anomaly in that county, even though we are saying we know something about the anomaly all the way right to the boundaries of the cell. It’s just not really justifiable. To be fair, HadCRUT did exactly this for a long time, it doesn’t really have a huge impact on the global trend, it mostly omits parts of the Arctic. I don’t perform any infilling in the simple analysis I presented elsewhere in the thread:
https://imgur.com/TbtHeLB
And my trend is consequently most similar to CRUTEM (I used CRUTEM V4, which does not do any interpolation).
Again, you just need to clearly document what choices you’ve made in your analysis and explain why you made them.
The WUWT devout really seem to struggle with the concept of an average. As I said earlier, no single American family consists of 3.13 people, but we can use that average to track how the size of families in America are changing over time.
For the globe, no one area necessarily assumes the exact value of the global mean anomaly. Some areas are higher, some are lower (some might well be quite close by coincidence). If we had a map with a missing cell like this one:
What would you say is the best estimate of the anomaly in the grid cell? Is the best estimate 1.18? The value of the global mean anomaly? Or is it something between 2 and 4 degrees? Your method says the best estimate we can possibly make is to say it’s 1.18 degrees. I say we can do better, with infilling.
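A sketch of the two estimates being compared (hypothetical numbers: a missing cell ringed by neighbors near +3 °C, versus the +1.18 °C global mean):

```python
import numpy as np

neighbors = np.array([2.8, 3.1, 3.4, 2.9, 3.2, 3.0, 3.3, 2.7])  # surrounding cells, deg C
global_mean = 1.18                                              # deg C

infill_local = neighbors.mean()   # ~3.05 C: estimate the empty cell from adjacent cells
infill_global = global_mean       # 1.18 C: what omitting the cell implicitly assumes

print(infill_local, infill_global)
```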
“The grid cell is not a physical boundary, it is an arbitrary delimiter. You could shift all the grid cells over, make them bigger or smaller, or even use something like a triangular grid and then the station-less cell might suddenly have the adjacent station inside it. By not doing any infilling you are simply creating an unnecessary restriction that arbitrarily restricts the coverage of your dataset.”
You might do this, you might do that, you might do something else.
Does *any* of it get done to recognize geographic, terrain, or elevation differences? Of is it just a 2-dimensional grid on a flat surface?
“We are assuming that the temperature anomaly in Marysville represents the entire region between Marysville and Hiawatha (with overlap with the Hiawatha station).”
Temperature can be a gradient depending on weather, including pressure fronts. Or it might be a step function as you cross a river valley. Again, you are making no adjustments for the different terrains, geography, or elevation.
“We don’t have stations covering every nanometer of land. What you’re suggesting is akin to saying that if the county between these two cities has no stations in it, we have to pretend that we know nothing about the temperature anomaly in that county, even though we are saying we know something about the anomaly all the way right to the boundaries of the cell. “
“To be fair, HadCRUT did exactly this for a long time, it doesn’t really have a huge impact on the global trend, it mostly omits parts of the Arctic.”
So the sampling works? If so then why are you worried about adding a few extra samples that won’t make any difference in the end product?
“The WUWT devout really seem to struggle with the concept of an average.”
No, just with an average of an intrinsic property. I suggest you get to your local university with a ChemE dept and ask about how you can average temperature. AND also a problem with no measurement uncertainty being propagated onto that calculated average from the individual measurements. Again, I suggest you get to your local university and ask an EE professor about the measurement uncertainty in the EE lab oscilloscopes. Ask them if the readings from that equipment can be averaged to find a “true value”.
“Your method says the best estimate we can possibly make is to say it’s 1.18 degrees. I say we can do better, with infilling.”
You CAN’T do any better. The measurement uncertainty of those temps will be *at least* +/- 0.5C! In reality it will certainly be in the units digit and perhaps in the tens digit! You probably can’t even tell if it is 1 deg or 2 deg!
How precisely you calculate the average tells you *NOTHING* about the accuracy of that average. If by some freak of nature every single thermometer in that grid was reading 1deg high, no amount of averaging can remove that. Your average will be 1deg high and it will be that no matter how precisely you calculate the average!
All you are doing is repeating the same meme over and over: “all measurement uncertainty is random, Gaussian, and cancels”. You don’t even know if the stated values of the measurements is a Gaussian distribution let alone the measurement uncertainties.
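That distinction is easy to simulate (a sketch; the 15 C true value, ±0.5 C random spread, and +1 C systematic offset are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
true_temp = 15.0
N = 10_000

# Random, zero-mean error: shrinks roughly as 1/sqrt(N) when averaged
random_only = true_temp + rng.normal(0.0, 0.5, N)
print(random_only.mean())   # close to 15.0

# Systematic error: every thermometer reads 1 C high; averaging cannot remove it
biased = true_temp + 1.0 + rng.normal(0.0, 0.5, N)
print(biased.mean())        # close to 16.0, no matter how large N gets
```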
Heller does not recognize these features whatsoever, he explicitly ignores them and their impact on his temperature series. The irony is that you think actual scientists are ignoring these things, yet give Heller a free pass for doing so flagrantly and repeatedly, even when told that he shouldn’t numerous times. This is why I can never take the contrarian set seriously – the hypocrisy and denial is ridiculous.
You’re trying to change the subject. The point is that you can clearly tell that the anomaly in the missing square is far more likely to be related to the anomaly of all the surrounding grid cells than it is to the global mean. I think you’re trying to pivot the conversation to avoid having to admit it.
“Heller does not recognize these features whatsoever, he explicitly ignores them and their impact on his temperature series. “
So does climate science. The only valid way to recognize and weight them is to use ENTHALPY as the metric – which climate science refuses to do!
“The irony is that you think actual scientists are ignoring these things, yet give Heller a free pass for doing so flagrantly and repeatedly, even when told that he shouldn’t numerous times. This is why I can never take the contrarian set seriously – the hypocrisy and denial is ridiculous.”
He’s doing EXACTLY what climate science does today! When climate science jams SH and NH temps (or anomalies) together they are ignoring the very same thing Heller ignored.
You are doing the equivalent to the pot calling the kettle black!
“You’re trying to change the subject.”
No, I’m right on the issue. If you don’t propagate the measurement errors of the field measurement devices onto the GAT then the GAT is meaningless. If you don’t propagate the variances of the random variables used to calculate the GAT then the GAT is meaningless.
“The point is that you can clearly tell that the anomaly in the missing square is far more likely to be related to the anomaly of all the surrounding grid cells than it is to the global mean.”
So what? Not having a value for that missing square should have no impact on the global mean! If it does have such an impact then the sampling data you are using for finding the GAT is useless and it simply doesn’t matter whether some pieces of data are missing!
You and bdgwx keep trying to say that sampling a parent population distribution is not a valid way to get the statistical descriptors for the population – and then turn around and try to defend the GAT as a valid statistical descriptor!
You want your cake and to eat it also. Typical for climate science.
Interpolation is quite valid for outputs such as your map of the globe, but it is under no circumstances input data. Even as an output, it should be clearly flagged as interpolation.
Interpolation is often used for outputs (if flagged as such), but should under no circumstances be regarded as data.
Even then, it may be incorrect. There are quite a few smaller areas on that map which are a different temperature to the surrounding area.
That’s not true at all. Interpolated data is used as an input into many analysis and decision-making models in all kinds of science, engineering, medical, etc. fields.
Anyway, the problem is that there may not be enough upstream data to fill all grid cells. Either way something has to be done.
The no-effort method is to just average the filled cells. Mathematically this is equivalent to infilling the unfilled cells with the average of the filled cells. This means the interpolation for a cell spans out over the entire domain.
A better method is some form of local regression, like kriging, in which the interpolation is done locally instead of globally, meaning that the interpolation for a cell spans out only among the neighboring cells.
The point is that mathematically you are infilling either way, whether you realize it or not. And it’s pretty easy to show with a Monte Carlo simulation that local interpolation is superior to global interpolation generally. You can also show this in the real world by doing a data denial experiment on a real grid of data, like say from UAH.
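A minimal Monte Carlo sketch of that comparison (a synthetic smooth field stands in for real gridded data, and “local” here is just the mean of the two adjacent cells rather than full kriging):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
x = np.linspace(0.0, 2.0 * np.pi, n)
field = np.sin(x) + rng.normal(0.0, 0.1, n)   # synthetic smooth "temperature" field

err_local, err_global = [], []
for _ in range(1000):
    i = rng.integers(1, n - 1)                    # deny one interior cell
    truth = field[i]
    local = 0.5 * (field[i - 1] + field[i + 1])   # infill from the neighboring cells
    glob = np.delete(field, i).mean()             # infill with the mean of all other cells
    err_local.append((local - truth) ** 2)
    err_global.append((glob - truth) ** 2)

print(np.mean(err_local), np.mean(err_global))    # local infilling has the smaller error
```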
So are arbitrary offsets and pseudo-random values. It doesn’t make any of them data.
They are interpolated values, not data.
Sorry, 30+ years of IT pedantry talking there. There are figures/values and there are data, and only sometimes do the twain meet.
Well, DUH! That’s almost the definition of interpolation.
I wrote regression, interpolation and extrapolation plugins for Excel back in the Windows 3 days, and we explicitly warned that the interpolated and extrapolated values weren’t data.
You are trying to say something about a continuum field (temp). You have samples. The samples are the data. But they don’t mean much unless you can make some assumption about the points you didn’t sample. ie, interpolate. Then you can integrate and average.
The key thing is to have enough samples that you can make that assumption. How do you know? Various statistical techniques testing whether different subsets give the same answer. One colorful version here
“But they don’t mean much unless you can make some assumption about the points you didn’t sample. ie, interpolate.”
Malarky! You are basically trying to say that sampling of a population isn’t proper methodology to get the statistical descriptors for that population.
That means the GAT is meaningless. Is that *really* what you want to say?
In order to make assumptions about the points you didn’t sample you ALSO NEED THE VARIANCE OF THE SAMPLE YOU DID TAKE!
Why doesn’t climate science provide the variance of the data sample they have?
Yes, the samples are the data. Interpolated values aren’t.
They’re valid for visualisation, but under no circumstances are interpolated values valid input data.
If the sampling methodology is poor, that’s unfortunate. You have missing values and the uncertainty is higher.
That seems like an awfully restrictive and arbitrary rule you are placing on interpolated values, especially considering that JCGM 100:2008 (the document I was told is the be-all-end-all guide to uncertainty in metrology) uses interpolated (and extrapolated) values as inputs into another measurement model. It may be ironic that an example is provided that uses interpolation (and extrapolation) to adjust temperature readings.
Where does the GUM use interpolated values in a measurement model?
Again, I think you are confusing a “best estimate” +/- uncertainty as “interpolation”?
And the use of significant figures and rounding is not interpolation or extrapolation either.
If you are speaking of Section G.4, the use of interpolation there is to get a value for degree of freedom for a t-distribution.
That is *NOT* interpolating measurements. It’s not even an input to a measurement model. It’s a method of determining a statistical descriptor – and statistical descriptors are *not* measurands.
bdgwx doesn’t realize it but he just said that the entire GAT is improperly calculated because sampling of a population can’t work!
If the statistical descriptors of a distribution can’t tell you about the distribution then why do any statistical analysis at all?
bdgwx sees the average as a MEASUREMENT of a measurand. The GUM says a measurand is a physical entity. The average is a statistical descriptor, not a measurand; it’s not a physical entity.
As you say, interpolated VALUES are not data, they are outputs. They can’t be both outputs and inputs. If they are not inputs then they aren’t *data*.
I learned something like this over 50 years ago when taking measurements along a 100-mile high-voltage transmission line. We took measurements at gridded intervals along the line. From that DATA we could lay out the continuous EM field around the line. We did this so that complaints from people along the line about interference from the line could be properly evaluated to determine if further investigation was needed.
We could use the EM field we created from the DATA samples to interpolate values at a distance from the line (inverse square law, etc.). But those interpolated VALUES were not DATA. If the interpolated VALUES indicated a possible problem we had to go out and MEASURE the field at the location in question. Those measurements became DATA, not the interpolated VALUES.
Apart from it sounding more like extrapolation than interpolation, that’s it in a nutshell.
Data. Values. It seems more like a debate of definitions and semantics. As I’ve explained many times before, I’m not as concerned with the words used to describe concepts as I am with the concepts themselves. Interpolated values (or whatever word you prefer) are used for downstream analysis and decision making models all of the time.
It is also important to point out that many measurements (or “data”) use measurement models incorporating interpolation techniques unbeknownst to the user anyway. The point is we need to be careful about being too legalistic with terms here.
AAAAAARRRRGGGHHHH!
“Interpolated values (or whatever word you prefer) are used for downstream analysis and decision making models all of the time.”
So what? That still doesn’t make the interpolated values DATA! Without the DATA you have nothing to interpolate from!
“It is also important to point out that many measurements (or “data”) use measurement models incorporating interpolation techniques unbeknownst to the user anyway.”
Malarky! V = pi * R^2 * H is a measurement model. Exactly what interpolation techniques are used that are unknown to the user? The measurements carry an uncertainty interval – that is *NOT* interpolation. It’s the “best estimate” of the value of the measurand coupled with an uncertainty interval expressing the range of values that can be reasonably assigned to the measurand.
I suspect you are now trying to conflate the use of significant figures with “interpolation”. They are not the same thing, just like interpolated values are not DATA.
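For the record, GUM-style first-order propagation through that cylinder model looks like this (a sketch with made-up measurements; for V = πR²H the relative variances combine as (u_V/V)² = (2·u_R/R)² + (u_H/H)²):

```python
import math

R, u_R = 2.00, 0.01   # measured radius and its standard uncertainty, metres
H, u_H = 5.00, 0.02   # measured height and its standard uncertainty, metres

V = math.pi * R**2 * H

# First-order (GUM) propagation for V = pi * R^2 * H
rel_u_V = math.sqrt((2.0 * u_R / R) ** 2 + (u_H / H) ** 2)
u_V = V * rel_u_V

print(f"V = {V:.2f} +/- {u_V:.2f} m^3")   # ~62.83 +/- 0.68 m^3
```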
You simply do not understand metrology at all, not even after all the time and effort people have expended trying to teach you the basics.
Even something like the steam tables, which are laid out in steps of pressure and temperature, are based on the typical measurement resolution and uncertainty with which each can be measured.
“That’s not true at all. Interpolated data is used as an input into many analysis and decision making models in all kinds of science, engineering, medical, etc. fields.”
Those “guesses” are given a measurement uncertainty interval. Type A if possible, Type B if necessary. Climate science totally ignores that when they “infill”. They just show their guesses as 100% accurate with no measurement uncertainty!
“Anyway, the problem is that the there may not be enough upstream data to fill all grid cells. Either way something has to be done.”
Why does something have to be done? You are implying that unless you have a value in all grid cells that what you calculate for an average and variance is somehow wrong. If that’s the case then your data set is garbage anyway! A few missing entries should not materially affect the statistical descriptors!
In this you are your own worst enemy. You want to defend the GAT as proper statistical analysis and then turn around and say it’s no good if a few grid cells are missing!
“Mathematically this is equivalent to infilling the unfilled cells with the average of the filled cells. “
This is what you do when you take a sample of any kind! You leave out data since you either don’t have the population data available or you don’t want to use it all. So you take representative samples and calculate their average!
“The point is that mathematically you are infilling either way whether you realized it or not.”
No, you are *NOT*. You are finding an average value of the data that you have. That average value may not exist *anywhere* in either your sample or in the population! So you aren’t “infilling” anything! The average and variance are STATISTICAL DESCRIPTORS of a distribution – they are *NOT* measurement values and therefore are not proper values to use for any specific unknown measurement element in the population!
“And its pretty easy to show with a monte carlo simulation that local interpolation is superior to global interpolation generally. You can also show this in the real world by doing a data denial experiment on a real grid of data like say from UAH.”
This is just pure malarky! It’s actually saying that doing sampling to determine statistical descriptors of a distribution simply isn’t a correct method to use!
In essence it’s saying the GAT is garbage – which is what we’ve been trying to tell you!
I think you’re reading something into this that I didn’t.
Almost by definition, interpolating proportionally between the nearest values will give a “better” result than using the overall average.
There are various caveats about noisy data sets, non-linear data sets, discontinuities, outliers, and so on.
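A toy version of the data-denial test mentioned above (Python, made-up field, not UAH data) shows the effect: on a field with spatial structure, local interpolation beats global-mean infill, while on pure noise the two converge, which is the caveat.

```python
import random

random.seed(0)

# Toy 1-D "field": a smooth spatial gradient plus noise (purely illustrative).
n = 100
field = [20 + 0.1 * i + random.gauss(0, 0.5) for i in range(n)]

# Hide 20% of interior points, then reconstruct them two ways.
hidden = set(random.sample(range(1, n - 1), 20))
known = [i for i in range(n) if i not in hidden]
global_mean = sum(field[i] for i in known) / len(known)

def local_interp(i):
    lo = max(j for j in known if j < i)
    hi = min(j for j in known if j > i)
    w = (i - lo) / (hi - lo)
    return field[lo] * (1 - w) + field[hi] * w   # linear between nearest knowns

mse_local = sum((local_interp(i) - field[i]) ** 2 for i in hidden) / len(hidden)
mse_global = sum((global_mean - field[i]) ** 2 for i in hidden) / len(hidden)
print(f"local interpolation MSE: {mse_local:.3f}")
print(f"global-mean infill MSE:  {mse_global:.3f}")
```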
But you don’t need the “interpolated” values in order to calculate the statistical descriptors for the distribution. That does *NOT* make the missing data take on the average value.
Saying you *need* the interpolated values in order to calculate the average and standard deviation of a distribution *is* basically saying that statistical analysis using sampling is invalid since with sampling you absolutely have “missing data” in the sample. The population data that you don’t pick up in the sample most definitely does *not* take on the average value. Those “missing” population data points retain their values, you just don’t know what they are from your sample set.
Father forgive them, for they know not what they do.
The output of the analysis is never used as input to the analysis, nor are original data values overwritten and lost. The archival station data are all available from the data stewards who provided it in the first place. I’m glad we can agree that interpolation is an appropriate analytical technique.
If you are saying that the interpolated figures are not used as inputs to further steps, we totally agree.
He’s saying the output from a step is not used as the input for that step. It is however often used for a subsequent step.
For example, step X might infill grid cells using interpolation, while step X+1 would average the grid. Step X+1 produces a better result when step X uses a local strategy as opposed to a global strategy. Either way the result of the interpolation is used in the next step.
“Step X+1 produces a better result when step X uses a local strategy as opposed to a global strategy.”
Malarky! You are trying to say that a sample size of 1000 gives you a more precise average than one of 999.
The precision you are striving for is far beyond both the sampling error *and* the measurement error attached to the average.
If someone *is* assigning the average value to an empty cell then they need to learn how to do sampling of a distribution. Why not just make all cells equal to the average value, then you wouldn’t have a distribution at all that would need to be worried about, just a constant value for everyplace!
‘If you have a collection of points unevenly distributed across a geographic area, you cannot simply slap them into an average, as Heller does, and claim to have represented the entire region.’
But you can do whatever you want with ‘anomalies’, even though the underlying data isn’t stationary? Or you can ‘infill’ records for huge numbers of stations that haven’t provided actual readings in years / decades?
Examples of willful deceit? Alarmists need only look into a mirror.
Of course not, and no one says this.
You have to do something to contend with missing data – we don’t have complete records for every station in the network. You can use anomalies, as most orgs do, or you can just infill missing values with values from nearby stations, which USHCN did for a while. Anomalies are probably preferable, but infilling is perfectly valid.
‘You have to do something to contend with missing data – we don’t have complete records for every station in the network.’
You contend with missing data by acknowledging that you have missing data – making it up, either using anomalies, ‘as most orgs do’, or by infilling missing values with values from nearby stations, ‘which USHCN did for a while’, is fraudulent.
It’s not fraudulent whatsoever, in fact implementing a process to handle missing data is an explicit acknowledgement of the missing data. You just need to document the steps you took in your analysis. Ignoring the impact of missing data as Heller does and just “rolling with it” produces incorrect results.
Always on WUWT there is a chronic inability to comprehend the difference between the archival dataset and the analyses produced using the data. Temperature indexes are analytical products. Heller is producing an analytical product (just using terrible methodology), NASA is producing an analytical product. Indeed there is no other way to produce a global or regional temperature estimate, because, as noted above, the planet is not completely blanketed with temperature sensors that all have perfectly uninterrupted records with no systematic biases or errors. We are trying to represent a continuous geographical area with point-level estimates whose distribution is transient in time and space.
“You just need to document the steps you took in your analysis. Ignoring the impact of missing data as Heller does and just “rolling with it” produces incorrect results.”
Pure bullcrap! The missing data will not impact the global average at all!
If a sample of 800 temperatures gives you a different mean and standard deviation than one of 1000 temperatures (put in whatever sample sizes you want), then you have either a flawed sampling methodology or a data set whose statistical descriptors can’t be trusted in the first place.
You are pushing one more climate science meme that is a myth – that long station records are needed to evaluate the global temp, so infilling of data has to be done.
If I pull 8000 samples off a production line instead of 10000 and I say the 8000 sample size isn’t sufficient to find the average and standard deviation of the product, quality control managers would laugh me off the premises.
Yet that is *exactly* what you are trying to defend!
I’ve seen others explain this to you before, I’m certain I’ve explained this to you before, so it is peculiar to see you pretending like you don’t know better. I think you don’t actually believe the silly things you say in these threads, I suspect you’re just playing the part of contrarian and enjoy trying to get a rise out of people. It doesn’t seem like a very fulfilling way to spend your time, playing the fool.
If I average together two time series of different lengths, each with a different mean (for temperature records, this results from, e.g., differences in altitude or latitude), I will impart a trend into the resulting series that does not reflect the behavior of either series. This is what Heller does.
To avoid this, you either have to place both series onto a common zero, or you need to ensure that both records are the same length. Note you don’t need to do both, one or the other is fine. Your claim that I am insisting on the latter is a flagrant misrepresentation of my position, and I’ll look forward to you correcting yourself on that point.
Again, as I’ve said before, ask Heller why he never presents the results of his naive averaging scheme for the globe. It’s always US-only.
“If I average together two time series of different lengths, each with a different mean (for temperature records, this results from, e.g., differences in altitude or latitude), I will impart a trend into the resulting series that does not reflect the behavior of either series. This is what Heller does.”
Do you realize what you are saying? Even if they have the *same* length your average will not reflect the behavior of either series!
That’s why combining SH temps in winter with NH temps in summer creates a multi-modal distribution. And the average describes NEITHER!
“To avoid this, you either have to place both series onto a common zero, or you need to ensure that both records are the same length.”
You are only showing that you are making unstated assumptions to make things easy. Shifting time series along the x-axis (i.e. a common zero) doesn’t change either distribution. If they have different means they will still have different means. If you are implying you can scale one so it has the same mean as the other then you are distorting the relationship between them.
Averaging means of records of different lengths is just fine over a common interval. What are you averaging? Daily means? Weekly means? Monthly means? Annual means? What you *do* is compare them over the common interval! It makes no sense to try and average a record from 1920-1930 for one station with one from 1990-2000 for another station. They have no commonality. Nothing you can do can change that.
Face it, your entire logic is that climate science fortune tellers can accurately guess at things they don’t actually know. You are assuming you can just substitute something else for what you don’t know – without actually knowing enough to make that a valid assumption.
And, again, you are trying to get a GLOBAL average. A few extra sample elements for one year and few less sample elements for another year does *NOT* make the average of the two unable to be compared.
In fact it will, the trend will reflect the mean of the trends of the individual series. And I’ve explained this to you and the other Gorman twin in painstaking detail in the past, with illustrated examples, so please dispense with the game of feigned ignorance.
NO! If the trend is developed from the averages of two dissimilar distributions then it is just as meaningless as the averages are. The average of a multi-modal distribution is MEANINGLESS!
Average the heights of 100 Shetland ponies and 100 quarter horses and that average is MEANINGLESS. It will tell you nothing about either distribution of heights.
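A quick sketch (Python, made-up heights) of why the mean of a bimodal distribution describes neither population:

```python
import random, statistics

random.seed(1)
ponies = [random.gauss(10, 0.5) for _ in range(100)]   # made-up heights, hands
horses = [random.gauss(15, 0.5) for _ in range(100)]

combined = ponies + horses
m = statistics.mean(combined)
print(f"combined mean: {m:.2f} hands")   # ~12.5, describes neither group

# Almost no animal is actually near the combined mean:
near = sum(1 for h in combined if abs(h - m) < 1.0)
print(f"animals within 1 hand of the mean: {near} of {len(combined)}")  # ~0
```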
You haven’t shown ANYTHING! You can’t even show what the variance of your data is! You have no idea if it is Gaussian or right/left skewed!
You, and the rest of climate science just assume that everything is random, Gaussian, and amenable to only being described by an average!
You wouldn’t last 10 minutes in any engineering group I’ve been involved in. The first time you presented a study showing an average calculated out to the thousandths digit with no associated input data variance you’d be shown out of the room!
You do know the right answer is to just not average. Temperature data is a panel data set. You have both cross-sectional data at a point in time and time series data for each station. What NASA does makes little sense to me. There isn’t any reason to calculate an “average” temperature for the earth. You lose data when you average. Cross-sectional time-series analysis provides much better statistical techniques for addressing questions about climate. The unadjusted station data and adjusted data presented by NASA/NOAA, etc. are very different. I have no good explanation for why the adjustment process used almost always increases the slope of the temperature trend through time.
The adjustment process actually reduces the overall trend for the global mean, see here. You believe the opposite because you are constantly being lied to by people like Tony Heller.
You do need to do some kind of spatial interpolation to move from point level measurements to a continuous field, using gridded averages serves this need quite well.
If you don’t know the variance of the data or the propagated measurement uncertainty then you don’t know the trend anyway! You simply can’t tell if the adjustments are proper or not!
“You do need to do some kind of spatial interpolation to move from point level measurements to a continuous field, using gridded averages serves this need quite well.”
Malarky! I have measured the near EM field and far EM field of an antenna to lay out a continuous field without *any* interpolation being used as DATA. As old cocky tried to tell you, the interpolation is an OUTPUT, not an input. If it’s not an input then it isn’t data!
‘You just need to document the steps you took in your analysis. Ignoring the impact of missing data as Heller does and just “rolling with it” produces incorrect results.’
Since you’ve brought up Heller, here’s one of his recent videos on data tampering for two separate temperature series, one in Africa and the other in South America. As far as I can see, he hasn’t undertaken any analyses of the data, aside from showing the raw and ‘adjusted’ temperatures. Maybe you can document the steps that NASA took in their analyses, or are they just ‘rolling with it’?
https://realclimatescience.com/2024/01/climate-seance/#gsc.tab=0
You nailed it. +100
“You have to do something to contend with missing data – we don’t have complete records for every station in the network.”
Why? You are, in essence, saying that if the global temp sample only has 800 data elements instead of 1100 that the sample is impossible to use.
If you are finding a GLOBAL average, be it absolute or anomaly, then an insignificant difference in the number of samples is meaningless. The average of 800 samples is just as good as 1000 samples.
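For independent samples the standard error of the mean scales as 1/sqrt(n), so the 800-versus-1000 difference is easy to quantify:

```python
import math

# SEM ratio when the sample shrinks from 1000 to 800 (independence assumed):
print(math.sqrt(1000 / 800))   # ~1.118, i.e. the SEM widens by about 12%
```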
Anomalies that are smaller than the measurement uncertainty of the data elements used to obtain the anomalies are useless in any case. They are guesses extracted from a cloudy crystal ball borrowed from a carnival fortune teller.
“Anomalies are probably preferable, but infilling is perfectly valid.”
Infilling does nothing but make a statistician “feel” better. The missing data will have no impact on the GLOBAL AVERAGE!
Again, I think you’re trolling, but to try to act in good faith I’ll offer a genuine response.
When faced with the challenge of compiling a temperature index from a transient station network, the only valid approaches are to place the series onto a common zero (anomalies) or to ensure the records span the same interval (infilling).
Heller chooses neither. He just ignores the biases imparted by the transient nature and uneven distribution of the network and lets that influence his analysis, with no acknowledgement of it. He then builds additional analyses on top of his flawed approach, compounding the issue.
How do you combine anomalies from the SH with ones from the NH when the anomalies have different variances? Why do you continue to just ignore the fact that the anomalies inherit the variances of the absolute temperatures?
Why is it important to infill records in a data set that already has thousands of elements? It won’t affect the SEM, only the measurement uncertainty. Climate science just ignores measurement uncertainties anyway so why is it necessary to infill?
Again, in a data set with thousands of records what difference does it make if you drop some? The data loss is insignificant. If it *is* significant then your sampling methodology stinks and your statistical descriptors will also stink!
Think about it! If I have 1000 stations measuring temperature this year and a totally different 1000 stations next year exactly what difference does it make in the average of each? Why can’t you compare the two values?
You are kind of caught in a catch-22 here. It’ll be interesting to see your answer.
I think you’re trying to pivot into your arguing your weird opinions on averaging, and I’m not interested in going down that rabbit hole.
Because if you average two series and one series ends before the other, or begins after, the mean of the interval of no-overlap will assume the value of the existing series, and this can introduce spurious trends into the mean. You don’t want to do this, so either the two series need to have the same zero to begin with (anomalies), or you need two series of exactly the same length (infilling). Infilling introduces a host of complications, which is why it’s usually avoided in favor of using the anomaly.
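A minimal numerical sketch of that failure mode (Python, made-up station values): two perfectly flat records at different baselines, one starting later, produce a step in the naive average that anomalies remove.

```python
# Two flat (trendless) stations at different elevations; B starts 10 years later.
station_a = [10.0] * 20                  # cooler site, years 0-19
station_b = [None] * 10 + [15.0] * 10    # warmer site, years 10-19

def naive_average(a_series, b_series):
    out = []
    for a, b in zip(a_series, b_series):
        vals = [v for v in (a, b) if v is not None]
        out.append(sum(vals) / len(vals))
    return out

print(naive_average(station_a, station_b))
# Years 0-9 average 10.0; years 10-19 average 12.5: a 2.5-degree "warming"
# step appears although neither station warmed at all.

# Put both on a common zero (anomalies from each station's own mean) instead:
anom_a = [v - 10.0 for v in station_a]
anom_b = [None if v is None else v - 15.0 for v in station_b]
print(naive_average(anom_a, anom_b))     # all zeros: no spurious trend
```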
“I think you’re trying to pivot into your arguing your weird opinions on averaging, and I’m not interested in going down that rabbit hole.”
In other words you don’t have an answer to the question so you are using the argumentative fallacy of Argument by Dismissal to avoid answering.
I didn’t figure you’d have an answer. I wasn’t disappointed.
I’ll ask again: “How do you combine anomalies from the SH with ones from the NH when the anomalies have different variances?”
“the mean of the interval of no-overlap will assume the value of the existing series”
Do you know how idiotic this sounds? How do you calculate a “mean” if you don’t have the data available to calculate it? If you don’t have the data to calculate it then you present a picture with a hole in it – and you explain why the hole!
In essence, you are still arguing that sampling is OK when you want it to be OK and it’s not OK when you don’t want it to be OK. Having your cake and eating it too!
How are the offsets calculated if the stations weren’t all operating during a common period?
Least squares, as used by BEST and TempLS. Here is a starting point calc. More ponderous version here.
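For anyone curious, here is a toy alternating-least-squares sketch of that idea (Python, made-up stations; illustrative only, not the actual BEST or TempLS code). Three stations with different baseline offsets and staggered records are reconciled by minimizing the sum of (y[i][t] − m[t] − b[i])² over the observed pairs; only a chain of pairwise overlaps is needed, not a common period.

```python
# Common signal seen by three stations with different baseline offsets
# and staggered records (all values made up for illustration).
T = 10
signal = [0.1 * t for t in range(T)]
offsets = [10.0, 15.0, 20.0]
cover = [range(0, 6), range(3, 9), range(6, 10)]   # only pairwise overlaps
y = [[signal[t] + offsets[i] if t in cover[i] else None for t in range(T)]
     for i in range(3)]

b = [0.0, 0.0, 0.0]
for _ in range(200):  # alternating least squares on m(t) and b[i]
    m = []
    for t in range(T):
        vals = [y[i][t] - b[i] for i in range(3) if y[i][t] is not None]
        m.append(sum(vals) / len(vals))
    for i in range(3):
        resid = [y[i][t] - m[t] for t in range(T) if y[i][t] is not None]
        b[i] = sum(resid) / len(resid)

# m is recovered only up to an overall constant, but its differences
# reproduce the common signal despite no period shared by all stations:
print([round(m[t] - m[0], 3) for t in range(T)])   # ~ [0.0, 0.1, ..., 0.9]
```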
omfg
Tony Heller makes you look like a 5-year-old when it comes to technical competence.
Oh no wait.. YOU make yourself look like a 5-year-old.
Got any proof that Tony is extremely dishonest? That’s an extreme accusation.
Yes. Anthony Watts banned him from this website, for repeated dishonesty:
AlanJ, saying someone is dishonest…
Irony, satire.. whatever. !
He just doesn’t like the facts, because he has no argument against them.
No doubt Tony makes a few errors too- but mostly I think he nails it.
That’s UAH data. Roy Spencer and John Christy. Are you accusing them of falsifying their record?
No, I accuse their record of being useless. The measurement uncertainty associated with the data is wider than the anomalies they calculate.
They use the same meme common in climate science – all measurement uncertainty is random, Gaussian, and cancels.
Leaving the standard deviation of the sample means, i.e. the precision of the calculation of the mean, as the uncertainty of the data.
When climate science starts showing histograms of their data and providing actual valid statistical descriptors of that data, things like min, max, quartile values, etc, then maybe we can begin to get down to brass tacks.
Winter temps have wider variances than summer temps, meaning their distributions are not iid. Yet climate science apparently has no problem just “averaging” NH temps with SH temps with no care for how the variances impact that average. And this carries through to the anomalies. If the variances of the base temps are wide then the variances of the anomalies will be wide as well.
They confuse what you’re talking about with simplistic systematic error.
All three metrology experts whose books/papers I have say you cannot identify systematic bias in measurements using statistical analysis.
Too many statisticians see the average as a “measurement of a measurand”.
The GUM contradicts this view by saying that a measurement has to do with a physical unit. An average is not a physical entity – it is a statistical descriptor. That’s why the average of a multi-modal distribution is *NOT* a measurand. It isn’t even a valid statistical descriptor there, and it is not a sufficient descriptor for a Gaussian distribution either. You also need to know the variance! But climate science ignores variance! And none of the statisticians on here seem to have a problem with that!
“all measurement uncertainty is random, Gaussian, and cancels”
Wow, so that’s what they do? Must be nice to think that holds for one’s research.
Here’s a thought – if it’s random, then it doesn’t follow that it will cancel. Rather than a simple coin toss the temperature uncertainty is cumulative, so may build a positive or negative bias. It does not cancel out.
If your measurements give you a TRUE Gaussian distribution then you should have as many pluses as minuses. Total cancellation should happen. The issue is that if there is *any* systematic uncertainty then *it* doesn’t cancel, it accumulates.
If you have a measurement of 20mm +/- 1mm, then that uncertainty interval of 1mm includes both random error and systematic bias. The random error could be things like a reading error or an environmental change. The systematic bias adds (or subtracts) from each measurement. Thus u_total = u_random + u_systematic.
If you don’t know the values of u_random and u_systematic then how do you determine what cancels and what doesn’t?
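A small Monte Carlo sketch (Python, made-up values) of that distinction: averaging shrinks the random component as 1/sqrt(n), but the mean converges to the biased value, so the systematic part never cancels.

```python
import random, statistics

random.seed(2)
TRUE = 20.0      # true value of the measurand (hypothetical)
BIAS = 0.5       # unknown systematic bias
U_RAND = 1.0     # standard deviation of the random error

def measure():
    return TRUE + BIAS + random.gauss(0, U_RAND)

for n in (10, 100, 10000):
    xs = [measure() for _ in range(n)]
    mean = statistics.mean(xs)
    sem = statistics.stdev(xs) / n ** 0.5
    print(f"n={n:6d}  mean={mean:7.3f}  SEM={sem:.4f}  error={mean - TRUE:+.3f}")

# The SEM shrinks as 1/sqrt(n), but the mean converges to TRUE + BIAS:
# averaging cancels only the random component; the systematic part stays.
```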
Climate science just assumes that it all cancels totally and the “estimated mean” is the “true value”. The term “estimated mean” doesn’t really sink in.
It’s the same thing medical science used to do. The SEM can be lowered by collecting more samples. Medical science studies would thus collect lots of samples and put forth the SEM as the uncertainty of the average of the samples – regardless of the actual quality and accuracy of the individual samples.
But medical science has been forced over time and lawsuits to stop using the SEM as a measurement of accuracy. All the SEM does is measure sampling error, it does *not* determine the accuracy of the mean.
It is the measurement uncertainty that is a measure of the accuracy of an “estimated” mean. In fact, some researchers are beginning to move away from “estimated means” as a stated value and just giving the measurement uncertainty interval. No more 20C +/- 1C, just 19C-21C.
Would that climate science do the same!
Just saying that Heller deconstructs many such charts- not those necessarily but most of what the alarmists worship as God given truth.
The myth that never dies.
https://www.carbonbrief.org/explainer-how-data-adjustments-affect-global-temperature-records/
Tony Heller’s information and understanding is so egregiously wrong it was the impetus that eventually got him banned from this site.
I see you’re unwilling to respond to my challenge to deconstruct Heller yourself. You base your conclusion on what others have said – though many other people like Heller. Much of his understanding is based on what he’s found in old newspapers and peer reviewed publications. I doubt that you have any clue about Heller. After all, hating Tony Heller is part of the climate alarmist mantra.
That FARCE from Zeke is one of the most egregious LIES ever pushed.
And you AGW idiots lap it up like it was chocolate milk.
BOTH series are TOTALLY FAKED and maladjusted.
Claiming one set of FAKERY is different from another.
And you are SO DUMB that you fall for it.
“Global Raw” (in the chart): Now, that’s funny and deceptive. That data is not raw, unmodified data. It’s already been “massaged”, otherwise it wouldn’t look like a bogus, bastardized Hockey Stick chart showing the Early Twentieth Century as being cooler than the present day.
You are comparing one lie with another lie with that chart.
“The data must be fraudulent because it defies my expectations.”
Ask yourself if you are really being objective, here.
I learned a new phrase.
Although I still prefer BS
to describe climate junk science
The Gish gallop is a rhetorical technique in which a person in a debate attempts to overwhelm their opponent by providing an excessive number of arguments with no regard for the accuracy or strength of those arguments.
Yet you keep up your gormless and inane gish-gallop in every second post you make.
Not realising that the mirror sees it even if you don’t.
It’s quite hilarious, you know..
“It fully supports the NOAA statement that 2023 is the warmest year on record globally”
How did NOAA determine the “global” temperature before the satellite era beginning in 1979?
The fact is NOAA only knows what the global temperature is in the satellite era, 1979 to the present day. Claiming to know a global temperature before the satellite era is pure speculation. There were no global measurements of temperature before the satellite era.
So who figures out what the global temperature was way back when, before satellites were available, and how did they get these numbers?
Same way every other group that provides one does. HadCRUT, GISS, Berkeley, JMA, etc. Why don’t you try reading their methods, which are published in peer reviewed journals, and see if you can spot the flaws? No one else has so far, despite all the noise.
Why doesn’t climate science ever report on the variances of the data sets you mention?
They do.
Here’s the HadCRUT data, for one example. They report their upper and lower confidence intervals to 97.5% every month.
NOAA always report the uncertainty in their global update report.
UAH don’t though…?
Those confidence intervals are basically expanded SEM intervals. In other words they are a measure of the sampling error and are *NOT* an indicator of the accuracy of the mean.
The ACCURACY of the mean, as laid out in the GUM, is the dispersion of the values that can be reasonably assigned to the measurand. That is either the standard deviation of the stated values of the individual elements or the propagated measurement uncertainties from the individual elements.
You are still basically just parroting the climate science meme that all measurement uncertainty is random, Gaussian, and cancels.
It’s garbage from one end to the other.
HadCrud is made from surface sites a large proportion of which are provably UNFIT-FOR-PURPOSE even by their own metric. !!
But, hey.. FAKED, URBAN, MANIPULATED never-was-data… is all you have.
How do you spot a “flaw” in something that a person just made up?
Don’t we say these days that that is “their truth”, and can’t be challenged?
The flaws have been well known; the fact that you still don’t know indicates you’re not keeping up.
Hint: 1850-1900 global coverage was very sparse, with incomplete recording of data by location.
The ocean waters were barely covered at all during that time frame.
Hint: they all know that, which is why they all deal with that issue and all come up with the same results.
No, they come up with the same result because that is what they want to do.
They just MAKE-UP data…. and have admitted as such..
Phil Jones made the temperature profile up out of whole cloth and all these other people are just following his lead.
More lies about Jones.
They are ALL based on sites with massive urban/airport warming.
They are all based on the same twisted fabrications from NCAR.
Berkeley use all the absolutely WORST data they can get, have no idea of the site quality and can manufacture whatever they want.. and do.
They all use the same nonsense routines that spread urban warming to rural sites and give badly affected urban sites massive weighting..
There is absolutely no possibility of any of them giving a true representation of global temperatures over time.
“Same way every other group that provides one does.”
All those groups depend on one guy’s interpretation of the temperature record, Phil Jones. And Phil Jones doesn’t want to disclose how he reached his Hockey Stick conclusions on temperature for fear someone will find something wrong with his data.
So that’s the very shaky foundation on which climate change alarmists hang their hat.
Heck, so you mean a few tree ring thermometers ain’t enough to determine global temperature? 🙂
Tree ring thermometers ain’t enough to determine the temperature of the tree itself.
Oddly enough, little or none of the Global Warming seems to turn up in actual, measured temperature series.
Considering that all the global surface records are based on actual measured temperatures, this seems like an odd take on it.
Adjusted temperatures
And it’s a huge planet. When I’m told there are 100,000,000 thermometers out there covering the ENTIRE planet, I might begin to believe it.
It doesn’t really matter how many thermometers you have; temperature is an intensive property, and an average temperature is meaningless.
Enthalpy *is* an extensive property which depends on temperature, humidity, and pressure. We’ve had the capability of calculating the enthalpy for temperature measurement stations for almost 40 years – but climate science refuses to use it. (UAH is not capable of measuring enthalpy because it can’t measure either pressure or humidity.)
Think Las Vegas and Miami. They can have the same median daily temperature value while having vastly different climates. Average the median values for both and you just get back the median value – and are unable to identify the different climates for either.
Temperature is a piss poor proxy for enthalpy and climate. Yet climate science clings to it like a drowning man clings to a log. Why?
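A standard psychrometric sketch (Python; Magnus approximation for saturation vapor pressure, made-up humidity values) makes the point: the same air temperature can correspond to very different heat content.

```python
import math

def saturation_vapor_pressure_hpa(t_c):
    # Magnus approximation; adequate for illustration.
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def moist_air_enthalpy(t_c, rh, p_hpa=1013.25):
    # Specific enthalpy of moist air, kJ per kg of dry air:
    #   h = 1.006*T + w*(2501 + 1.86*T)
    pv = rh * saturation_vapor_pressure_hpa(t_c)
    w = 0.622 * pv / (p_hpa - pv)           # humidity ratio, kg/kg
    return 1.006 * t_c + w * (2501 + 1.86 * t_c)

# Same 30 C reading, very different heat content (made-up RH values):
print(f"desert air, 30 C, 15% RH:  {moist_air_enthalpy(30, 0.15):.0f} kJ/kg")  # ~40
print(f"coastal air, 30 C, 80% RH: {moist_air_enthalpy(30, 0.80):.0f} kJ/kg")  # ~85
```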
Some need to be adjusted. Look above: how many people are complaining about UHI?
Would you prefer that wasn’t adjusted for?
It’s a perfectly reasonable adjustment to make.
Likewise time-of-observation bias that dogged early US records.
Etc, etc…
So you now admit that basically ALL the warming comes from “adjustments™” and urban/airport warming
This has been known for quite a long time.
Try not to remain ignorant. !
TOBs adjustments have been shown to be an absolute farce…
Just another faked statistical joke.
“Would you prefer that wasn’t adjusted for?”
You don’t adjust for UHI by adjusting past temperature downwards, idiot !!
So not “actual measured temperatures” after all?
Adjusted.
Any other “truths” you wish to lay on us?
Not just badly and fraudulently agenda maladjusted…
… but highly affected by urban growth, airport expansion and many other local things
There is basically zero probability of them being even a remotely accurate measurement of global temperatures over time.
As with most of climate science, this graph is pretty much meaningless. The anomalies are *NOT* calculated from daily or monthly averages, they are developed from median values of base multi-modal distributions. Therefore the “averages” are meaningless. They are like the average (median) height of 100 Shetland ponies and 100 quarter-horses – exactly what significance does that average (median) have?
What is the variance of the data sets used to calculate these anomalies? Without that how do you judge the uncertainty of the anomalies?
But those charts are so nice to look at- so sciency- how could anyone doubt them? You shouldn’t ask so many questions. It’s not polite. 🙂
Any academic or burro-crat seeking funding isn’t going to be so impolite.
A piddlingly short period out of the COLDEST period in 10,000 years.
And with a very major El Nino event.
Stop your idiotic chicken-little panic.
Most of the last 10,000 years has been far warmer than now.
Urban Heat is a vastly bigger problem than anyone imagines…
[Urban Heat Island Effect – UHIE]
Picture your city, creating all the warm/hot air that it does. We know all the causes and reasons.
Recall the Simpsons episode where, for whatever dumbness Homer perpetrated, The Authorities saw fit to place a huuuuge glass hemispherical dome over the entire city of Springfield. The place was put into Total Isolation, they’d had enough.
Take that dome as your Heat Island or, in the vernacular, Heat Dome
In the real world it can not be that lovely neat hemispherical dome shape.
Because, by definition, the UHIE will be raging at its worst under a clear blue sky and thus, by definition again, under an anticyclonic (high-pressure) weather regime/system.
This means that very cold, dry, dense air is descending onto the city, warming as it falls and hugely exacerbating the UHIE, if not being its main cause.
But descending air will squash your lovely hemispherical dome into a huge, nearly flat pancake, centred on the affected city.
That pancake will comprise hot dry air created by the city itself but also the adiabatically heated air that fell out of the sky from above the city.
The significant point being that it will all be excruciatingly dry air.
And dry air, no matter how hot it is, will have little inclination to ‘rise’ as your kindergarten teacher told you.
For any air mass to become buoyant, it must contain some water vapour and that is spectacularly lacking in your urban heat dome/pancake
And all the while, even if it wanted to rise, ever more dry dense air is falling down on it anyway – it has nowhere to go but outwards, flowing horizontally away from the city.
There’s The Question – just how far from the city and out into the surrounding countryside does that pancake of hot dry air extend?
UK wise, can anyone seriously expect the UHIE to suddenly/completely stop when the air ‘sees’ that the 30mph speed limit has lifted to 60mph – as it does when you leave the suburbs and drive out into the countryside?
really?
You know me, it gets even worse because: Once out into The Countryside, the observant motorist/driver/visitor will see that ‘most all the farmers have ever so generously ‘painted their fields black‘ – do remember, we’re under clear blue skies at this juncture.
Global sea-surface temperatures also smashed record warmest temperatures in 2023. How did urban heat influence that?
They were able to measure sea temperatures over the entire Globe at every instant in 1850?
Here’s the paper with their methods plainly set out.
Have a browse through it then submit your rebuttal to the appropriate journal.
In any case, none of this explains how urban heat is warming the oceans.
Thats not an answer to Graeme’s question.
😂
Not only that, they were not global at all, as their ocean temperature data are nearly nonexistent.
He is in thrall to the climate change scam.
I think you’ll find that the Royal Navy was pretty widely dispersed in the mid-1800s.
BS !!!!
Making stupid claims like that shows you are getting more and more ignorant.
Now.. show us where all these measurements were made. !!
It’s in the paper. Get someone to read it to you.
ie, fungal has NOT READ IT !!
Now, answer the question.. and stop running around like a headless chook !
Show us where measurements were made in the period around 1850..
You must know, otherwise you wouldn’t be making such gormless and moronic statements.
Of course, the Royal Navy was using calibrated platinum resistance thermometers in the Nineteenth Century, not crude, soft-glass mercury thermometers dipped into canvas buckets.
No, they used buckets. You didn’t read the paper, did you?
SHOW US WHERE MEASUREMENTS WERE MADE.
We are waiting !!
I read about that bucket stuff “data” years ago, and learned it was poor data, as the buckets and methods were highly variable and thus useless.
The many problems with shipboard readings:
1) The sailor had to throw the bucket overboard and then wait for it to sink to a set depth. (The sailor couldn’t see the bucket, so he had to estimate how deep it was. He couldn’t use how much rope had played out, because the ship’s motion meant the bucket didn’t sink straight down. The sailor also had to run down the length of the ship at the same rate as the ship was moving forward in order to allow the bucket to sink.) Also, if the sailor was in a hurry, or the ship’s deck was busy, there is no guarantee the bucket actually reached the proper depth.
2) It was an open top bucket, so the water in the bucket was free to mix with the surrounding water, both on the way down and the way up.
3) The thermometer was placed in the bucket for long enough to reach equilibrium. During that time, the bucket was exposed to both wind and sun. How much did that affect the temperature of the water? There is no way to know.
4) Originally, the samples were taken with canvas buckets, then wooden ones, then finally metal ones. Each of these has different sink rates and thermal properties. Which type of bucket was used for each measurement? That was not recorded.
5) As ships evolved over time, things like how fast the sailor had to run to keep the bucket in one spot, plus the speed of the wind over the bucket changed.
6) As ships changed from wind to coal and then oil power, the method of sampling changed from buckets to thermometers on the cooling water intake pipe. Because the location of the intake pipe is fixed, the depth at which the sample was taken changed as the draft of the ship changed.
All of these factors require adjustments to the data. None of these factors are known well enough to do adequate adjustments. Many of these factors are completely unknown.
Using buckets, so still subject to cooling by evaporation. Thanks for confirming that.
Oh dearie me.. buckets are such a precise measuring device.
You really are the most moronic of idiots, aren’t you fungal !!
In other words, they were using inadequate equipment and haphazard at best methods.
But it’s all good so long as the data supports what you want to believe.
But mainly along trade routes to protect colonies and shipping. Much like our Navy protecting shipping and trade today!
Most trade routes tried to move with the ocean currents.
For the N. Atlantic, going from N. America to Europe, ships could save much time by hitching a ride on the Gulf Stream. (For those who don’t know, the location of the Gulf Stream is not fixed. It meanders a bit from week to week.)
The ability to find the Gulf Stream improved over time, and once satellite data became available pretty much any ship that wanted to could catch a ride.
In conclusion, the percentage of ships that were able to find the Stream increased over time, and the Gulf Stream is several degrees warmer than the water outside the stream.
Built in warming bias, without any way to correct for it.
LOL, you are so clueless. Their measurement methods are all over the place and the sample size is trivial; 99% of the ocean was never sampled, and only right at the surface.
Are you sure you are not an out-of-date robot?
By the way that is a 146 page PDF with gobs of charts but the data is hard to see.
It is YOU who should show us the data from 1850-1900 in which you will realize their PDF is a pile of crap as it is sparse and very low resolution.
They were distributed mostly around the British colonies. However, you apparently have no idea how inadequate a few hundred ships are when it comes to trying to measure the temperature of the oceans.
Did you read the method they used in the linked paper?
I suspect not.
No, but you did, so I don’t need to, and increase the readership by 100%.
You’re not going to read something that contradicts your opinion – because it contradicts your opinion. Sums this place up nicely.
Why are you avoiding showing us all where measurements were made.
Is it that YOU DON’T KNOW. !!
You just accept the sparse data taken along trade routes using archaic methods, with a totally unknown error margin.
Nothing to do with “science” is it !!
If I read it, that would make me as gullible as you.
Pass!
Show us where SSTs were measured in the 1850s.
Otherwise all you have is your usual meaningless anti-science yapping and whining.
Show us where measurements were made.
We are STILL waiting. !
I’ve read the methods in your paper, and it confirms exactly my objections, namely that the Met Office has taken very old temporally-and spatially-sparse readings from uncalibrated LIG thermometers and “adjusted” them to confirm their narrative.
Just how stupid do you think we are?
Fine. Now submit your considered comments to the publishing journal and let’s all see global warming shown up for the scam it is!
Its already in the pdf you linked.
No need to publish anything,
You just don’t have the mental ability to understand the total lack of any science behind what they have done.
Try rebutting my point.
Oh, you evidently can’t.
Absolute rubbish. It does not set out any ‘methods’ at all – ‘methods’ would describe the procedures and standards set out in taking each set of data. All this paper does is describe what labels they put on other peoples data with not one scrap of information on the actual methodology or how good or reliable the data obtained by those methods are. It’s utterly useless.
Oh dear , fungal has crawled back under his rock, without showing where the data was measured.
Because he knows he cannot do so.
Such an empty sac of mindless emptiness.
Yep, I challenged him to rebut my point two days ago, and he ran away, the little coward.
I’m beginning to believe that either you are being deliberately dense, or you really don’t know how averages are calculated.
The graph is annual averages, not a continuous series. It does not suggest that anyone measured SSTs at every instant.
Yes it does.
Deliberately.
I already stated it was annual and it’s clearly marked on the chart. What on earth difference does it make?
They are annual data based on daily data. Read the paper!
Do any of you ‘skeptics’ ever actually check anything for yourselves?
You’re supposed to, you know? It’s kind of implied in the word ‘skeptic’.
Show us where the measurements were made.
Or are you TO SCARED to do that ???
Er, I think AlanJ was on your side, TFN.
😂
Do you think he knew?
You are the one pushing this non-data.
And you KNOW you cannot show where it was measured.
Don’t you see how stupidly un-scientific that makes you look.
No, you probably don’t… not having even the most basic understanding of science.
MORE FAKED …
Show us how they measured whole-of-ocean temperatures way back in 1850.
Even Jones at CRU said southern ocean temperatures were “mostly made up”
Inadmissible as evidence of anything.
Phil Jones bastardized the instrument-era temperature data and refused to show how he got there.
All the land temperature data from all over the world shows it was just as warm in the Early Twentieth Century as it is today, so Phil couldn’t use that data to make the past look cooler. So Phil turned to the sea surface temperatures, and since we have no accurate measurements of sea surface temperatures in 1850, Phil just made the temperatures up out of thin air, and his creation just happened to cool the past and make the present look like the hottest time in human history. Just what Phil wanted to see. Phil had his “hotter and hotter and hotter” Hockey Stick profile. The better to sell the Human-caused Climate Change scam.
This fraudulent activity has tainted the whole of climate science. It presents a dire false reality that too many people believe, to the detriment of all of us, because this dire false reality causes people to do things that do all of us harm.
Phil Jones has a lot to answer for. Him and his Climategate cronies. CO2 is not the problem. The climate change lies of Phil Jones and his cronies are the problem.
Completely false, of course. Phil Jones never collected sea temperature data.
I don’t see anyone suggesting that PJ actually collected the data he was using.
“ Phil turned to the sea surface temperatures and since we have no accurate measurements of sea surface temperatures in 1850, Phil just made the temperatures up out of thin air, and his creation just happened to cool the past and make the present look like the hottest time in human history”
OK, I suppose you could say that if he made them up, he didn’t collect them. But of course, he did neither. No SST data originated from PJ.
He didn’t collect them, he used what other people collected.
Actually, no he didn’t. He was responsible for CRUTEM, the land component of HADCRUT.
“Urban Heat is a vastly bigger problem than anyone imagines…”
Last summer- I was in a Walmart parking lot waiting for my wife- the car’s thermometer said it was 90. Went home, a few miles away, in a rural area- it was 80. Perhaps the effect on the entire planet is small, but many of the “official” measurements are in urban areas.
The absolute temperature has some UHI included
But CHANGE in the absolute temperature from UHI requires CHANGES in UHI
Not every city is growing
Detroit had its peak population in 1950 with 1,849,568 residents. As of July 1, 2022, the population has dropped to 632,464 residents, a decline of 65.8%.
Cleveland had its peak population in 1950 with 914,808 residents. As of July 1, 2022, the population has dropped to 367,991 residents, a decline of 59.77%.
Also, urban weather stations may be moved to a nearby airport. Does that increase or decrease UHI? NASA GISS thinks decrease.
And rural weather stations could be affected by UHI changes from nearby economic growth. A complicated subject.
“NASA GISS thinks decrease.”
So says citified scientists.
Very little evapotranspiration going on with the macadam/concrete/asphalt at an airport. Evapotranspiration cools the air. That’s one reason why rural temps are generally lower.
Also very few jet aircraft venting huge amounts of hot exhaust over the temperature sensor in downtown Detroit, I would expect.
Planes don’t fly over Detroit
People shoot at them
Poor dickie-boy thinks people could hit a plane from the ground with a handgun.
How much deeper does his ignorance run.
Only time and his comments will tell us.
But I suspect a lot, lot deeper. !… an abyss… Mariana style.
I note that GISS just “thinks” it is a decrease. No attempt to actually determine what is actually happening.
So, you are now admitting UHI is a big problem..
and that the surface data is basically worthless for gauging global temperature changes over time.
Maybe you will get there eventually…. that place called “REALITY”
Still a long way to go…. but you are showing little toddler steps occasionally
I doubt if UHI is a big problem
You are a big problem!
The surface data showed a lot of global cooling from 1940 to 1975 but were later revised to very little cooling.
There was no warming from 1940 to 1975, so the 2023 warm record almost certainly extends back to 1940.
“I doubt if UHI is a big problem”
Tell that to anyone living in a big city…
Poor dickie-boy
Still hanging onto his brain-washed ignorant lukewarmerisms.
Hilarious. 🙂
At least you have now admitted that the 1970s was a cold period compared to now and the 1940’s.
Another little toddler step.. Well done.
Now you just need to admit that most of the last 10,000 years was much warmer than now.
Open your brain-washed mind, and let that data in, little child
Adjusting the data to better support the climate change myth. How surprising.
Since the population is growing rapidly, it’s a given that almost all cities are growing.
Yes there are a few Democrat run cities that are failing and losing population.
The thing is, just because the population drops does not mean that the building, roads, parking lots, etc. get pulled up and replaced with grass and trees.
Very few stations got moved to airports, for the most part, airports were added after they were built.
Beyond that, development around airports usually starts almost as soon as the airport is built. Resulting in rapid UHI contamination at the airport. Furthermore, thermometers at airports are placed to provide runway data, which is what airplanes need for their operations. They are not placed to provide climate data, which is what you want to use them for.
100%. Not sure how anyone can argue that this is not the case.
Homer Simpson- I start laughing just seeing his name- the dimwit was a nuclear reactor operator!
If we use the same metric as climate enthusiasts then Homer was a nuclear scientist!
It is obvious that Next Year will always be the hottest.
2023 will be the warmest year until the next warmest year record is set. Maybe with the next big El Nino year
Warmest year in a very short period of slight increase out of the coldest period in 10,000 years
What’s not to like.!
out of the coldest period in 10,000 years
Here comes that lie again
Consistently wrong
Like a broken watch
Still in deep climate change DENIAL.
Poor dickie-boy. !
Little dickie-boy now DENYING the Little Ice Age.
WOW. !! DENIAL is strong with this one. !
It really is amazing how quickly reality is rejected by the worshipers of the climate change myth.
Yeah, if NASA Climate and NOAA have anything to do with it.
They do, after all, have to maintain that 0.14° rise in temperatures, even if they have to put their thumb on the scales somewhere.
If for some reason it isn’t (according to their data), it won’t get reported on at all…
Larry, lots of useful data, well presented …for us,
BUT
The man in the street (or on the Clapham Omnibus), struggles with an absolute chart & really doesn’t understand what an anomaly chart is showing (which is why the catastrophists use them).
If we are to explain the accurate data to the fence sitters, it must be presented simply ( there’s no time for a lecture on stats).
An important rule in marketing is …
‘You have 10 seconds, so keep it eye-catching, relevant & simple’ !!
I read the whole thing, and I fully understand what he is doing: Ignoring evidence, cherry picking data, special pleading. The entire article is an embarrassment..
Only embarrassment is your pretence that you understand. !
Ignoring all the evidence posted.
Why is that ?
Is your ignorance deliberate?
Ask the one question that serious scientists have to answer about any study in just about any discipline; “What is the quality of the underlying data?”
We know that poor methodology, sampling bias, poorly calibrated instruments, poor representation or sample size, contaminated data and missing data are all factors which lower the quality of data. All of these factors are present in the majority of the surface datasets, which lowers the quality of the data dramatically. Averaging or homogenising the data does not and cannot improve the quality of the data – indeed, it does the opposite: the flaws are then propagated across the whole of the dataset, lowering the data to the least reliable or worst quality of data.
Take Michael Mann’s paleo tree ring data – even if you swallow the initial hypothesis that tree rings can be a proxy for temperature, the quality of his data was appallingly bad. His data ranged from around 5 down to 0.2 on a scale of 10, but, because he propagated the errors through the whole dataset, the whole quality of his data was lowered to the lowest quality of individual data – overall 0.2, which is fit only for the rubbish bin.
Similarly the surface temperature datasets mix very low quality data with often fairly good data but the averaging/homogenisation system lowers the whole dataset to the level of garbage. It is not fit for purpose.
The hottest year on record? Where? Faux science trying to prove a faux claim. Keep beating the folks over the head with useless data which means absolutely nothing cuz there is nothing anyone can do about the weather(climate). Except use the data as a big stick to beat the sheeple into submitting more of their wages to faux causes pretending to have a solution for a problem which does not actually exist. But, hey, it keeps lots of folks employed spreading the BS one way or another. That grant money sure is sweet. Just sayin’.
The first 3 entries of Table 1 show clearly what nonsense that ‘average temperature’ is. The number in the first line is the arithmetic mean of the next two lines. This means that they give the same weight to both hemispheres, which is, physically speaking, utter nonsense. The North has much more landmass than the South. It means they give the same weight to an area of ocean, with the enormous heat capacity of water, as to an area of arid desert, with practically no heat capacity at all, on for instance the Tibetan plateau. Junk science.
No family in America consists of 3.13 people, but knowing the average family size, and knowing whether it is changing, is useful information to know.
It doesn’t matter whether all regions were the warmest on record this year or not – the average of all regions is still the warmest on record, and that is important information to know.
There’s heaps of information to avail ourselves of.
Most of it, like averaged temperatures, can be placed in the file labeled “Useless”.
All food contains an average of 2% insect and larvae parts, is this ‘important information to know’?
40% of women have hurled footwear at a man, on average. 4 year olds ask an average of 400 questions per day. 76.46% of all statistics are made up.
Are these examples of ‘important information to know’ because they are derived from global statistics, averaged across the whole population?
🙂
Whatever point you’re trying to make is very poorly framed. You’re either saying most statistics are useless, which we both know isn’t true, or you are saying you think there is no possible reason to want to know the statistics you’ve just cited, which is silly because whether information is going to be useful depends on context. If I am a food inspector who just discovered a batch of peanut butter at the Jif plant consisting of 12% bug parts, the aforementioned statistic might be a very helpful comparator indeed. Please try again, think carefully about what it is that you’re trying to say, and evaluate whether it is an intelligent thing to say before pressing the Post Comment button.
Statistics are a tool. They are descriptors of a distribution. As such, even introductory statistics texts teach that an average by itself is useless in describing a distribution. You need to know the variance even for a normal distribution in order to understand it.
In general, you simply cannot assume a normal distribution for measurements of different things. But climate science does.
If I discovered A (singular) jar at the Jif plant with 12% bug parts that is *NOT* an average and is not, therefore, a statistical descriptor. Even if you collected 1000 sample jars and checked for bug parts a 12% average would *not* describe the distribution of affected jars. You would also need to know the variance and if the distribution was Gaussian. You might just be finding the median in a multi-modal distribution.
This is the problem climate science has. Incomplete and incorrect use of the statistical tools and statistical descriptors.
Garbage in, garbage out
Whatever point you are trying to make .. it is absolute nonsense.
Garbled use of irrelevant analogies…. seriously !
Maybe, maybe not, but still WAY COOLER than most of the Holocene.
The warmer period were when human society prospered, and expanded.
The colder periods were a struggle for survival.
Be VERY GLAD for the slight warming since the LIA..
Welcome to the MODERN-DAY TEPID PERIOD
And his analysis, which shows a significant effect in some locations, finds only about a 0.03 C effect globally.
So still corrupted readings that are then averaged?
Average…yes. Corrupted…no. BTW…Why not post in Dr. Spencer’s blog informing him that you don’t approve of his analysis instead of insinuating it to me? Why not just post a message here on this article (and any other that speaks of temperature) informing Larry Hamlin (and any other article author) that you don’t approve of his analysis instead of insinuating it to me?
Corrupted implies deviations from the utmost precision in recorded values, indicating a measurement that has been influenced by an external source, be it small or substantial bias. Despite its skewed nature, this measurement is still factored into the daily average.
If your bar for acceptance is perfection then you are going to be dissatisfied with science in general. Which begs the question… why are you even here? Better yet… why use anything born out of science at all?
What I am talking about is very far from perfection, bdgwx; doesn’t even come close.
*deviates far from perfection.
So it’s not that it deviates from perfection; it’s that it deviates far from perfection?
How much of a deviation is “far”?
How do you know the temperatures Dr. Spencer used deviated “far” from perfection?
Because the averages aren’t representative of anything in the first place! A corrupted reading skewed by an artificial influence is just *another* cherry on top.
If I may observe here that science welcomes challenge and open debate.
Climate “science” is an abject failure in this regard, as its boosters generally claim it is all “settled”.
How does your comment relate to what Dr. Spencer has done?
Roy found significant warming at urban sites…
… but forgot that the homogenisation process smeared that warming through vast areas where it doesn’t belong.
Urban sites, despite representing a comparatively tiny surface area, make up a much larger proportion of the surface data fabrication.
Argumentative fallacy known as False Appeal to Authority. Walter has no responsibility for telling Spencer anything. BTW, do you think Spencer doesn’t read WUWT?
I think Spencer also reads his blog:
January 17, 2024 at 11:03 AM
roflmao.
Still pretending to yourself that surface data isn’t utterly tainted by urban, airport, site movement, infilling, data tampering and manipulation.
FACTS are not allowed into your little story, are they !!
Roy forgot that the homogenisation effects smear small local warming all over the place, so grossly underestimated the urban effect on the global fabrications.
In the UAH dataset which you yourself confirmed has an average sample height of 5000 metres. If the UHI signal can survive to be noticeable at that height, with an average cooling of 20°, diffusing as it rises, then what, exactly, would the signal be like at ground level, where it is generated?
I don’t know that the UHI signal does show up in the UAH TLT values.
If it doesn’t then where does it go? Out into the countryside?
So what dataset was Dr Spencer using which you quoted here? You say he calculated a 0.3° UHI signal but are completely unable to say whether it was a lower troposphere dataset or a surface dataset. You do realise that it makes just a slight difference, don’t you?
By the way, it was the UAH dataset that he derived the 0.3° signal from in the paper isolating UHI.
I didn’t say he calculated a 0.3 C signal. I said the global average is about 0.03 C using Dr. Spencer’s gridded UHI dataset referenced in the link Larry provided. You’ll see in that link that Dr. Spencer constructed that dataset using GHCN.
So he didn’t use the homogenisation and data fabrication processes that smear that urban warming all over the globe in the surface fabrication.
Thanks for pointing that out !!
But I already told you that was the case.
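For what it’s worth, the dilution arithmetic behind that exchange is easy to sketch. Below is a back-of-the-envelope example of how a roughly 0.3 °C signal at urban-affected grid cells can shrink to roughly 0.03 °C in an area-weighted global mean; the 10% area fraction is a purely illustrative assumption, not Dr. Spencer’s actual figure.

```python
# Illustrative only: how a local UHI signal dilutes in a global average.
# The 0.3 C figure comes from the thread above; the 10% area fraction
# is a hypothetical value chosen so the arithmetic reproduces ~0.03 C.
uhi_at_urban_cells = 0.3   # degrees C, warming at urban-affected grid cells
urban_area_fraction = 0.1  # hypothetical fraction of the averaging domain
uhi_elsewhere = 0.0        # assume no UHI signal in the remaining cells

global_mean_uhi = (urban_area_fraction * uhi_at_urban_cells
                   + (1 - urban_area_fraction) * uhi_elsewhere)
print(f"global mean UHI contribution: {global_mean_uhi:.2f} C")  # 0.03 C
```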
All I have to say here is that the more they keep banging the “hottest ever” drum year after year, the more they turn people off to the idea of CAGW. It’s the boy who cried wolf over and over again. Almost every year it’s “hottest year ever recorded,” “OMG we are all gonna die,” and “we are cooking the planet.” Eventually fewer and fewer will heed the boy’s cries of “wolf!” when no wolf ever materializes.
People see what’s around them, and they realize that life goes on as it always has. I’ve lived in central Maryland for most of my life, and since my childhood there’s been warmer winters, colder winters, some with lots of snow and some with practically no snow at all. Summer is pretty much always hot and sticky, and once in a blue moon summer temps will bump or exceed 100F in Baltimore and other heavily urbanized areas. Quite honestly the only thing I notice is that there were more snow storms in the 70’s than now. Most people likely see what I see – in other words, there isn’t much to see at all!
In a rational world, all of this temperature data would still be collected and published, but we wouldn’t have media, rent seekers and nut cases running around screaming it from the roof tops.
Then we wouldn’t have articles like this one attempting to show everyone that what the media has dubbed a ravening wolf is nothing more than a 3 pound tea cup Yorkie – if that!
👍👍👍👍👍👍👍
We had a temperature of all but 100F here in SE England a couple of years ago. It happened on a single day and was caused by the jet stream being briefly further north than usual and dragging hot air up from parts of the world that are usually warmer than we are because they’re further south.
That was a weather event and NOT climate change. It proved nothing!
Those two hot days were nevertheless used as evidence of CAGW by Alarmists.
People KNOW it is much warmer in cities as the cities grow.
Apart from that urban effect, and the constant yabbering in the media…
.. changes in temperature over the past 50 or so years would be TOTALLY UNNOTICED ..
and totally unnoticeable.
This is the wrong approach to discussing climate change. The science of climate change has been completely debunked, yet skeptics are still arguing about whether the temperature data is viewed correctly. It’s completely irrelevant once you realize the temperature changes are driven by natural forces.
“It is also shown, that the Earth-atmosphere system is in radiative equilibrium with a theoretical solar constant, and all global mean flux density components satisfy the theoretical expectations. The greenhouse effect predicted by the Arrhenius greenhouse theory is inconsistent with the existence of this radiative equilibrium. Hence, the CO2 greenhouse effect as used in the current global warming hypothesis is impossible”
https://scienceofclimatechange.org/wp-content/uploads/Miskolczi-2023-Greenhouse-Gas-Theory.pdf
This is what we need to be pushing. Climate science has the physics wrong. If humans are having an effect it has nothing to do with our CO2 emissions.
It’s the amount of radiation the earth puts out that is the proper measure, i.e. the integral of the fourth power of the temperature curve. Using the average temperature, especially through the nighttime exponential decay, simply doesn’t give the right answer.
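A minimal sketch of that point, assuming an illustrative sinusoidal day/night swing: because emitted radiation goes as T⁴ (Stefan–Boltzmann), raising the averaged temperature to the fourth power understates the actual mean emission.

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

# Illustrative diurnal cycle: 288 K mean with a 10 K day/night swing.
hours = np.linspace(0.0, 24.0, 1441)
T = 288.0 + 10.0 * np.sin(2.0 * np.pi * (hours - 9.0) / 24.0)  # kelvin

flux_of_mean = SIGMA * T.mean() ** 4    # sigma * (mean T)^4
mean_of_flux = (SIGMA * T ** 4).mean()  # mean of sigma * T^4

print(f"sigma*(mean T)^4 = {flux_of_mean:.2f} W/m^2")
print(f"mean(sigma*T^4)  = {mean_of_flux:.2f} W/m^2")
# mean_of_flux > flux_of_mean (Jensen's inequality): averaging the
# temperature first understates what the surface actually radiates.
```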
Very nice, very important. This message needs wide distribution.
The moisture content of the atmosphere varies considerably from region to region and season to season. Moist air contains more heat than dry air at equal temperature. Therefore averaging temperatures would seem not to be legitimate.
It’s why the same temp in Las Vegas and Miami reflect a different climate. Use temp as your metric and they both have the same climate!
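A rough sketch of that point, using the standard psychrometric approximation for moist-air enthalpy. The humidity ratios below are illustrative guesses for a desert versus a coastal city at the same thermometer reading, not measured values.

```python
def moist_air_enthalpy(T_c: float, w: float) -> float:
    """Approximate enthalpy of moist air in kJ per kg of dry air.

    T_c : air temperature in degrees C
    w   : humidity ratio (kg water vapour per kg dry air)
    Standard psychrometric approximation: sensible heat of the dry air
    plus the latent and sensible heat carried by the water vapour.
    """
    return 1.006 * T_c + w * (2501.0 + 1.86 * T_c)

# Same 35 C air temperature, very different water content (illustrative):
dry_desert = moist_air_enthalpy(35.0, 0.005)   # e.g. Las Vegas-like air
humid_coast = moist_air_enthalpy(35.0, 0.020)  # e.g. Miami-like air

print(f"dry desert air:    {dry_desert:.1f} kJ/kg dry air")   # ~48 kJ/kg
print(f"humid coastal air: {humid_coast:.1f} kJ/kg dry air")  # ~87 kJ/kg
# Same thermometer reading, yet roughly 80% more heat content in the
# humid air, so equal temperatures do not mean equal heat.
```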
Stokes,
As you very well know, old temperature records come from liquid-in-glass (LIG) thermometers, while newer records come from automatic weather station (AWS) sensors with various methods of recording. AWS sensors respond much faster to temperature changes. All such comparisons are apples to oranges.
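A minimal sketch of that apples-to-oranges problem, assuming illustrative (not official) sensor time constants: a short temperature spike that a fast-responding AWS probe records almost fully is heavily attenuated by a slow liquid-in-glass thermometer, so the two instruments report different daily maxima from the same air.

```python
import numpy as np

def first_order_response(T_true, dt, tau):
    """Simulate a sensor as a first-order lag with time constant tau (s)."""
    T_sensor = np.empty_like(T_true)
    T_sensor[0] = T_true[0]
    alpha = dt / (tau + dt)  # discrete relaxation toward the true value
    for i in range(1, len(T_true)):
        T_sensor[i] = T_sensor[i - 1] + alpha * (T_true[i] - T_sensor[i - 1])
    return T_sensor

dt = 1.0                       # s
t = np.arange(0.0, 600.0, dt)  # ten minutes of 1 Hz "true" air temperature
# A 2 C spike lasting roughly a minute on top of a 25 C background:
T_true = 25.0 + 2.0 * np.exp(-(((t - 300.0) / 30.0) ** 2))

fast = first_order_response(T_true, dt, tau=5.0)   # illustrative AWS-like probe
slow = first_order_response(T_true, dt, tau=80.0)  # illustrative LIG-like bulb

print(f"true max:        {T_true.max():.2f} C")
print(f"fast sensor max: {fast.max():.2f} C")  # captures nearly the full spike
print(f"slow sensor max: {slow.max():.2f} C")  # spike heavily attenuated
```

The time constants are hypothetical, but the qualitative result holds for any fast/slow pairing: switching instrument types changes the recorded extremes even when the air itself behaves identically.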