
Phys.org recently published “Global warming amplifies extreme day-to-day temperature swings, study shows,” reporting on a new study in Nature Climate Change which claims that human-caused warming is intensifying rapid day-to-day temperature fluctuations, in the process creating a new category of climate hazard. This is false. The underlying study not only relies almost entirely on notably flawed climate-model assumptions, but astonishingly also ignores well-documented, real-world factors that directly affect short-term temperature variability, such as the urban heat island effect (UHI) and temperature-station siting biases. These omissions fatally undermine the study’s conclusions.
Phys.org’s press release asserts that “extreme day-to-day temperature changes have become more frequent and intense across low- to mid-latitude regions,” with researchers’ “optimal fingerprinting” methods confirming greenhouse gases as the primary cause. It further claims this daily volatility creates “a climate roller coaster” harmful to public health. The underlying Nature Climate Change study echoes this framing, presenting global maps of amplified temperature swings while never addressing the integrity, or possible contamination, of the surface data employed.
This is the study’s central flaw: UHI contamination artificially raises nighttime minimum temperatures, causing next-day temperatures to begin from an already elevated baseline. Artificial surfaces (pavement, buildings, vehicles, heat-retaining infrastructure) release stored heat overnight, pushing lows upward and exaggerating the apparent magnitude of day-to-day variations. Yet the term “urban heat island,” or the abbreviation UHI, does not appear even once in the Nature Climate Change publication or the press release. Nor does the paper mention temperature-station placement, metadata quality, microsite quality-control compliance, or observational uncertainty of any kind.
These omissions are inexcusable because, as the linked Heartland Institute report Corrupted Climate Stations: The Official U.S. Temperature Record Remains Fatally Flawed demonstrates, approximately 96% of NOAA climate stations fail to meet the agency’s own siting requirements and are corrupted by localized heat sources such as asphalt, rooftops, HVAC exhaust, machinery, or reflective surfaces. Heartland’s report shows that compliant stations, those free of artificial heat contamination, exhibit about half the warming trend found in the corrupted network. See the figure below, and note the difference between the blue line (the uncorrupted stations) and the orange and red lines.

These findings, grounded in NOAA’s own siting standards, confirm that station bias, not climate physics, drives much of the exaggerated warming in the U.S. record and, by extension, much of the world’s.
If the observational foundation is distorted, the derived conclusions—such as “increasing day-to-day temperature volatility”—are also distorted.
Further, the Nature Climate Change authors use global reanalyses and model-based “optimal fingerprinting” to attribute these amplified swings to greenhouse gases. But a model in which biased data serves as a foundation will simply reinforce or exacerbate those biases. None of the physical mechanisms the authors propose—soil moisture changes, pressure variability, drought feedbacks—can be meaningfully evaluated if the input dataset contains systematic nighttime warming caused by UHI and faulty station placement.
This is precisely why the U.S. Climate Reference Network (USCRN), which avoids artificial heat sources by design, shows smaller warming trends than the older, urban-contaminated networks. See the USCRN figure below showing maximum temperatures in the U.S. since 2005. Note the lack of increased temperature extremes (peaks) today compared to 20 years ago, or any obvious upward trend in the data.

Source: National Centers for Environmental Information (NCEI) here: https://www.ncei.noaa.gov/access/monitoring/national-temperature-index/time-series/anom-tmax/1/0
Yet the current study never references USCRN data—or any equivalent bias-free baseline.
Even their health-risk findings rely on heat-biased data. If nighttime temperatures are artificially elevated due to urbanization and poor siting, the apparent “roller-coaster swings” reflect local land-use change, not global climate physics. Any claimed health correlations with mortality data would therefore be mixing genuine weather effects with measurement artifacts.
What’s missing, in other words, is climate science’s most basic requirement: separating real climate signal from non-climatic noise.
UHI, land-use change, asphalt expansion, vegetative loss, waste heat emissions, and the explosive growth of built environments all elevate nighttime minimums and increase apparent variability. The study treats all variability as atmospheric in origin, ignoring the man-made thermal reservoirs embedded in modern cities.
A rigorous scientific analysis would have:
- Examined UHI contamination explicitly.
- Segregated rural high-quality stations from urban or microsite-compromised ones.
- Compared modeled projections to USCRN’s pristine, bias-free observational network.
- Quantified the effect of siting violations documented in my 2022 Corrupted Climate Stations study.
Instead, the authors rely on global models and “fingerprinting” techniques that simply assume the data are valid. That assumption is not supported by the evidence, and the report’s conclusions cannot be meaningfully distinguished from the biases embedded in the underlying data. This is not “a new extreme climate hazard”—it is the predictable result of modeling data with errors, then attributing those errors to greenhouse gases.
Phys.org does its readers a disservice by not fact-checking the Nature study or examining other possible causes for its conclusions. Instead of a thoughtful scientific examination of the novel claims made in the study it is reporting on, Phys.org’s readers got a fawning, uncritical summary of a flawed study whose day-to-day variability signal is exaggerated by the well-documented effects of UHI and siting flaws. That’s not inquisitive journalism, that’s promotion; in reality it is no better than a press release.

Anthony Watts is a senior fellow for environment and climate at The Heartland Institute. Watts has been in the weather business both in front of and behind the camera as an on-air television meteorologist since 1978, and currently does daily radio forecasts. He has created weather graphics presentation systems for television and specialized weather instrumentation, and has co-authored peer-reviewed papers on climate issues. He operates the most viewed website in the world on climate, the award-winning wattsupwiththat.com.
Anthony,
It is a sleepless 1:11 am here, so later I will add a large number of temperature observations from 47 rural Australian stations selected as likely to have negligible UHI. Geoff S
When will the result of your study be available?
For an Australia temperature check, I went to:
https://www.extremeweatherwatch.com/countries/australia/average-temperature-by-year. The Tmax and Tmin data from 1901 to 2024 are displayed in a long table. Here are the data for these two years:
Year      Tmax     Tmin     Tavg   (temperatures in °C)
2024      29.7     15.9     22.8
1901      28.6     14.5     21.6
Change    +1.1     +1.4     +1.2
After 123 years, there has been only slight warming of the air in the country. This warming is most likely due to the reduction in air pollution: cleaner air means fewer clouds, which means more sunlight heating the surface and air.
In 1900 the concentration of CO2 in dry air was 295 ppmv (0.58 g CO2/cu. m. of air), and by 2024 it had increased to 424 ppmv (0.83 g CO2/cu. m. of air). There is just too little CO2 in the air to have any effect on weather and climate.
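For anyone checking those conversion figures: they follow from the molar mass of CO2 (44.01 g/mol) and the molar volume of an ideal gas, assuming dry air at 0 °C and 1 atm (22.414 L/mol):

$$
\rho_{\mathrm{CO_2}} = x_{\mathrm{CO_2}}\,\frac{M_{\mathrm{CO_2}}}{V_m}
= 424\times 10^{-6}\times\frac{44.01\ \mathrm{g/mol}}{0.022414\ \mathrm{m^3/mol}}
\approx 0.83\ \mathrm{g/m^3},
$$

and the same formula with \(295\times 10^{-6}\) gives roughly \(0.58\ \mathrm{g/m^3}\), matching the figures above.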
I believe Anthony pointed out a long time ago that thermometers respond to heat from all sources to which they are exposed. Growing population means growing heat.
No surprise that thermometers might register this increase as rising temperatures.
When obtaining temperature data for a country at the EWW website, the locations of weather stations are not given, as is done for the cities. The temperature data tables for some cities use data from two or more weather stations, and the locations of those stations are given.
EWW uses temperature data from NOAA’s database.
Note that the temperature changes for Australia are quite small. I would like to compute the average Tmax +/- the average deviation from the mean to estimate natural variation. However, I don’t know how to do this.
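One way to do it, sketched in Python (the values in the list are placeholders, not real data; paste in the actual annual Tmax column from the EWW table):

```python
# Sketch: average Tmax +/- mean absolute deviation as a rough measure
# of natural year-to-year variation. Placeholder values below; replace
# them with the real annual Tmax column from the EWW table.
tmax = [29.7, 28.9, 29.1, 28.6]  # placeholder annual Tmax values, deg C

n = len(tmax)
mean = sum(tmax) / n
# Mean absolute deviation: average distance of each year from the mean.
mad = sum(abs(t - mean) for t in tmax) / n

print(f"average Tmax = {mean:.2f} +/- {mad:.2f} C (mean absolute deviation)")
```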
Be sure to go to EWW and check out the data for your city. Use: …/cities/your city name in lower case letters in the URL.
Maybe I’ll be accused of being pedantic, but natural variation is calculated by looking at observed measurements. If you get what I mean.
Shown in the chart are plots of the average annual seasonal temperatures and a plot of the average annual temperature at the Furnace Creek weather station in Death Valley from 1922 to 2001. Note the variations in temperatures from year to year. That is what I am interested in.
The chart was obtained from the late John L. Daly’s website:
“Still Waiting For Greenhouse,” available at http://www.john-daly.com. From the home page, go to the end and click on the selection:
“Station Temperature Data.” On the “World Map,” click on “NA” and then page down to: U.S.A.-Pacific. Finally, scroll down and click “Death Valley.” Use the back arrow to redisplay the list of stations. Clicking the back arrow again will display the “World Map.”
NB: If you click on the chart it will expand and become clear. Click on the “X” in the circle to return to the comment text. Be sure to go to Oz and click on “Adelaide,” which shows no warming since 1857.
Sorry Harold and others,
Copped a dose of the flu, too crook.
I will try to compose a proper WUWT article over Christmas.
It is not easy to do this, but it is worthwhile to try because of Australia’s relative abundance of “pristine” weather stations.
The difficulty is that so many effects are at play that I cannot produce a conclusion that is useful for any other country. The lesson is that you ignore noise and error and uncertainty at your peril.
Geoff S
Here is a start for Australia, with daily Tmax and Tmin temperatures, unadjusted, for 45 of the most “pristine” weather stations I could find.
This set starts in 1910, though some stations go back to the 1850s. Straight from the BOM site Climate Data Online.
Please let me know which additional info would help your research.
Geoff S
File is Excel spreadsheet, page 1 with daily data and page 2 with station properties.
File length is 28.7 MB, so take your time.
Geoff S
http://www.geoffstuff.com/pristinemaxmin45daily.xlsx
USCRN is identical to the full, bias-adjusted network:
The station network data used by Liu et al. are also the bias-adjusted dataset from Berkeley Earth, which already accounts for things like UHI or siting bias.
Bias adjusted. The methodology employed is disputed.
If the methodology were flawed, one would expect divergence between the full, bias-adjusted network and the pristine reference network.
The methodology is needed because the baseline of ClimDiv is inadequate.
Tell us what is actually changed: Tmax, Tmin, or both? How do you adjust an average without knowing the fundamental pieces that make up the average?
For a US temperature check, I went to:
https://www.extremeweatherwatch.com/countries/united-states/average-temperature-by-year. The Tmax and Tmin data from 1901 to 2024 are displayed in a long table. Here are the data for these two years:
Year      Tmax     Tmin     Tavg   (temperatures in °C)
2024      16.8      4.3     10.5
1901      14.9      1.6      8.2
Change    +1.9     +2.7     +2.3
Note that Tavg has increased by 2.3 °C since 1901, an increase greater than the 1.5 °C limit of the Paris Agreement. Has this increase had any effect on the weather and climate of the US? No effect as far as I can tell from watching the weather reports on TV for the last 60 years.
Be sure to go to: https://www.extremeweatherwatch.com. For your city use:
https://www.extremeweatherwatch.com/cities/city-name. For “city-name” enter the name of the city in lower-case letters. If the city name is two words, connect them with a hyphen. At the end of the home page there are a number of options for acquiring and displaying weather and climate data. A most useful option is:
average-temperature-by-year.
My objection continues to be that (Tmax + Tmin)/2 is not the average for a sensor or a specific location.
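A quick way to see the objection is to compare the midrange statistic with an integrated hourly mean on a made-up diurnal profile (a sketch only; the shape below is idealized, not real station data):

```python
import math

# Hypothetical hourly temperatures for one day: flat overnight, a
# sinusoidal daytime bump (illustrative shape only, not real data).
temps = [15 + 10 * max(0.0, math.sin(math.pi * (h - 6) / 12)) for h in range(24)]

tmax, tmin = max(temps), min(temps)
midrange = (tmax + tmin) / 2           # the (Tmax + Tmin)/2 convention
hourly_mean = sum(temps) / len(temps)  # integrated daily mean

print(f"midrange    = {midrange:.2f} C")
print(f"hourly mean = {hourly_mean:.2f} C")
# For this asymmetric profile the two differ by well over a degree.
```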
Based on a rotating, roughly spherical planet, the Tavg used by the Climate Hypocrites is too low by at least 10%.
Adjust their T^4 by that 10% and there is no long-term energy imbalance.
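Whatever one makes of the specific 10% figure, the underlying mathematical point is checkable: by convexity (Jensen’s inequality), the mean of T^4 over a nonuniform sphere exceeds the fourth power of the mean temperature. A toy two-hemisphere sketch, with made-up temperatures:

```python
# Illustration that (mean T)^4 != mean(T^4) on a nonuniform body.
# Two hemispheres at different temperatures: a toy case, not a climate model.
t_warm, t_cold = 300.0, 250.0  # kelvin, hypothetical hemisphere temperatures

mean_t = (t_warm + t_cold) / 2
mean_t4 = (t_warm**4 + t_cold**4) / 2
t_effective = mean_t4 ** 0.25  # temperature matching the mean T^4 flux

print(f"mean T           = {mean_t:.1f} K")
print(f"(mean T^4)^(1/4) = {t_effective:.1f} K")
# The effective radiating temperature exceeds the arithmetic mean.
```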
I did not say the methodology was flawed.
Kindly do not put words in my mouth!
I said: “The methodology employed is disputed.”
Your reply indicates you did not come here to discuss. You came here to flame war.
The people disputing the methodology are presumably claiming that it is flawed. My comment applies to those people.
Berkeley.. bias adjusted.. ROFLMAO !!!
Berkeley’s method of adjustment was totally BOGUS and unscientific… pure guesswork.
They could get any result they wanted.
——.
And the ClimDiv data is the same as USCRN BECAUSE it has been adjusted to be that way.
ClimDiv is homogenised to match pristine sites at the regional level..
It’s been adjusted to remove the kinds of bias Anthony says would contaminate an analysis. The fact that those adjustments bring it in line with the reference standard is a good signal that the adjustments are working as intended.
I see. Get rid of anything that does not suit your purpose. Like the Australian BOM declaring all official temperatures prior to 1910 “unreliable”?
Non-climatic biases indeed do not serve the purpose of assessing long term climate change. Anthony agrees with this. Do you disagree?
There are no “climatic” biases, as climate is simply the statistics of weather observations. Of course, unless the ignorant and gullible are stupid enough to “adjust” the statistics obtained from manipulated data.
Maybe you could quote the words that Anthony “agrees” with?
It doesn’t really matter, does it? Anyone who believes that adding CO2 to air makes thermometers hotter is ignorant, gullible, or just insane.
Do you disagree?
So there’s no urban heat island effect? No instrument siting changes? No time of observation bias, etc, etc?
You do realise that climate is the statistics of weather observations? Weather observations. As recorded.
Dimwitted “climate scientists” adjust, interpolate, create, manipulate – to try to get the reality to “agree” with the models.
These are the sorts of dimwits who believe that adding CO2 to air makes thermometers hotter! Would you take any notice of such ignorant and gullible individuals?
Not always. As Anthony says, there can be non-climatic biases in time series of weather observations, arising from things like urbanization, land use changes, station moves, etc. Scientists who want to use these weather observation records to track long term climate change have to deal with these biases.
Weather observations of temperature observe the temperature of the thermometers.
Nutters think that adding CO2 to air makes thermometers hotter. What’s the point of measuring the temperature of a thermometer if CO2 in the vicinity makes it hotter?
The thing that is being tracked is the temperature change over time aggregated to regional or global scale. Climatologists generally are not interested in individual temperature readings at single stations.
Measuring the “temperature of the thermometers” matters because these temperature readings together contain information about the thing of interest – long term regional temperature change.
“As Anthony says, there can be non-climatic biases in time series of weather observations, arising from things like urbanization, land use changes, station moves, etc.”
The proper way to handle this is to add an entry in the measurement uncertainty budget, not to GUESS at a bias adjustment value.
Non-climatic influences like station moves or urbanization are not additional measurement uncertainty, they are systematic distortions of the signal. If they are left uncorrected and merely added to an uncertainty budget, the resulting temperature series remains biased and trends are wrong. Modern surface temperature datasets therefore treat these effects as structural breaks to be detected and corrected empirically using relative comparisons among stations, and only then estimate uncertainty probabilistically from remaining sampling and methodological variability.
Just the typical climatology/trendology Fake Data…
So what? They can actually make the measurements *MORE* accurate! How do you know if you don’t perform a legitimate calibration protocol on the device?
You are *still* confusing error with uncertainty! When are you going to learn the difference?
The GUM says that if you apply a correction factor, you have to add an uncertainty into the budget to account for the correction not actually being known: it’s a guess!
“Modern surface temperature datasets therefore treat these effects as structural breaks to be detected and corrected empirically using relative comparisons among stations,’
You tried to tell us earlier that homogenization was done on a station by station basis and not by referring to other stations!
You are applying laboratory calibration logic to a statistical inference problem. Surface temperature datasets are not calibrated instruments measuring a known reference; they are reconstructions of a spatially correlated climate field from a changing observation network. Non-climatic changes like station moves introduce structural breaks, not additional random uncertainty. Leaving those breaks uncorrected and inflating an uncertainty budget does not fix the bias, rather, it guarantees a wrong trend. Homogenization does not assume neighboring stations have equal temperatures or perfect accuracy; it uses their shared variability to detect inconsistencies that cannot be climate. Corrections are estimated empirically and uncertainty is then quantified probabilistically across ensembles. This is model-based inference, not guessing, and it is the only mathematically coherent way to recover trends from heterogeneous observing systems.
You avoided the question. Do you dispute that siting and other artificial external influences affect observations and the results from heated thermometers?
No. He disputes that those phenomena are “climate” biases.
I explicitly said non-climatic. Michael should read more carefully.
Mark, why do you want to know? Obviously, recording the temperatures of random thermometers is pointless anyway, because the thermodynamic environment is constantly changing.
I assume you accept reality, hence your question appears peculiar.
No honest attempt at understanding is pointless, and systematic measurement is not random. How else shall we make sense of that or any environment?
However, I do agree that attempting to contrast or average them to define climate in the way that some attempt it is absurd. Certainly, the temperature record as it stands is unsuitable for that purpose, and no single metric, no matter how rigorously obtained, ever will be.
Mark, initially, the natural philosophers believed that the measurements would prove to be of use in understanding the world about them. The temperature of a thermometer tells you how hot it is – but not why.
Trying to imply that adding CO2 to air makes thermometers hotter because thermometers are hotter is completely ridiculous.
Taking the temperature of random thermometers with no set purpose, hoping that some hidden secret will magically reveal itself is probably pointless.
Dr Spencer’s “measurements” fall into this category, demonstrating nothing at all in particular.
I said nothing about CO2. As for the rest, you are welcome to your opinion.
And I didn’t say you did, but, in any case, I’m fairly certain that you believe that adding CO2 to air makes thermometers hotter. Feel free to deny it, if you wish.
Thank you for acknowledging that my opinion is of equal value to yours (i.e., worthless), and I don’t need your permission to have an opinion.
Systematic effects are not amenable to statistical analysis. It is an unknown. It must be treated as a Type B measurement uncertainty. It is *NOT* proper physical science protocol to try and fix it with a data adjustment.
There seems to be much contention about statistical methods, but trying to fit what we think we know with what we can see is proper science. Should our guesses, our theories, drive policy? I think not. Someone said something about lies, damned lies, and statistics.
One must apply corrections to each measurement for ongoing systematic effects. There can be different effects between day and night, summer vs winter, ground cover in seasons.
The only other option is to put a value in the uncertainty budget as a Type B evaluation.
One could just not bother wasting time on pointless measurements, too. Save a lot of time and money.
“You avoided the question. Do you dispute that siting and other artificial external influences affect observations and the results from heated thermometers?”
The proper way to handle this is by an entry in the measurement uncertainty budget, not to try and guess at a bias adjustment value.
Or you could just call the entire thing pointless, as Michael Flynn does. No matter what the method is, it seems that any such record is unfit for any purpose of comparison.
Measurements are corrected because they are biased. Science requires positive proof of bias all the way back to individual stations. You can’t even tell us if the bias occurs at Tmax or Tmin. Please don’t tell us the bias is applied at the average level. That is the same as saying both values have equal bias.
Corrections at a higher level are nothing more than curve fitting. In other words, making it appear the way you want it.
Bias adjustments are applied to TMIN, TMAX, and TAVG datasets.
https://www.ncei.noaa.gov/access/monitoring/dyk/nclimdiv-tmax-tmin
This is objectively untrue. Even a cursory reading of the published literature will dispel this misconception.
Really, so when homogenization is done, it is done at the station level on Tmax, Tmin, AND Tavg? Do you see a problem with this list?
Yes, homogenization is performed at the station level, and is performed against Tmax, Tmin, and Tavg. You will need to explicitly articulate the problem(s) you perceive with this.
Homogenization is on par with reading tea leaves, peering into a crystal ball, reading the lines on a palm, or interpreting the meaning of entrails.
“GUESSING” that the temperature here is the same as the temperature there is a joke.
Are the temperatures on the west side of a mountain the same as on the east side of a mountain? Are the temperatures in a river valley the same as on the surrounding plateaus? Is the temperature at Pikes Peak the same as at Colorado Springs? Is the temperature in San Diego the same as the temperature in Ramona, CA?
Maybe you could answer just ONE of these questions?
Homogenization does not “make the temperature here the same as the temperature there.” Your questions are ill-posed and your argument muddled.
Homogenization in climate science means comparing a station’s measurements against a reference, typically formed using measurements from other stations. That comparison results in a correction factor applied to the station under comparison.
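Mechanically, a deliberately stripped-down sketch of that idea looks like the following (synthetic data, a median-of-neighbors reference, and a single made-up break; operational algorithms such as NOAA’s pairwise homogenization are far more elaborate than this skeleton):

```python
import random

random.seed(1)

# Made-up monthly anomalies: a shared regional signal plus station noise.
n = 240  # 20 years of months
regional = [random.gauss(0, 0.5) for _ in range(n)]
neighbors = [[r + random.gauss(0, 0.2) for r in regional] for _ in range(3)]

# Candidate station shares the signal but gets a +0.8 C step at month 120
# (a hypothetical station move).
candidate = [r + random.gauss(0, 0.2) + (0.8 if i >= 120 else 0.0)
             for i, r in enumerate(regional)]

# Reference series: median of the neighbors at each time step.
reference = [sorted(vals)[1] for vals in zip(*neighbors)]

# Difference series: shared climate variability cancels, the break remains.
diff = [c - ref for c, ref in zip(candidate, reference)]

before = sum(diff[:120]) / 120
after = sum(diff[120:]) / 120
print(f"estimated step at break: {after - before:+.2f} C (true value +0.80 C)")
```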
That does NOTHING but spread the measurement uncertainty even wider.
Hubbard and Lin found in 2006 that regional adjustments, e.g. homogenization adjustments, are useless because of the micro-climate differences from station to station.
Why does climate science, including you, REFUSE TO INCORPORATE THEIR FINDINGS?
Hubbard and Lin are part of the evidence base that motivated a shift away from simple, instrument-specific constant offsets toward station-by-station homogenization frameworks that can account for both documented and undocumented breakpoints. I don’t know why you insist that the paper has been ignored. The homogenization algorithms you are trying to disparage explicitly were implemented to address the issues identified by H&L and others.
This is just a baseless assertion not backed by any analysis or evidence.
Station-by-station adjustments require CALIBRATION at the station. Otherwise you are just guessing at what the adjustment should be.
Homogenization involves using *other* stations as a reference. And that means spreading *their* measurement uncertainty around to other stations.
A break point at a specific station can result from the station making *MORE* accurate measurements. Why would climate science want to adjust “more accurate” measurements to make them less accurate?
The station could have been located near a heat vent in the past, and for whatever reason that heat vent could be shut down, thus making the station’s readings more accurate.
How do you know? I asked you before if you were god. You would have to be god in order to know the appropriate adjustment without comparison with a calibration standard.
You wouldn’t last a minute in a machinist’s shop saying you calibrated your micrometer against the readings of other micrometers instead of using a gauge block with a calibration record!
Homogenization is not calibration against a standard; it is statistical inference to restore internal consistency in a time series, and leaving structural breaks uncorrected guarantees biased trends regardless of whether a change made a station “more accurate” in an absolute sense.
“Homogenization is not calibration against a standard”
Of course it is. You defined the standard in another message as being the surrounding measurement stations. You are just saying what you need to say in the moment without worrying whether you are being consistent!
What you are asserting is *NO DIFFERENT* than calibrating one micrometer against a grouping of other micrometers without actually knowing the accuracy of those other micrometers!
In other words, you just wind up spreading around the measurement uncertainty of the other stations!
No, using surrounding stations as a reference does not define them as a calibration standard. Calibration requires a known reference with traceable accuracy; homogenization does not assume any station is accurate in an absolute sense. It relies only on the empirical fact that nearby stations share coherent climate variability. An abrupt divergence of one station from many neighbors indicates a change in that station’s measurement model, not a redistribution of their measurement uncertainty. No station’s absolute value is transferred to another; only relative consistency is used to detect and estimate structural breaks. That is inference, not calibration.
Now you are just being silly!
If you use them as a calibration standard then they *are* being used as a calibration standard.
THAT’S WHAT WE ARE ALL TRYING TO TELL YOU! Using other un-calibrated stations as your reference standard only spreads the measurement uncertainty of those uncalibrated reference standards around to the station whose data you are trying to adjust!
Which violates Hubbard and Lin’s finding that you have to create adjustment values on a station by station basis!
You can’t even be consistent in sequential messages! You: “Hubbard and Lin are part of the evidence base that motivated a shift away from simple, instrument-specific constant offsets toward station-by-station homogenization frameworks” (bolding mine, tpg)
First you talk about station-by-station adjustments and then you talk about doing regional adjustments using other stations as a reference standard!
I’ll say it again – all you are doing is saying what you need to say in the moment – you are a CHAMPION HYPOCRITE!
You’re conflating two different things. “Station-by-station” does not mean “station-in-isolation.” It means each station receives its own empirically estimated adjustment rather than a universal constant. That adjustment is inferred using relative comparisons because climate variability is spatially coherent. This is not calibration: no station is treated as a reference standard, no absolute values are transferred, and no measurement uncertainty is propagated from one station to another. Only relative consistency is used to detect structural breaks.
This is exactly the issue Hubbard & Lin identified: instrument changes do not impose uniform offsets, so corrections must be estimated individually, not assumed. Using neighboring stations to detect inconsistencies is how station-specific adjustments are derived. It is not a contradiction, and it does not redefine calibration.
ROFL!!!! You’ve never actually even read Hubbard and Lin, have you?
They SPECIFICALLY point out that it must be done on a station-in-isolation basis BECAUSE OF THE DIFFERENCES IN MICROCLIMATE AMONG STATIONS!
And this is *NOT* doing it on a station-in-isolation basis?
You just can’t help being a hypocrite even in two adjacent sentences, can you?
“empirically estimated” means based on observation AT THE SPECIFIC STATION, Not on observations at other stations!
“That adjustment is inferred using relative comparisons because climate variability is spatially coherent.”
Nice job of trying to slip a change in goalposts in without actually identifying it! Climate variability is *NOT* temperature variability!
Temperatures are not spatially coherent even within a few hundred feet! I live right smack dab in the middle of a section of soybeans and corn. The temperature here is *NOT* the same as at Forbes AFB just a mile away! Even just evapotranspiration makes a difference in the measurements! Substituting my temperature measurements for Forbes AFB or vice versa does nothing but spread the measurement uncertainties around from one station to another!
Do you understand just how foolish this statement looks? Substituting a measurement at one station for the measurement at another station is using the substitution source as a reference standard. It’s EXACTLY the same as calibrating one micrometer to match another micrometer of unknown accuracy!
You are dissembling. It doesn’t matter if you substitute an absolute value or an adjustment value! You are propagating measurement uncertainty from one measuring device to another!
If you aren’t correcting or infilling measurements from one to another then what is the use in identifying a structural break? If you don’t apply a correction, then you are just masturbating for no purpose!
The correct way to proceed in this case is to end the measurement track for the station in question, recalibrate the existing station or install a newly calibrated station, and then start a NEW measurement track. YOU DON’T ADJUST DATA. If the data is not fit for purpose THEN DON’T USE IT! Record it for posterity and stop using it!
This is EXACTLY what Hubbard and Lin said to *NOT* do because the differences in microclimate between stations makes it impossible to derive a correct adjustment value. All you are saying to do is SPREAD THE MEASUREMENT UNCERTAINTY around.
Keep tying yourself in knots trying to justify spreading measurement uncertainty around while assuming that all measurement uncertainty is random, Gaussian, and cancels. It’s amusing to read at least!
Hubbard & Lin rejected universal offsets, not relative inference. Homogenization does not substitute temperatures or calibrate instruments; it restores internal consistency so trends can be estimated. Treating climate analysis as micrometer calibration is simply the wrong framework. The phrase “station-in-isolation” does not appear in the paper, you have inserted this framing yourself.
If you are going to publish average temperatures to the hundredth or even thousandth of a degree, then you need micrometer-grade calibration to support the claimed resolution of your actual measurements.
Uncertainty does apply when claiming that you KNOW temperatures to that degree of resolution.
Surface temperature indices are published alongside probabilistic uncertainty estimates constructed using ensemble and statistical methods that account for sampling, coverage, and structural uncertainty in the observing network. These uncertainty estimates do not depend on knowing the exact calibration or precision of individual thermometers, because the dominant uncertainties arise from network structure and temporal inhomogeneity, not instrument resolution. This inference-based approach to uncertainty estimation is standard practice in other observational sciences such as astronomy, geophysics, and remote sensing, where large-scale fields are reconstructed from heterogeneous and incomplete measurements.
“Surface temperature indices are published alongside probabilistic uncertainty estimates constructed using ensemble and statistical methods that account for sampling, coverage, and structural uncertainty in the observing network.”
You left out MEASUREMENT UNCERTAINTY in the measurement devices!
As usual, we see the climate science assumption that “all measurement uncertainty is random, Gaussian, and cancels.” Unfreakingbelievable!
Not left out; the contribution to the uncertainty from individual thermometer measurement error is dwarfed by the uncertainty arising from sampling, coverage, and structural uncertainty in the observing network.
I just listed sources of non-random uncertainty in the very comment you are replying to. I fear you are far too wrapped up in your own world to actually engage meaningfully with other people. You would have a happier and more productive time here if you calmed down and replied to things I am actually saying.
Then why are other stations used as references? Again, you are being a major hypocrite!
You can’t correct internal consistency using only the candidate station’s temperature measurements unless you CALIBRATE the candidate station against a reference standard.
You are having to tie yourself into knots with your statements.
It’s the EXACT same framework you are giving for temperature measuring stations. All you are doing here is using the argumentative fallacy of Argument by Dismissal: refusing to accept a factual assertion without giving a reason for doing so!
From “Reexamination of instrument change effects in the U.S. Historical Climatology Network,” Lin/Hubbard, 2006:
“For example, gridded temperature values or local area-averages of temperature might be unresolvedly contaminated by Quayle’s constant adjustments in the monthly U.S. HCN data set or any global or regional surface temperature data sets including Quayle’s MMTS adjustments. It is clear that future attempts to remove bias should tackle this adjustment station by station. Our study demonstrates that some MMTS stations require an adjustment of more than one degree Celsius for either warming or cooling biases. These biases are not solely caused by the change in instrumentation but may reflect some important unknown or undocumented changes such as undocumented station relocations and siting microclimate changes (e.g., buildings, site obstacles, and traffic roads).”
station-in-isolation IS YOUR TERM, not Hubbard/Lin’s. But it is clear what they are saying!
I’ll repeat, YOU HAVEN’T ACTUALLY READ THE PAPER. You are just making stuff up. You and climate science have just ignored Hubbard/Lin since 2006. *YOU* continue to ignore their results solely because their results are inconvenient for your assumptions concerning homogenization and infilling of temperatures.
You’re still equating “using other stations to detect inconsistencies” with “calibration against a reference standard.” Those are not the same thing. Hubbard & Lin argue against uniform, network-wide offsets and in favor of station-specific adjustments because biases vary by station and microclimate. They do not argue that a station can be analyzed without reference to others. Their entire analysis is based on relative comparisons to neighboring stations to diagnose instrument and exposure effects.
“Station-by-station” means each station’s adjustment is estimated individually, not that it is estimated in isolation. No temperatures are substituted, no station is treated as an accuracy standard, and no absolute values are transferred. Relative coherence is used only to identify and quantify breaks in a single station’s record. That is exactly the methodological shift Hubbard & Lin helped motivate.
I have, and you need to demonstrate to me that you can comprehend what I’m writing if you want me to keep engaging on this topic. Try to articulate an accurate summary of my position.
I don’t know which paper you are reading. It’s not the one I referenced.
[Hubbard & Lin quote repeated from above; bolding mine, tpg]
Since the microclimate is *NEVER* the same for any two measurement stations there is *NO WAY* to determine adjustment values using relative comparisons to neighboring stations. That is *clearly* what Hubbard and Lin are saying here.
There is no doubt you are dissembling here. The issue is data adjustment values. You are talking about identifying breakpoints and not data adjustments while trying to avoid saying so.
Did you *really* think no one would notice?
“I have, and you need to demonstrate to me that you can comprehend what I’m writing if you want me to keep engaging on this topic.”
You have not provided a single quote from the Hubbard and Lin paper, NOT ONE, to back up your assertion.
I’m not surprised you want to run away. You have nothing to offer except vague assertions backed up with nothing.
AlanJ,
Where in GUM do you find the methodology for calculation of measurement uncertainty of Taverage?
My reading of GUM is that it is designed for original observations only.
Geoff S
You won’t find that in GUM, because uncertainty estimation for climate indices is an inference problem, not a single-measurement problem, and GUM was never intended to cover it.
Keep throwing stuff against the wall and some of it will come back to you.
An inference is not a measurement. That is news for everyone claiming that the atmosphere is warming by 0.45 °C/decade.
Merriam-Webster says this.
Population parameters like mean and standard deviation. That doesn’t include statistics like sample means and standard error.
Show us exactly how uncertainty is inferred. I am really interested in learning how one infers uncertainty with any degree of accuracy or precision. That could revolutionize the field of metrology.
I am also interested in how inference allows one to add resolution to actual measurements. That too could revolutionize the field of metrology.
My guess is that “inference” ends up with values that depend on a personal view rather than having a defined and common scientific and mathematical base. Show me how my guess is wrong.
Statistics – Statistical Inference
You’re still conflating measurement with inference. Climate trends are not single measurements; they are estimates of parameters of a spatiotemporal field inferred from observations. Saying “the atmosphere is warming by 0.45 °C/decade” is a statement about an inferred parameter (a trend), not a direct observation. That is standard statistical practice and has nothing to do with redefining metrology.
Uncertainty is not “inferred out of thin air.” It is quantified by propagating uncertainty through an explicit statistical model using ensembles, resampling, or Bayesian posterior distributions. This is routine in geophysics, astronomy, epidemiology, and economics. It does not add resolution to measurements; it characterizes uncertainty in estimated parameters given finite, noisy, and incomplete data.
GUM governs uncertainty for individual measurement results given a fixed measurement model. It does not apply to estimation of population parameters, spatial averages, trends, or reconstructions. That limitation is why GUM contains no methodology for global or regional temperature indices.
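To make the resampling idea concrete, here is a minimal sketch with synthetic data (the trend, noise level, and block size are all made up for illustration, not taken from any dataset’s actual procedure): fit a trend, rebuild the series many times from block-resampled residuals, and take the spread of the refitted slopes as the trend uncertainty.

```python
import random

random.seed(2)

# Synthetic annual anomalies: a 0.02 C/yr trend plus noise.
n = 50
series = [0.02 * i + random.gauss(0, 0.15) for i in range(n)]

def ols_slope(y):
    """Ordinary least-squares slope against the index 0..len(y)-1."""
    x = range(len(y))
    mx, my = sum(x) / len(y), sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

slope = ols_slope(series)
resid = [y - slope * i for i, y in enumerate(series)]

# Moving-block bootstrap: resample residuals in blocks (to respect
# short-range autocorrelation), rebuild the series, refit each time.
block, slopes = 5, []
for _ in range(2000):
    boot = []
    while len(boot) < n:
        start = random.randrange(n - block + 1)
        boot.extend(resid[start:start + block])
    slopes.append(ols_slope([slope * i + boot[i] for i in range(n)]))

slopes.sort()
lo, hi = slopes[50], slopes[1949]  # ~95% interval from the spread
print(f"trend = {slope:.4f} C/yr, 95% CI [{lo:.4f}, {hi:.4f}]")
```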
“Climate trends are not single measurements; they are estimates of parameters of a spatiotemporal field inferred from observations”
ROFL! So now climate science isn’t “measuring” temperatures but, instead, are inferring them?
“Saying “the atmosphere is warming by 0.45 °C/decade” is a statement about an inferred parameter (a trend), not a direct observation. That is standard statistical practice and has nothing to do with redefining metrology.”
If no uncertainty interval is provided, then how does anyone know the value of 0.45 °C/decade is anything but 100% accurate? It implies the uncertainty is 0 (zero).
Trends inherit the uncertainty of the parent components, just like anomalies do. If the uncertainty interval is greater than the difference being found then you don’t really know if you have found a difference or not!
“It is quantified by propagating uncertainty through an explicit statistical model using ensembles, resampling, or Bayesian posterior distributions.”
Bullshite! Ensembles made up of inaccurate members are themselves inaccurate! A half dozen models that are wrong won’t give a right answer!
Resampling without propagating the uncertainty of the data onto the mean is worthless in the real world. There is no way to judge the accuracy of the result!
Bayesian posterior distributions of wrong data won’t give the right answer either!
You don’t even seem to understand what measurement uncertainty is for! Climate science defenders NEVER do.
Here’s the very first paragraph in the GUM:
You and climate science are caught in a catch-22 that is inescapable. Either
You and climate science get around this by using the meme that “all measurement uncertainty is random, Gaussian, and cancels,” so the estimated value of a measurement can be considered 100% accurate. This is done at the very beginning of the averaging pyramid by assuming T_midrange = (Tmax+Tmin)/2 is 100% accurate!
In other words YOU FAIL THE VERY FIRST PURPOSE OF THE GUM!
And here you are tying yourself in knots so you won’t have to admit this!
Unbelievable.
No one is claiming temperature measurements are “100% accurate,” and no climate dataset reports trends without uncertainty. A statement like “~0.45 °C/decade” is shorthand for an estimated trend with an associated confidence interval that is reported in the underlying analyses. Trends are not measurements; they are statistical estimates derived from many measurements, and their uncertainty is quantified at the parameter level using standard regression and ensemble methods. This is not a loophole or a meme, it is how uncertainty is handled in every field that estimates population parameters from data.
GUM applies to the uncertainty of an individual measurement result given a fixed measurement model. It does not define how to estimate uncertainty in spatial averages, trends, or reconstructed fields. That is why GUM contains no methodology for climate indices, and why treating long-term climate analysis as a micrometer calibration problem is a category error.
You should try calming down and reading the words I am actually saying with an intent to comprehend.
Then exactly what is the measurement uncertainty interval for T_midrange = (Tmax + Tmin)/2?
Give us a SPECIFIC value and show how it is derived.
And then propagate that measurement uncertainty into weekly, monthly, and annual averages using those mid-range temperature values.
Show us!
And the trend is based on assuming the stated values are 100% accurate. You can’t get away from that! Because of measurement uncertainty of the components the trend line is also uncertain – except for climate science!
Trends are developed from measurements with measurement uncertainty. Those measurement uncertainties propagate onto the calculated trend value – except in climate science where it is assumed that all measurement uncertainty is random, Gaussian, and cancels leaving the estimated values as 100% accurate!
While ignoring all measurement uncertainty in the component values used in the regressions and ensembles.
Tell me again how climate science does *NOT* assume measurement uncertainty is random, Gaussian, and cancels. And then tell me again what the measurement uncertainty of T_midrange is!
I’m not going to continue down a metrology error-propagation debate. Climate indices and trends are not single-measurement problems, and they are not evaluated using GUM-style uncertainty budgets. The uncertainties are documented in the literature and are dominated by sampling, coverage, and structural effects, not thermometer precision. We’ve been talking past each other because you’re insisting on applying a laboratory calibration framework to a statistical inference problem. You need to demonstrate to me that you can follow what I’m saying before we continue.
Why am I not surprised. Numbers is just numbers. Measurements are things statisticians and climate scientists don’t need to know about. You just play with the numbers to end up with high-resolution values that show continued warming.
The fact that you use the labels of °C and °F is not important to you. How closely you know the numbers is all that matters.
By the way, how about trending time series? Is that unimportant also? It must be because it is never addressed in anything you claim to know a lot about. Is it also just an impediment to obtaining high resolution answers?
AlanJ knows he’s been cornered. He can’t defend the climate science methodology of assuming all measurement uncertainty cancels.
So it’s just “put your fingers in your ears and go ‘lalalalala’ till it goes away”
“Trends are not measurements.” “Trends are statistical estimates derived from many measurements.” Where did you learn logic? You are trending values obtained from measurements; that makes the data values measurements, period. It makes trends of measurements, measurements. Otherwise, you couldn’t quote your trend as having a 0.45 °C/decade slope.
Worse, temperatures are not trended against an independent variable using a functional relationship; you are trending against time, that is, a time series. Does auto-correlation affect the trend, causing a spurious trend? You don’t know! Does seasonality cause a spurious trend? You don’t know!
Don’t try to throw crap against the wall with me. I spent 32 years forecasting call volumes, equipment quantities, expenses, capital, space, productivity, etc. I know how badly simple linear, and exponential regression can lead you astray.
From Dr. Taylor’s book, An introduction to error analysis.
There are internet sites that deal with making correct trends with uncertain data. Note, not the same as residuals.
You also need to recognize that you are dealing with properties as you expand the area covered. Properties are evaluated with varying samples and use the variance between the samples being gathered.
Here is an example. Station A is 20 km from Station B. You measure s1 at Station A and s2 at Station B. Is there a Gaussian distribution of temperatures between the stations? What is the variance that goes along with the mean value (s1 + s2)/2? It gets complicated when you throw in topography and other environmental variables.
How do you and climate science actually quantify these variables when determining uncertainty in the data? In too many cases, the uncertainties are simply thrown away. Show us the math used.
Trend uncertainty is handled with time-series and ensemble methods that account for autocorrelation and sampling. It is not computed by extending single-measurement error propagation, and I’m not going to relitigate that distinction further.
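For example, one common version of that adjustment (a sketch, not any specific dataset’s exact procedure) inflates the ordinary least-squares slope error using the lag-1 autocorrelation of the residuals, via an effective sample size n_eff = n*(1 - r1)/(1 + r1):

```python
import math
import random

random.seed(3)

# Synthetic monthly anomalies: a small trend plus AR(1) noise.
n, phi, true_slope = 240, 0.6, 0.001
noise, eps = [], 0.0
for _ in range(n):
    eps = phi * eps + random.gauss(0, 0.1)
    noise.append(eps)
y = [true_slope * i + e for i, e in enumerate(noise)]

# Ordinary least-squares slope and its naive standard error.
x = list(range(n))
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
resid = [yi - my - slope * (xi - mx) for xi, yi in zip(x, y)]
s2 = sum(r * r for r in resid) / (n - 2)
se = math.sqrt(s2 / sxx)

# Lag-1 autocorrelation of the residuals -> effective sample size.
r1 = sum(a * b for a, b in zip(resid, resid[1:])) / sum(r * r for r in resid)
n_eff = n * (1 - r1) / (1 + r1)
se_adj = se * math.sqrt((n - 2) / max(n_eff - 2, 1))

print(f"slope = {slope:.5f} per month")
print(f"naive SE = {se:.5f}, autocorrelation-adjusted SE = {se_adj:.5f}")
```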
“All measurement uncertainty is random, Gaussian, and cancels.”
Don’t confuse me with reality!
Really? Trend uncertainty due to uncertainty in data values is not addressed using time-series or ensemble methods. Time-series methods remove non-stationarity (changes in means and standard deviations) and seasonality. If I do a first difference on trend data to make it stationary, the values still have the same measurement uncertainty as the original data.
Show the math that backs up your assertion. The only proof you have is the fallacy of Argument by Anonymous Source. That is proof of nothing.
You still haven’t given a source in statistics textbooks or online about how data uncertainty is handled in either the mean, standard deviation, or SEM. Is that because you are using the same fallacy?
Really? Adjust the raw data, then adjust the average of the adjustments! Sounds like “climate science” to me!
This discussion is all a side issue. You do get that the claim is that people die because the temperature goes up and down faster? You guys are arguing over a littering fine while we are having a murder trial.
As intended? But of course.
“The fact that those adjustments bring it in line with the reference standard”
*WHAT* reference standard? A reference that is wrong can’t tell you anything. This has been pointed out to you multiple times. Two wrong measurements don’t create a right measurement! And that is what your assumption is based on.
The USCRN is the reference. Anthony believes it is a bias-free reference with stations situated to avoid site contamination. He says as much in the article we are commenting on.
USCRN is only available from ~2005. Are you saying stations prior to 2005 are not homogenized? That would be news to a lot of folks.
The data are homogenized across the entire length of the surface station record, and comparison to the reference since 2005 shows that there is no indication of the types of biases Anthony cites above in the full bias-adjusted network. Hopefully this clears things up for you.
I just tried to give you an up vote. When I clicked on “+” in the circle, the computer gave you a down vote.
It just means that others added downvotes between the time you loaded the page and the time you submitted your vote. The web server tallies comment votes in real time behind the scenes, but doesn’t send new vote totals to your web browser unless you perform an action that calls to the server (reload the page, vote on a comment, etc.).
Thank you for the info. I just gave you an up vote.
Biased or not, the graph shows NO CLIMATE CHANGE over 20 years. None, zip, zero, nada. Oh dear! There goes the Hoax. Back to the Panic Room for refreshments. What about space aliens? Haven’t tried that one in a while.
The series show significant warming over 20 years, you just need to add a trendline and it is quite obvious:
https://imgur.com/5AcFvpY
Usually it’s not a smart idea to try to eyeball trends in noisy series.
A linear trendline on something that shows one or more sinusoidal oscillations?
Cool.
How about a Fourier analysis first?
I consider significant warming to be 15 °F rising to 40 °F over the course of 6-8 hours.
I do not consider it significant warming if a linear trendline shows 0.1 °C over years.
It’s ok to have unique personal definitions for terms that have common meaning in the scientific community, it’s just also important to be aware of the common meaning so that you can productively engage in discussion with others. Here, significant means “statistically unlikely to be a trend arising from pure chance.”
No, significant means “effecting actual change”. In the US at least, there has been no change in climate hardiness zones for at least 70 years. There has been no climate classification change for over 100 years.
No change means “NO CHANGE”. No change means insignificant.
The *ONLY* significant changes of import are the greening of Earth and the continued growth in record food harvests globally. Neither of which are indications of CATASTROPHIC climate change. It should be named BENEFICIAL climate change.
https://en.wikipedia.org/wiki/Statistical_significance
Your link discusses hypothesis testing. That is, you have a null hypothesis to compare to. What is your null hypothesis?
Don’t simply say zero warming! We are experiencing warmer temperatures from those in the little ice age. That effect needs to be subtracted from the total warming. You appear to be saying anthropogenic warming is the entire and only cause. You need to prove that first before generating a null hypothesis.
The null hypothesis is that there is no underlying trend in the data – that it is a series of random noise. The test of statistical significance tells you the odds of observing the calculated trend in such a series consisting only of random noise. If the odds are 5% or less, the result is said to be statistically significant.
No cause can be ascribed merely by observing the presence of a statistically significant trend.
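That definition can be demonstrated directly by simulation (a toy sketch; the series length, noise level, and observed slope below are hypothetical): generate many pure-noise series, fit a trend to each, and count how often chance alone produces a slope as large as the observed one.

```python
import random

random.seed(4)

def slope(y):
    """Least-squares slope of y against the index 0..len(y)-1."""
    n = len(y)
    mx, my = (n - 1) / 2, sum(y) / n
    num = sum((i - mx) * (yi - my) for i, yi in enumerate(y))
    den = sum((i - mx) ** 2 for i in range(n))
    return num / den

n, observed = 20, 0.015  # hypothetical: 20 years, 0.015 C/yr fitted trend

# Null hypothesis: no underlying trend, just Gaussian noise.
trials, hits = 10000, 0
for _ in range(trials):
    noise = [random.gauss(0, 0.2) for _ in range(n)]
    if abs(slope(noise)) >= observed:
        hits += 1

# Fraction of pure-noise series with a slope at least this large;
# below 0.05 is conventionally called "statistically significant."
print(f"p-value ~ {hits / trials:.3f}")
```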
So you agree that CAGW has no evidence of being caused by anthropogenic CO2?
Your sentence is barely coherent, but I do not agree that the observed warming trend has no evidence of an anthropogenic driver.
Your linear trend of a time series is rudimentary at best.
Time series analysis requires specific conditions to be met.
Here are some links.
https://www.geeksforgeeks.org/machine-learning/homoscedasticity-in-regression/
https://www.econometrics-with-r.org/5.4-hah.html
https://en.wikipedia.org/wiki/Homoscedasticity_and_heteroscedasticity
From geeks:
I just plotted a trend line. Do the analysis the way you think it should be done and present your results.
Nope, not going to play that game. You neglected to explain how you dealt with the issues of a time series that I posted.
You can convince us that they are not relevant (with appropriate references and math) or simply say you did not address them.
It is up to you to show how these effects are dealt with.
And to show you that I have done some of the work, here is a graph with a first difference applied to remove auto-correlation, and with seasonality removed.
https://ibb.co/XrHK837S
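For readers following along, the two differencing operations look like this in a minimal sketch (made-up monthly series; note that a first difference turns a linear trend into a constant, so the trend survives only as the mean of the differenced series):

```python
import math
import random

random.seed(5)

# Made-up monthly series: trend + annual cycle + noise.
n = 121
y = [0.01 * i + 2.0 * math.sin(2 * math.pi * i / 12) + random.gauss(0, 0.3)
     for i in range(n)]

first_diff = [y[i] - y[i - 1] for i in range(1, n)]        # removes level
seasonal_diff = [y[i] - y[i - 12] for i in range(12, n)]   # removes annual cycle

# A linear trend b*i differences to the constant b, so its mean ~ 0.01;
# the seasonal (12-step) difference leaves 12*b ~ 0.12 and no annual cycle.
print(f"mean(first_diff)    ~ {sum(first_diff) / len(first_diff):+.4f}")
print(f"mean(seasonal_diff) ~ {sum(seasonal_diff) / len(seasonal_diff):+.4f}")
```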
I didn’t deal with those issues, I plotted a linear trend line and calculated its statistical significance. You are free to expand the analysis as you please, but the basic conclusion is robust.
Your first difference removes the linear trend from the series.
Once again weather and climate are conflated.
Climate is more than temperature.
That aside, I concur.
Oh look, no hockey stick.
That should be: ClimDiv is identical to the U.S. Climate Reference Network. Normally you give the calibration standard first billing. Your statement makes ClimDiv the calibration standard.
ClimDiv is nothing more than a curve fitted to USCRN. Its adjustments mean its baseline is inadequate when compared to USCRN.
And yet BEST uses “estimates”. You can’t QC estimates. 😉
Phys.org’s press release asserts that “extreme day-to-day temperature changes have become more frequent and intense across low- to mid-latitude regions,”
Shouldn’t it be the opposite? In a warming climate, or warmer world, overall daily and diurnal temperature differences should decrease, and particularly in the higher latitudes, and during the colder months.
That’s what the IPCC says:
AR4 Chapter Ten Page 750
Temperature Extremes
It is very likely that heat waves will be more intense, more frequent and longer lasting in a future warmer climate. Cold episodes are projected to decrease significantly in a future warmer climate. Almost everywhere, daily minimum temperatures are projected to increase faster than daily maximum temperatures, leading to a decrease in diurnal temperature range. Decreases in frost days are projected to occur almost everywhere in the middle and high latitudes, with a comparable increase in growing season length.
The truly sad thing is that their models didn’t project this. It’s based on analyses done in agricultural sciences. I’m not sure they have *ever* gotten the climate models to actually predict this. The longer growing seasons are a big factor in the greening of the earth.
The climate models are *NOT* holistic in any way, any shape, or any form. That was Freeman Dyson’s main criticism of them. If they don’t look at the entire biosphere and all the resultant factors then they will *never* be able to accurately predict future CLIMATE. Temperature is *NOT* climate. If it were, Miami and Las Vegas would have similar climates. And I can’t see where US hardiness zones have changed at all! Those are climate-based.
“Temperature is *NOT* climate. If it were Miami
and Las Vegas would have similar climates.”
__________________________________________________________________
Ha ha ha ha ha ha ha ha!
First chuckle of my day (-:
One must, of course, look at the entire biosphere (aka the multiple coupled energy systems). One must also look at the sun, orbital mechanics, and a variety of extra-terrestrial phenomena, one of which is the moon.
In addition, it means the normal auto-correlation has failed. You can no longer rely on yesterday being an indicator of tomorrow.
Maximum temperatures are limited by the Earth’s radiative heat loss even during the day. The heat loss goes up by T^4 while temperature only goes up by ΔT.
Physically, it’s not possible for the diurnal range to increase very much unless minimum temps actually go down, at least over the long term. At some point the T^4 term generates a heat loss greater than the input driving the temperature up (since the sun’s input remains constant). The T^4 factor becomes a boundary condition on Tmax.
The analysis also equates atmospheric temperature with surface temperature. The earth’s surface will continue radiating heat away based on *its* temperature, not on the temperature of the atmosphere. If the surface gets warmer it radiates that extra warmth away based on T^4. I.e. it loses heat *faster*. It’s an exponential decay. A higher starting temperature just makes the initial slope of the heat loss greater by increasing the initial area under the temperature curve (the actual heat loss).
It’s not even apparent that climate scientists have ever read Planck on “compensation”.
If a black body is isolated it radiates a flux F1. If a reflective object is inserted sending back a flux of F2 then the black body immediately radiates away F2 along with changing its F1 value to F1-F2.
Thus the black body immediately loses the reflected heat but loses its own heat at a slower rate. BUT it still cools. The reflected heat does *not* warm the black body. It does not create a positive ΔT in the black body. ΔT may be a lower negative value but it will never go positive.
If F2 ever equals F1 then the bodies are considered in equilibrium. But CO2 is *not* a perfect reflector. Neither is water vapor. F2 will never equal F1. The heat loss of the black body will continue 24 hours per day.
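To put rough numbers on the T^4 argument, here is a minimal sketch, assuming a perfect black body (real surfaces have emissivity below 1, and the 60% returned fraction is purely hypothetical):

    # Stefan-Boltzmann flux, its sensitivity, and the "compensation" net loss.
    SIGMA = 5.670374419e-8          # Stefan-Boltzmann constant, W/m^2/K^4

    T = 288.0                       # K, a round number near Earth's mean surface temp
    F1 = SIGMA * T**4               # ~390 W/m^2 emitted
    dF_per_K = 4 * SIGMA * T**3     # ~5.4 W/m^2 of extra loss per +1 K

    # If a fraction of F1 is returned as F2, the net loss F1 - F2 stays
    # positive, so the body still cools - just more slowly.
    F2 = 0.6 * F1                   # hypothetical returned fraction
    print(F1, dF_per_K, F1 - F2)

The 4σT³ term is the point: each additional degree costs more outgoing flux than the last.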
“The earth’s surface will continue radiating heat away based on *its* temperature, not on the temperature of the atmosphere.”
This is important. The GOES Band 16 images shown as a time-lapse video help us appreciate this fact. Under clear conditions, the rapidly rising and decaying emission from the surface (i.e. the “skin”) is plainly “seen” in the visualizations. Band 16 is at the edge of the “atmospheric window” portion of the spectrum, so a large portion of the signal is T^4 near-black-body emission from the surface.
Maybe this can help those climate scientists you mention.
The old fart writes a lot better than me! I have my doubts most climate scientists today could follow this past the first two minutes!
(since the sun’s input remains constant)
Is assumed to be constant
Thank you. Yes, assumed to be constant to simplify things. I should have listed this out.
If it’s not constant the same thing happens, the calculations just get messy and you need to know the functional relationship.
The output of the sun is relatively constant, however the input to my roof top solar varies every day.
The output of the sun is relatively constant. Perhaps, but we were discussing the sun’s input to the earth energy systems.
That input varies day by day based on axial tilt and an eccentric elliptical orbit. The eccentricity varies based on the orbital mechanics of everything in the solar system.
Many of the results I have seen assume a fixed solar temperature and a mean solar orbit.
On land most of the heat is removed from the surface by conduction, convection, and the wind. For example, a sirocco is a hot wind that comes from the desert. A zephyr is a light warm breeze from the west.
Over the ocean heat is removed by evaporating water and by the wind which also moves water on to the land.
As a body, the earth will also radiate based on its temperature. Conduction and convection work to change the temperature of the surface but it will still radiate at whatever that surface temperature is.
On a clear, windless night in the desert the heat loss is very much radiation dominated. In fact, if you look at your daily temperature curve on a clear windless night, wherever you live, you will see a curve at night that is very much an exponential decay, i.e. radiation. It’s not perfect, of course, because of conduction and convection, but it’s very close.
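If you want to actually test that, here is a toy fit (synthetic overnight trace; substitute real hourly readings), assuming Newtonian-style cooling:

    # Fit T(t) = T_env + (T0 - T_env)*exp(-k*t) to an overnight temperature trace.
    import numpy as np
    from scipy.optimize import curve_fit

    def cooling(t, T_env, T0, k):
        return T_env + (T0 - T_env) * np.exp(-k * t)

    t = np.linspace(0, 10, 11)                      # hours after sunset
    T_obs = cooling(t, 5.0, 20.0, 0.25)             # pretend observations
    T_obs += np.random.default_rng(1).normal(0, 0.2, t.size)

    popt, _ = curve_fit(cooling, t, T_obs, p0=(0.0, 15.0, 0.1))
    print(popt)    # recovered (T_env, T0, k); small residuals = near-exponential night

Small, patternless residuals would support the exponential-decay description; systematic residuals would point to conduction, convection, or advection doing real work.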
The solid surface radiates in all directions (yes, downwards, sideways, as well). All matter above absolute zero radiates IR in all directions. If the radiation interacts with other matter, like colder air, then the air gets hotter. If the air is moving laterally, the “heat” of the ground appears to move. Eventually, the hotter air loses its heat to space, and harmony is restored.
So convection is just radiation and fluid movement. Conduction is just radiation from hotter to colder, but until Einstein made his contribution, radiation was assumed to be waves. Close, but no cigar.
I don’t buy what you two are selling. When I go from the living room and onto the concrete patio in summer, I feel a lot of really hot air and the concrete gets hot.
I live in Burnaby, BC, which is east of and contiguous with Vancouver. In summer there are 16 hrs of sunlight. A lot of hot UHI air blows into Burnaby from the west. There is so much UHI warm air that little snow has fallen in recent years.
I’m not selling anything. I’m not disputing what you “feel”, either. You seem to disagree with something I wrote, but you refuse to say what it is.
Maybe you just don’t like the cut of my jib?
“When I go from the living room and onto the concrete patio in summer, I feel a lot of really hot air and the concrete gets hot.”
So what? No one is saying the surface of the earth is completely homogenous concerning temperature.
There is no doubt that the hot concrete is sending out a larger flux intensity than the cooler living room floor. The flux *is* temperature dependent.
That flux intensity decays by T^i, where i can be as high as 4 but is probably less because the earth is not a perfect black body. The flux does *NOT* stay at the same level unless a heat source is driving it; at night there is no sun, so no driver.
Again, go look at the temperature curve at night from an outside thermometer on a clear, windless night. I’ll bet you it is highly exponential. And the heat loss is related to the integral of that exponential curve, meaning the heat loss is actually functionally related to T^5. And that heat loss occurs 24 hours per day, not just at night. The earth actually radiates away *more* heat during the day than it does at night. Too many people think the earth only cools at night.
So we return to the mid-1800s where it was sensible heat and heat radiation.
Nope.
Sparta, not at all. As Tyndall said, “. . . the warmth which you feel is due to the impact of these ethereal billows upon your skin.”
That’s radiation as we know it. Replace “ethereal billows” with “photons” if you like. The skin senses this as “sensible heat”.
I have no difficulty with either description – Tyndall’s mid-19th-century “ethereal billows” or “photons”. The meaning is clear. Conduction and convection are descriptions which are sometimes used, in my view, in a somewhat misleading fashion, apparently based on ignorance and gullibility.
The world of “climate science” is littered with words which can seemingly mean anything the user wants them to mean. Oh well, one clarification at a time.
There is a latent heat content to energy transfers on land. Not as great as the oceans, obviously, but when one is computing an energy imbalance to 0.6% give or take, the details matter.
I had a quick shufti at one of the paper’s references, as the meaning of DTDT seemed ambiguous. I believe the authors are actually talking about changes in temperature day-to-day, but I’m not sure.
The paper’s a fairytale in any case, based on the belief that adding CO2 to air makes thermometers hotter, and babbling about “forcings”.
The presence of an atmosphere compresses diurnal range, as you imply. Maybe the authors are simply ignorant and gullible, as well as confused. A clear description of DTDT might help.
“If a black body is isolated it radiates a flux F1.”
A bold claim. Is “radiation” a “flux” or not, Tim?
TROLL ALERT!
Define your terms:
Your tactic of arguing about definitions must start with you first DECLARING what you think the terms mean, using math; otherwise you just argue ad infinitum about minutiae.
“arguing about definitions”
That’s what we call science, Jim. It’s completely unfamiliar to you. Perhaps you should sit down.
“you first”
No, I’m not the one making false statements. Tim made a false statement, so I asked him to define his terms. He got extremely confused when I did that, as you can see below.
No, science begins with you declaring what you think, call it a hypothesis. Show us your hypothesis and definitions so it is possible to understand your interpretation.
If you are unable or unwilling to do that, then you are nothing more than a troll.
Certainly, I can tell you my definitions. They are the standard physics ones, and they match Willis’s definitions too. Here they are:
1) Radiation is energy
2) Energy is the capacity to do work
3) Work is what happens when a [net] force is applied to an object [resulting in the expenditure or transfer of energy]
4) Power is the rate of doing work
Therefore, “radiation” is “the capacity to do work”, travelling across the universe at the speed of light. Nothing more, and nothing less. This capacity, of course, like every other kind of energy, is measured in Joules. Do you disagree with any of these definitions? And if so, why?
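And for reference, the standard SI bookkeeping that ties those definitions together (nothing controversial intended, just units):

    E \;[\mathrm{J}], \qquad
    P = \frac{dE}{dt} \;[\mathrm{W} = \mathrm{J/s}], \qquad
    S = \frac{dP}{dA} \;[\mathrm{W/m^2}]

Energy in Joules, power as the time rate of energy transfer, and flux (irradiance) as power per unit area.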
Get lost troll!
Go read Planck for meaning and context!
“meaning”
But I’m not the one having problems with his definitions, am I? Of course not. Here is what you have told us so far:
“Radiation is *NOT* radiant flux.”
and
“Radiation is an ENERGY FLOW, it is a FLUX.”
Can you pick one and stick to it, please?
“troll”
No, I am not the one spouting contradictory nonsense, asking meaningless questions, and then lying. That would be you, wouldn’t it? Of course it would.
What’s the matter, Tim? Ran out of nonsense again? Or is it easier to run away like a petulant six-year-old than to try to defend yourself?
Planck, Tim? Really? You know that the field of radiation physics has advanced just a little bit during the last 125 years, right? What? You didn’t know? And whose fault is that?
But much much worse than that, if such a thing were possible, you appear to be completely ignorant of one of the most important (if not the primary) pillars of logic itself. This particular foundational concept was first formally taught by a chap you may have heard of – a bloke they called Aristotle. Specifically, I am referring of course to his Principle of Non-Contradiction. That one literally predates Jesus Christ! Not to mention the entirety of the Roman Empire! And you’ve still never heard of it! Where have you been all this time?
So for someone whose view of radiation physics is more than a century out of date, and grasp of logic, more than two millennia, you seem preposterously arrogant. Who taught you to behave like this?
Your so-called “professors” have a lot to answer for, that’s for sure. As do theirs. Not to mention your parents.
Sit down and stay in your lane, Tim, whatever that is. It obviously isn’t physics. Or logic.
You are a troll, nothing else.
You have *NEVER* answered *my* questions to you.
Until you can answer those you have nothing to offer except sheer and utter idiocy.
I spent four years engineering microwave links — RADIATION — for a major telephone company. I *know* how radiation works, what radiation intensity is, and what a watt is.
All you have is word play, equivocation, and false appeals to authority, with no references to offer at all. You *never* show any actual math.
When you can actually answer the two questions above, then come back. Till then, like I said, you are just a troll.
“You are troll”
Your grammar is horrendous, and you mis-spelled “physics teacher”. Not being able to tell the difference between those two types of people goes a long way towards explaining your atrociously poor grasp of this subject.
“You have *NEVER* answered *my* questions to you.”
No, Tim, that is a lie. (Your second lie in this lesson, or third if you include calling me a troll.) I did answer them, by pointing out the erroneous assumptions behind them. Those answers, of course, went right over your head, but that’s on you.
“I worked in engineering”
And that right there is your biggest problem (of many). This is a physics (and logic!) discussion, and you are not a physicist or logician but an engineer. Sit down and stay in your lane.
“I *know* […] what a watt is”
No you don’t. That’s simply another lie, isn’t it? (What’s that now, four?) If you did know that, you would be able to explain very clearly how a Watt isn’t something that an object can “emit” or “radiate” all by itself. Can you tell us how to determine whether we are in the presence of a Watt or not?
Yes, I do realize that in the limited domain of radio engineering it always looks like the objects you’re working with are “emitting Watts”, in all the scenarios you have encountered so far in your career, thus making it extremely easy to mislead yourself about what Watts are – but that is only because you’re ignoring the rest of the universe. Physicists of course can’t get away with such sloppy fantasies. Not if they want to be taken seriously. But that has never stopped you, has it?
“All you have is word play”
You mis-spelled “scientific definitions”. You have a serious problem with your spelling, even worse than your problem with logic. Are you planning to explain your self-contradictions (and lies!) any time soon? Or are you terminally incapable of a correct and coherent thought? The kind that Aristotle would have approved of?
Here is your non-Aristotelian self-contradiction again, in case you forgot:
“Radiation is *NOT* radiant flux.”
and
“Radiation is an ENERGY FLOW, it is a FLUX.”
“I did answer them, by pointing out the erroneous assumptions behind them. Those answers, of course, went right over your head, but that’s on you.”
Malarky! You claimed radiation is measured in joules. So I asked you how many joules there are in the output of a 100watt transmitter of radiation. I asked you how many joules there are in the output of a 1000watt transmitter of radiation.
You have yet to answer. The only one whose assumptions are wrong is you.
The rest is nothing more than equivocation and ad hominems. If you can’t support your assertion that radiation is measured in joules then get lost. No one wants to hear the rest.
“Malarky!”
No it isn’t.
“You claimed radiation is measured in joules.”
It is, and no, it’s not my claim. I pointed out where it came from.
“100watt transmitter of radiation [power]”
And I told you that there is no such thing in physics. It’s not my fault that you can’t grasp that. Physics is certainly mysterious, isn’t it? That’s why you didn’t study physics and went straight into engineering, where you didn’t have to bother yourself with such things. Other people do all the thinking for you, and produce approximations that you can use to create useful devices. But you have no clue why these are approximations.
Let’s take a closer look at two more of your claims, Tim:
“I *know* how radiation works.”
In the limited context of the radio engineering that you’ve been exposed to so far, yes, I’ll grant that you do. But not what it is. That’s the “science” part.
“What radiation intensity is”
No, that part can’t be true, because you told us that you don’t know what radiation itself is. The “intensity” of an unknown and undefined mystery phenomenon is obviously itself undefined and meaningless. Right? More science and logic, yes I know it’s hard, but try to stay focused here.
“You *never* show any actual math.”
That, of course, is because your comprehension problems are not due to a lack of mathematical equations. They’re much deeper than that. Indeed, they’re so fundamental that I can hear what’s left of Aristotle turning in his grave, all the way from here, halfway across the world. Look what you’ve done to the venerable old fellow!
Yes, I know that engineering is all about equations, and that’s all you’ve ever learned, so naturally you think that’s all there is. But science is about concepts. Do you know what a concept is, Tim? Of course not. You’ve never had to learn any, because your engineering professors told you when to apply each equation, and off you went into the world, blissfully ignorant and arrogant…
You think you can deride and defame Planck by implying his knowledge was limited in terms of heat and thermodynamics? Try again dude. You imply that you know more than him because you education came later.
You want to prove that, show us the equations that Planck has in his treatise that are wrong and show how they should have been derived.
Every time you see the IR leaving earth based on temperature and see an “ideal” curve, what is the name of that curve? What work is it based upon? Planck used Maxwell and Boltzmann works in his derivations. I suppose the works of those scientists are faulty also.
Give us a break from dealing with trolldom.
“You think you can deride and defame Planck”
I only pointed out that his ideas are now 125 years out of date. They were revolutionary at the time, and there’s nothing wrong with his concept of a quantum of energy (which isn’t what we are discussing). But his view of electromagnetic radiation was pretty primitive, as was everyone else’s at the time. Since then, the field of physics has advanced quite a lot. When did you go to school, anyway? 1890? Do you still think that “heat” is caused by “caloric” or “phlogiston”?
You didn’t answer one of my questions. Are Planck curves different today than when Planck derived the background and the equation? Are Maxwell’s equations different today than when he derived them? Have the Boltzmann and Planck constants been found incorrect or are they still used today?
Your statement is nothing more than an ad hominem hoping that will prove you correct. It doesn’t. It only shows that you can not prove their work incorrect using mathematics to do so.
“You didn’t answer one of my questions.”
Why would I? They aren’t relevant to the topic at hand, which is that Planck’s view of radiation physics is 125 years out of date. The field has moved on quite a bit since then. Not that you would know.
“using mathematics”
Mathematics isn’t going to do you any good at all if you don’t know what any of the concepts mean. And particularly if you can’t measure any of the things you are talking about. Formulas can be right or wrong, and you would have no clue either way, would you?
Exactly what I figured you would do. Deflection by creatively telling everyone that they are dumber than you. ROTFLMAO
I’m not the one who’s “deflecting”, Jim. I pointed out that Tim’s claim was incorrect. Planck’s antique opinions about a different topic aren’t going to fix that. And meanwhile Tim hasn’t done anything but deflect, lie, and contradict himself. Who is the “troll”, exactly? And why?
As for who is “dumber”, I pointed out that contradicting oneself, as Tim did, constitutes a complete abdication of rationality in all its forms. He also can’t read, nor is his grammar anything to write home about. Is that “dumber”? You tell me.
“ I pointed out that Tim’s claim was incorrect”
But you never said WHY it was incorrect. You just claimed that electromagnetic waves are measured in joules and not in joules per second. When asked how many joules a 100watt signal has you whiffed on answering.
Don’t deflect. Give an answer or admit that EM waves, i.e. radiation, are measured by intensity in W/m^2. And that you need a time interval involved to determine the number of joules transmitted.
“How many joules does a 100 watt EM wave have?”
The question isn’t even well-defined, so there is no logical answer. And you know that, which makes you a hypocritical lying self-contradictory troll. Not a great performance so far, to say the least.
The biggest problem is that there is no such thing as a “100 watt EM wave”. So it doesn’t “have” anything. That phrase is at best an engineering approximation, and at worst a complete fantasy. You can never prove that such a thing exists by itself. Because it doesn’t. That’s not how Watts work. Or EM “waves”, for that matter.
But if we sweep that “minor” issue under the rug for the sake of discussion, then in order to turn the rest of the question into a well-defined one, you would first have to specify whether you are talking about Joules of energy or Joules of work. You know those are not the same, right?
So I’ll answer the rest of this illogical question both ways, just to speed things up.
If you mean “Joules of work”, then 100 watts of EM radiant power, developed from object A to object B, corresponds to 100 Joules per second of EM radiant work, or in other words 100 Joules (of energy) per second transferred from object A to object B, via EM fields. That’s all by definition, of course.
If you mean “Joules of energy”, then the question makes no sense. It’s exactly the same as asking “how many Volts are there in 100 amps of current?” Can you answer that for me, Tim? Or would you rightly conclude that whoever asked such a question had no clue what he was talking about?
It’s perfectly well defined for anyone that knows anything about propagating EM fields.
*YOU* claimed that an EM wave was made up of joules. And a 100watt EM wave is certainly an EM wave. So is a 1000watt EM wave.
Field strength meters exist. If you had any real world experience in the subject you would know that.
I mean what *YOU* said. An EM wave consists of joules. So how many joules in a 100watt EM wave?
Which is what *I* said, at least partially. EM field strength is like pressure. In order to know what it is capable of, it has to be integrated over an area. Joules/sec tells you nothing if you don’t know the area over which it is being applied. Work is force over a distance, and F = PA. In this analogy Joules/sec plays the role of force: Joules/sec = Watt, and Watt/m^2 * m^2 tells you how much “force” is being applied.
“If you mean “Joules of energy”, then the question makes no sense. It’s exactly the same as asking “how many Volts are there in 100 amps of current?””
Gobbledy-gook. Word salad. EM wave intensity is Watt/m^2. That’s joules per second per unit area. You’ve never once done any calculus work with energy transfer. dF = I (cosΘ/r^2) dA. The r^2 is the inverse square law; go look it up. The cosΘ dA gives the perpendicular component of the incident radiative flux I. You have to integrate the perpendicular component of the incident EM wave over the area of the receiving object. That will give you the total watts received. Then you integrate the result over time to get the total joules received.
Do you even know how the E part of the EM wave propagates? How many joules are in the wave when it crosses 0 (zero)?
So far, everyone I’ve seen you criticize for not knowing something actually knows *MORE* than you on the subject. Give it a rest!
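For anyone who wants to check that integral numerically, here is a minimal sketch (all numbers hypothetical) for an isotropic point source and a flat disc receiver:

    # Numerically integrate dF = I * cos(theta)/r^2 * dA over a disc and
    # compare with the closed form. I = P/(4*pi) for an isotropic source.
    import numpy as np

    P = 100.0      # W, total radiated power (hypothetical)
    d = 2.0        # m, axial distance from source to disc
    R = 0.5        # m, disc radius

    rho = np.linspace(0, R, 100_000)              # radial coordinate on the disc
    r2 = d**2 + rho**2
    integrand = (P / (4 * np.pi * r2)) * (d / np.sqrt(r2)) * 2 * np.pi * rho
    phi_numeric = float(((integrand[:-1] + integrand[1:]) / 2 * np.diff(rho)).sum())

    phi_exact = (P / 2) * (1 - d / np.sqrt(d**2 + R**2))
    print(phi_numeric, phi_exact)                 # both ~1.49 W intercepted

Integrate that intercepted power over time and you get the joules delivered, exactly as stated above.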
“It’s perfectly well defined for anyone that knows anything about propagating EM fields.”
No it’s not.
“100watt EM wave”
There’s no such thing. No one has ever proven that there is. Power doesn’t work like that.
“Field strength meters exist.”
Sure they do. So what do you think they are measuring, and how?
“So how many joules in a 100watt EM wave?”
There’s no such thing as a 100 watt EM wave. The question is nonsense.
“EM field strength is like pressure.”
And that is exactly what I’ve been trying to tell you. Because electromagnetism is, in fact, a force – indeed one of the four fundamental forces of nature. Do we measure pressure (or force) in Watts, though?
The entire point of my physics lesson here is that objects emit EM field strength (pressure) based on their temperature. That’s all they do. This field strength consists of energy, which of course we measure in Joules. (And we can denote the force exerted by these fields in other units when it’s more convenient, although Watts is not one of them.) And note, very importantly, that emitting EM fields does not imply the loss of energy. This is in exactly the same way that a spring can exert a force without doing any work or losing any energy.
“EM wave intensity is Watt/m^2.”
If by “EM wave intensity” you mean “the magnitude of the Poynting vector”, then sure. But that magnitude (and consequent energy flow) depends entirely on everything in the environment – indeed every contribution of EM field strength (force) from, in the general case, every electromagnetic emitter in the universe. Not just whatever single radio transmitter or light bulb you happen to be currently looking at. Right? (or, in an isolated case, whatever EM field emitters are sharing the same isolated region of spacetime)
“Give it a rest!”
Why? You haven’t finished learning your physics properly yet. Not until you stop claiming that objects can “emit flux based only on their own temperature”. That’s a fantasy. Not physics.
me: “100watt EM wave”
I have both a 100watt EM wave transmitter and a 1000watt EM wave transmitter in my basement. And you are telling me that they don’t put out a 100watt and a 1000watt EM wave?
How about my 1200watt microwave transmitter I cook food with? It doesn’t put out a 1200watt EM wave?
You measure pressure in LBS/INCH^2. Exactly like measuring flux in WATTS/METER^2.
The amount of force exerted on a piston by a pressure flux is dependent on the area of the piston. The amount of “force” exerted on an object by an EM flux is dependent on the area of the object.
The EM wave is *NOT* a force. It is a flux just like pressure is a flux in fluid dynamics. If the area is 0 (zero) so is the force exerted. BUT the flux still exists!
“This field strength consists of energy, which of course we measure in Joules”
The strength of an electric field in an EM wave can be measured in V/m. Joules are not measured in volts/meter. The V/m in a propagating EM wave varies in amplitude from -V to +V. How many joules exist in an EM wave when the amplitude of the electric field is crossing 0 (zero)?
I asked you this one already. You whiffed on addressing it. Why?
“If by “EM wave intensity” you mean “the magnitude of the Poynting vector”, then sure. But that magnitude (and consequent energy flow) depends entirely on everything in the environment – indeed every contribution of EM field strength (force) from, in the general case, every electromagnetic emitter in the universe. Not just whatever single radio transmitter or light bulb you happen to be currently looking at. Right? (or, in an isolated case, whatever EM field emitters are sharing the same isolated region of spacetime)”
Word salad meaning exactly zero. The operative words (which you ignore) in your statement are “energy flow”. You didn’t say just “energy”, you said energy flow. What is an energy flow?
“And you are telling me that they don’t put out a 100watt and a 1000watt EM wave?”
Yes. Bizarre but true. Because the amount of power developed by your transmitters to the receivers depends on the energy in the environment that you are transmitting to. It is not an independent characteristic of your transmitter, despite the misleading labeling by the manufacturers. (They are making some assumptions about the environment in which you will operate their equipment, assumptions which are usually true. But you can make those assumptions false, and then see how much power your transmitters develop. Try it and see!)
“You measure pressure in LBS/INCH^2. Exactly like measuring flux in WATTS/METER^2.”
Yes. And what needs to happen in order for pressure to develop power (flux)?
“Joules are not measured in Volts/meter.”
In order to convert Joules to Volts you also need to know the charge in Coulombs, yes. What is your point? There are no Watts here. That is my point.
“What is an energy flow?”
That is another way of saying “work”, which is what happens when you have some energy at point A, and an energy gradient towards point B with a different energy, of course.
You might want to go back to school and reinvestigate how EM waves interact. Study Maxwell’s equations again. EM waves do not contribute to a universe-wide common field strength. Simple logic should tell you that. You might explain how your car radio would discriminate a specific signal from a universe-wide EM wave.
“EM waves do not contribute to a universe-wide common field strength.”
Who told you that? It wasn’t Maxwell. What do you think the Poynting vector tells us?
“Simple logic should tell you that.”
No, “simple logic” tells me no such thing.
“You might explain how your car radio would discriminate a specific signal”
There is a lot of frequency discriminating electronics going on in there, but the radio is indeed listening to the entire universe. If two transmitters are transmitting on the same frequency anywhere in the universe, the radio will pick up both of them. And if your radio is sensitive enough, it can indeed listen to a transmitting station from Alpha Centauri.
Planck’s research may be “antique” to you but it is still relevant since you have no ability to refute it other than ad hominems.
Maybe you can include Maxwell and Boltzmann while you are at it.
Planck’s research was relevant at the time, yes, although his description of EM radiation was primitive and somewhat poorly defined by today’s standards. That was 125 years ago. Physics has moved on since then, because 125 years is a very long time in physics. You haven’t moved on, though. Whose fault is that?
You can see that Tim’s claim is obviously false, because he can’t measure what he’s talking about. And as Willis the fisherman helpfully told us, anything that can’t be measured is an imaginary construct. Planck couldn’t measure Tim’s claim either, so asking him to help from beyond the grave is futile.
If you would like to show me which of Planck’s claims you think could back up Tim’s claim, I’ll be happy to refute that too. Just saying “read Planck” isn’t going to cut it. Nothing of Planck’s work that you have provided so far is relevant to Tim’s claim, of course.
The other way to tell that Tim has no clue what he is talking about is that he contradicted himself. Are you just going to pretend that that is completely normal behaviour for rational human beings? Even Aristotle knew better than that. And that was not just hundreds of years ago, but thousands. Try to keep up. Either that, or sit down and stay in your lane.
Same old shite, different verse. Funny how Planck’s constant and Boltzmann’s constant still carry on. Same with Maxwell’s partial differential equations. If all you have is “these folks are old and we’ve moved on,” then you have no substance to anything that follows. If you were the physics guru that you claim, then it shouldn’t be hard to show references or the math that shows all these fellows were mistaken in their theories.
Another example is Newton’s work. It has been shown that to use his laws one must meet certain assumptions, but that doesn’t make his work any less valid today than when he wrote it.
Refutation requires proof. If you can’t provide proof, then claiming something is wrong or calling people less knowledgeable than yourself is nothing more than self-aggrandizement, which is generally not done in polite society.
He thinks an EM wave is made up of “joules”. It’s not even obvious that he knows what a “joule” is, let alone joules/sec.
“He thinks an EM wave is made up of “joules””
That’s because it is. What do you think “radiant energy” means?
Why don’t you just tell us which of the following two contradictory claims you would like to stick with:
“Radiation is *NOT* radiant flux.”
or
“Radiation is an ENERGY FLOW, it is a FLUX.”
“That’s because it is. What do you think “radiant energy” means?”
An EM wave has a dual makeup: 1) oscillating electric and magnetic fields, and 2) photons.
The electric and magnetic fields carry energy based on frequency. Same for the photons. The *strength* of the electric and magnetic fields, i.e. their amplitudes, defines the amount of energy being transported. Same for the photons.
The EM wave is not made up of “joules”.
Think of an EM wave as a “bus”. Low-frequency EM waves are buses that are short in height, with a limited number of seats for small photons. It is the *bus* that transports the photons. Higher-frequency EM waves are buses with more height and more seats. It is still the bus that transports the photons. Short buses can have a range of seat counts, representing low-power vs high-power generation. Taller buses can have a range of seat counts, depending on low-power vs high-power generation.
The EM wave is still a bus, not a joule! The carrying capacity of the bus is defined by the metric of “flux”: think of it as the impact of the bus on the roadway. A short, long bus with lots of seats can transport just as many “W/m^2” as a tall bus with limited seating, i.e. the flux is the same, the impact on the roadway is the same.
Now, you can nitpick this analogy to death if you want, but it still suffices to show that you truly have few points of congruity with the real world. An EM wave is not “joules”, “joules” is just what the EM wave carries.
“The electric and magnetic fields carry energy”
Correct
“The EM wave is not made up of “joules”.”
What units do you measure “energy” in, Tim?
“Think of an EM wave as a “bus””
Not a bad analogy – I prefer “conduit”, but “bus” works too, sort of… let’s see how we get on with it.
“The carrying capacity of the bus is defined by the metric of “flux” [power],”
Okay, that analogy drove into the ditch quite quickly. No, power is not the “carrying capacity” of the bus – which is a constant regardless of what the bus is doing (e.g. 50 passengers). No, power is closer to the “current speed” of the bus (you can tell by the “per second” in the definition of a Watt). But how fast does the bus go at any given moment? The terminal where the bus started its journey doesn’t get to tell you that, does it? There are a lot of other factors involved, such as the horsepower of the engine (which doesn’t really apply in this analogy), wind speed (ditto), but also, more importantly for this particular analogy, the slope of the road. Right?
“you truly have few points of congruity with the real world”
I am not the one who made this unmeasurable claim:
“If a black body is isolated it radiates a flux F1”
That was you being completely divorced from the real world, wasn’t it? Of course it was.
“Okay, that analogy drove into the ditch quite quickly. No, power is not the “carrying capacity” of the bus – which is a constant regardless of what the bus is doing (e.g. 50 passengers).”
How did the passengers on the bus get there without the bus?
The bus (i.e. the EM wave) is what gets it all there. It *is* the power, the flux, the Watts/meter^2! Without the bus the energy just hangs out at the bus stop, goes nowhere, does nothing.
No, EM waves travel at the speed of light in a vacuum. There is no different “speed” for the EM wave. There is only how much power is delivered.
The speed of light, at least in a vacuum.
All buses (EM waves) travel at the same speed.
All buses (EM waves) travel at the same speed all the time.
The power delivered by an EM wave is related to the peak V/m value of the wave. The greater the peak V/m the greater the power delivered. The peak V/m value has nothing to do with the speed of the wave. The speed of the wave has nothing to do with the peak V/m of the wave.
There is a *reason* why you can’t tell me how many joules there are in a 100watt EM signal. That should be painfully obvious even to those untrained in physics. If radiation were joules, you could tell me how many joules are in every generated EM signal. Then you could tell me how many joules (i.e. energy) that EM signal delivers to an impacted surface.
*I*, on the other hand, *can* tell you how much power there is in an EM signal and how much energy is delivered to that impacted surface. An EM signal is POWER, watts, delivered per unit area. And a 100 watt signal delivers more POWER than a 10 watt signal.
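To make the V/m-to-power relationship concrete: for a plane wave in free space, the time-averaged intensity is S = E_peak^2 / (2 * eta0), with eta0 ≈ 376.73 ohms. A small sketch with illustrative numbers:

    # Peak field <-> intensity for a plane wave in free space.
    import math

    ETA0 = 376.730313                 # ohms, impedance of free space

    def intensity(e_peak):            # W/m^2 from a peak field in V/m
        return e_peak**2 / (2 * ETA0)

    def peak_field(s):                # peak V/m from an intensity in W/m^2
        return math.sqrt(2 * ETA0 * s)

    print(intensity(100.0))           # ~13.3 W/m^2 for a 100 V/m peak field
    print(peak_field(1361.0))         # ~1013 V/m equivalent for ~1361 W/m^2 sunlight

(Sunlight is broadband rather than a single coherent wave, so that last figure is only an order-of-magnitude equivalent.)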
“*I*, on the other hand, *can* tell you how much power there is in an EM signal ”
No you can’t, because there is no such thing. How are you planning to measure that “signal power” and prove it to me? And does your measurement depend on the electromagnetic field strength emitted by the measuring device, or not?
Neither Planck, nor Maxwell, nor Boltzmann, nor Newton ever claimed (as far as I am aware) that “objects emit flux”. That’s because they were physicists, and therefore they knew better. You aren’t, and therefore you obviously don’t. If they made that claim, please show me.
Perhaps you should quit googling for the word flux and actually study what these great minds experimentally derived.
From “6.2: Electric Flux”, Physics LibreTexts.
From the lecture slides “2DL Spring 2009 notes 6”:
See the section that describes Planck’s Radiation Law: “Max Planck, 1900, spectral energy density of blackbody radiation”.
If that equation is integrated over the entire spectrum, guess what equation you end up with? The Stefan-Boltzmann Law:
e = σT⁴
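For the record, the integral in question (the standard textbook result: Planck’s spectral exitance integrated over all frequencies) is:

    \int_0^\infty \frac{2\pi h\nu^3/c^2}{e^{h\nu/kT}-1}\,d\nu = \sigma T^4,
    \qquad
    \sigma = \frac{2\pi^5 k^4}{15\,c^2 h^3} \approx 5.67\times10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}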
Your word game is very tiresome. Why don’t you quote some mathematical equations describing what your definition of “flux” is. I don’t think you can do so! Show what the physical units are to achieve your definition.
“You can see that Tim’s claim is obviously false, because he can’t measure what he’s talking about.”
Malarky! You’ve obviously never heard of a field strength meter! When I was a junior in college, I worked one summer for a professor from Wichita State University doing field strength measurements of broadcast radio stations along a new 345 kV power line route. The results were to be used by (then) Kansas Gas & Electric to evaluate claims of radio interference from people along the route.
You don’t *NEED* to measure what you can prove using math. Gauss’ Law can be proved by math. You don’t need to measure all the charges in a closed volume to verify the electric field strength they cause.
You sound like a high school sophomore taking his first natural science course. You can’t refute anything with references or math. All you have are claims that people don’t know what they are talking about. Give it a break, man!
“field strength meter!”
We aren’t talking about field strength meters, Tim. Stop deflecting. We’re talking about definitions. Which of the following two contradictory ones would you like to stick with?
“Radiation is *NOT* radiant flux.”
or
“Radiation is an ENERGY FLOW, it is a FLUX.”
Once you know what radiation is, then we can talk about how to measure it, and what the measurements mean.
“Give it a break man!”
Not until you stop spouting unphysical contradictory nonsense and fantasies about objects “emitting flux”. That’s not what “power” means, because it’s not something that can be “emitted”. What, precisely, do you think a “watt” is?
Of course we are! EM waves are made up of an electric FIELD and a magnetic FIELD. Those FIELDS are measured with FIELD strength meters!
No, you are just trying to weasel out of having to admit your initial assertion is garbage.
You don’t have a clue as to what radiation is. The proof is you saying Maxwell equations aren’t really correct any longer, that “physics” has moved past them.
You don’t have a clue as to what power is, i.e. a watt. A low-frequency, high-power EM wave can have the same amplitude of field strength as a high-frequency, low-power EM wave. It is the size of the EM wave that determines the flux.
Since you can’t state how many “joules” a low-freq/high-power EM wave has vs a high-freq/low-power EM wave, your “definition” of an EM wave being made up of joules rings pretty hollow. It’s even worse when you try to calculate how much energy is imparted to an object when an EM wave is incident on it based on the number of “joules” in the EM wave.
“You don’t have a clue as to what radiation is.”
This is coming from the clown who said
“Radiation is *NOT* radiant flux.”
and then
“Radiation is an ENERGY FLOW, it is a FLUX.”
Would you like to try that again, Tim?
“The proof is you saying Maxwell equations aren’t really correct any longer, ”
I never said that, of course. That is simply another lie.
If you don’t have a flux in the hose going to your garden then it doesn’t matter how big the hose is, how much water is behind the dam supplying your water, or whether your faucet is turned on or off – you won’t supply any water to your garden.
If you don’t have an EM wave, i.e. a flux, carrying power through the “aether”, then you won’t deliver any power to anything.
Radiation *is* a flux. It is an EM wave. It is an EM wave carrying power to a receiving surface.
you: “But his view of electromagnetic radiation was pretty primitive, as was everyone else’s at the time. Since then, the field of physics has advanced quite a lot.”
Hmmmmm, so this is *NOT* saying that Maxwell was wrong, eh?
Exactly what, then, *did* you mean to imply? For if the equations are correct then it doesn’t matter how long ago they were developed. Arguing that age is a factor is just the argumentative fallacy of Argument to Age. The issue is whether the equations are right or wrong, not how old they are.
“Radiation *is* a flux.”
Then why did you say
“Radiation is *NOT* radiant flux.”
?
Word games are the antics of clowns.
Phycs.org ?
Fizzy orcs.
I will leave aside the stupidity of how they did the study, but I love the absolutely crazy leap to “daily volatility creates ‘a climate roller coaster’ harmful to public health”.
If that is the biggest, or even a measurable, risk in any person’s life, then I have got to say that person is already dead; they just haven’t been declared yet. This is beyond pseudoscience; this is in the certifiable-nutcase category.
Those who wrote this stuff need to be rounded up and placed in an asylum.
Ditto those who published it.
A couple of months ago it was all about climate whiplash and now it’s roller coasters. The climate ‘crisis’ caters for all tastes when it comes to fun.
Everything in the universe exists for one purpose – to prove global warming.
And the most vulnerable groups are the minorities favored by wokeness.
From the study:
“Conducted by researchers from Nanjing University and the Institute of Atmospheric Physics of the Chinese Academy of Sciences (CAS), the study was recently published in Nature Climate Change.”
Why do I get the sneaking hunch here that this study, since it comes from China, is an attempt to undermine fossil fuel production in the U.S. and the western world. I could be wrong about this hunch, but I can’t get the thought out of my head.
The notion that the Chinese may be using the climate scare narrative for the same purpose as climate alarmists here in the West leaves me mistrusting what any alarmist says about it all the more.
One might conclude that this is another ‘medical scare.’ Bold mine. From the paper:
“We use an optimal fingerprinting method to detect and attribute the observed changes in the amplitude of seasonal diurnal temperature difference (SDTD). This approach is based on a generalized multivariate linear regression model … The noise term (), representing internal climate variability, was estimated from two independent sources: … We compute the spatial correlation between long-term changes in extreme day-to-day temperature variability and long-term changes in the DTD of various variables within 40° S–40° N across reanalysis datasets and CMIP6 models … The mortality dataset collected in this study includes the following: (1) the number of people who died from non-accidental causes, cardiovascular diseases and respiratory diseases in the USA every day from January 1987 to December 2000, which is derived from the Internet-Based Health and Air Pollution Surveillance System of Johns Hopkins University and contains 29 cities…”
“…Nanjing University…”
Are you sure that’s not the Wuhan University?
This is certainly a possibility. Since China continues to build non-renewable power plants at a high rate, they are not falling for the conclusions of this “study.”
Anthony,
This is a familiar problem with the climate alarmists. It goes back to at least 1990 and the work of the climate audit group.
Thanks for bringing up the issue!
To all the Australian posters here, my condolences on the horrible tragedy at Bondi Beach. Also for the abysmal PM who overlooks the real problem and wants more gun control. I fear you will suffer more incidents like this.
Control of ideologies / religions would be far more effective.
Just one religion. The Amish have harmed no one.
Ve moost contruel de eedeeologees! Thought police to the rescue. Scrub their brains, scrub their brains!!!
Story: More climate legal shenanigans with a bit part by everyone’s favourite climate crusader.
Climate warrior Mann co-authors work funded in part by attorney with interest in key climate suit | Just The News
“It is an unusual alliance in the world of medicine that some ethics experts say blurs ethical lines. This is particularly true when doctors refer patients to attorneys who provide financial support for their medical research,” the Journal reported.
The excerpt above was from the article cited in Charlie’s post. I would add that the ethical lines were not simply blurred. They were totally erased.
Mann’s attorney paid for research which supported the attorney’s case and would earn the attorney a substantial contingency fee if he prevailed. The courts are taking a dim view of attorneys funding supposedly scientific research which supports their cases.
If that isn’t Ambulance chasing I don’t know what is.
We are warming the coldest places in the high latitudes of the Northern Hemisphere the most, especially during the coldest times of year. Warming the lower latitudes by much less. This has decreased the meridional temperature gradient and has DECREASED the temperature swings from fronts that cause the biggest changes.
We have warmed the nights much more than the days which has DECREASED the diurnal temperature changes.
This is based on the physics of greenhouse gas warming, which includes models that are using legit physics and meteorological principles.
This is based on all the observations and empirical data of the real world since the mostly beneficial, modest global warming started.
And this indisputable law will continue to rule as our climate OPTIMUM for life continues.
The warmer nights are mostly in UHI impacted areas.
“across low- to mid-latitude regions”
But CO2 is well mixed in the atmosphere, so this effect should appear at all latitudes.
Forgot the /s
Thanks, Sparta!
True that CO2 is well mixed in the global atmosphere.
However, one of the most interesting facts about CO2 and H2O as greenhouse gases is that some of their radiation absorption bands overlap and there is a maximum amount of radiation that can be absorbed at each band.
Turns out that in warm, HUMID places some of those bands are already saturated by H2O absorption. It doesn’t matter how much CO2 you add in those regions, there’s no more radiation in those saturated bands left to absorb!
That’s one of the reasons the coldest, driest regions of the Northern Hemisphere are warming several times faster than the lower latitudes.
This is also why the warming is much greater in those locations at the coldest, driest times of year. It’s hard to call it anything but beneficial to most life on this planet that the most dangerously frigid air masses are the ones warming the most.
Deserts, because of the dry air, have not saturated the long wave absorption bands from H2O and have an elevated warming from the increase in CO2, especially at night.
The tropics have the least warming for the same reason.
The result has been to reduce the meridional temperature gradient.
Cold fronts have less extreme intensity. A weaker temperature gradient means slightly weaker jet streams and less violent tornado numbers in areas that experience that type of weather.
Observations confirm around a 40% drop in the most violent tornadoes.
Actual REPORTS for ALL tornadoes have increased the past 3 decades because of technology.
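For the band-saturation point above, a cartoon Beer-Lambert illustration (a single idealized band, not a line-by-line radiative model):

    # Absorbed fraction 1 - exp(-tau) flattens out as optical depth grows,
    # so extra absorber in an already-opaque band buys almost nothing.
    import math

    for tau in (0.1, 1.0, 3.0, 6.0, 12.0):
        print(tau, round(1.0 - math.exp(-tau), 4))
    # 0.1 0.0952 | 1.0 0.6321 | 3.0 0.9502 | 6.0 0.9975 | 12.0 1.0

Real band wings and emission-height effects complicate the picture, but the toy model shows why already-saturated H2O bands in humid regions leave little for added CO2 to absorb there.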
The term “greenhouse gas” is basically scientific nonsense.
Scientifically, H2O and CO2 in the atmosphere are “radiatively active” gases due to their atomic structure.
Only H2O acts remotely like a greenhouse, in that it slows the natural convection rate.
CO2 has no effect on the natural convection rate, and any possible theoretical radiative effect is so small and so overwhelmed by natural air movement, as to be totally irrelevant.
You can’t make thermometers hotter by adding either CO2, H2O, or any other gas at the same temperature to air.
Sad fact of life, I guess, that GHG believers just refuse to accept. Ignorant, gullible, or just quite mad – who knows?
H2O has a molecular dipole moment due to its structure and interacts with EM fields, gaining kinetic energy. It is this phenomenon that is the basis for radar ranges (aka microwave ovens).
CO2 does not have a permanent molecular dipole moment. The dipole moments formed by the C-O bonds are in equal and opposite directions and cancel at the molecular level.
CO2 has no effect on the natural convection rate, and any possible theoretical radiative effect is so small and so overwhelmed by natural air movement, as to be totally irrelevant.
+++++++++++
bnice,
This is completely ignoring the indisputable atmospheric physics and science. Not just theories but proven with all studies using empirical data/observations.
“Decadal variation of longwave downwelling and net radiation as observed at the surface with implication for climate sensitivity: Based on pyrgeometer and pyranometer measurements”
https://www.sciencedirect.com/science/article/pii/S2950630124000036
I’m not sure why this anti-science idea that CO2 is not contributing to the warming has spread to so many people, but it destroys all credibility of those who profess it as science.
The authentic science also shows us that the modest warming from CO2 is BENEFICIAL warming for most life on our planet(that would be ok with a little more) and CO2 increasing to DOUBLE the current amount would be massively beneficial during the current climate OPTIMUM.
However, make no mistake, at least half of the warming the past 150 years has come from the increase in CO2 from around 290 ppm to the current amount of near 430 ppm.
A single paper. It correlates downwelling IR to increasing temperature and claims the downwelling IR is due to CO2. It is laced with unstated assumptions.
It ignores the sun, treating solar EM as a constant.
It is an interesting study, but not nearly as conclusive or authentic as you claim.
Any change in atmospheric temperature due to CO2 comes from the changing concentration changing the specific heat capacity (Cp), which is also affected by a variety of other factors.
Claiming CO2 is responsible for xx% is not defensible given the number of known factors, added to what we do not know.
You are, in effect, defending CO2 as the “control knob.”
Thanks, Sparta!
What you stated doesn’t even make sense.
This study was an actual measurement of the real changes from the increase in greenhouse gases, SOLAR RADIATION and albedo using the actual empirical data.
In this study, they used pyranometers to measure the amount of short-wave radiation from the sun (which you claimed they ignored), as well as pyrgeometers to measure the long-wave radiation from greenhouse gases (mostly CO2 and H2O).
The increase in long wave radiation that they measured was the indisputable increase in long wave radiation from greenhouse gases.
Not maybe, not possibly but it was the scientifically, accurately measured, indisputable empirical data that measured the increase in (mostly) CO2 and H2O forcing.
This is what they found:
https://www.sciencedirect.com/science/article/pii/S2950630124000036
During the same period (1998–2021), the net radiation at the surface increased at all sites, except at the South Pole. The average annual rate of net radiation was +0.312 W m⁻²/a, for which longwave downwelling radiation, shortwave global radiation and the decrease in albedo contributed 61%, 30%, and 9%, respectively.
The accurately measured empirical data/observations are ALWAYS the metric that tells us conclusively what happened.
Not models or theories, THE EMPIRICAL DATA.
Authentic science does not dispute the accurately measured empirical data. If you want to believe in something else, it’s NOT authentic science.
+++++++++++
Every legit scientist, including the skeptics, ACCEPTS the physics of greenhouse gas warming. There is disagreement on the exact forcing amount, but the physical laws are irrefutable and the increase in CO2 is very likely contributing at least 50% towards the beneficial warming.
Here’s a handful of the very highly credentialed and widely followed scientists, with a small sample of their work on greenhouse gases, mainly CO2.
(I’m an atmospheric scientist but nobody cares about my work:
Death by GREENING!
https://www.marketforum.com/forum/topic/69258/)
++++++++++++++++++
ON CLIMATE SENSITIVITY
by Richard S. Lindzen, Ph.D.
with review assistance from Roy W. Spencer, Ph.D.
https://co2coalition.org/wp-content/uploads/2021/08/On-Climate-Sensitivity.pdf
A Critical Review of Impacts of Greenhouse Gas Emissions on the U.S. Climate
https://www.energy.gov/sites/default/files/2025-07/DOE_Critical_Review_of_Impacts_of_GHG_Emissions_on_the_US_Climate_July_2025.pdf
What “increase” in temperature are you talking about? Tmin? Tmax? Both?
“modest warming from CO2” does nothing but show that using temperature as a proxy for “heat” doesn’t work. The higher the surface temperature goes the greater the heat loss from surface radiation.
*HEAT* is the metric that is of interest, not temperature. Yet climate science is absolutely adamant about not changing over to modeling heat instead of temperature. Ask yourself why that is. They’ve had the data for at least 45 years to do enthalpy instead of temperature. By now we should have a complete set of literature focused on enthalpy. But we don’t. Hmmmmm……
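A toy comparison makes the point (round-number constants, hypothetical humidities): moist enthalpy h = cp*T + Lv*q can differ a lot between two air masses at the same temperature.

    # Same temperature, very different heat content.
    CP = 1005.0         # J/kg/K, dry air at constant pressure
    LV = 2.5e6          # J/kg, latent heat of vaporization

    def moist_enthalpy(t_celsius, q):          # q = specific humidity, kg/kg
        return CP * (t_celsius + 273.15) + LV * q

    dry = moist_enthalpy(30.0, 0.005)          # hot, dry air (hypothetical)
    humid = moist_enthalpy(30.0, 0.018)        # hot, humid air (hypothetical)
    print(dry, humid, humid - dry)             # ~32.5 kJ/kg more in the humid sample

Two 30 C afternoons, one in the desert and one on the coast, carry very different amounts of energy, which is the Miami-vs-Las-Vegas point in thermodynamic terms.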
Thanks, Tim!
I understand your point. However, science needs tools, methods and ways to express ideas that are practical, while at the same time not destroying the meaning and in fact assisting in the understanding.
We could decide to express H2O as a count of molecules, which would be different for ice, liquid, or vapor, but it’s much easier to record liquid water in rain gauges and use a ruler to measure snow in inches, even though some snow in warmer air has more molecules’ worth of H2O in it than other snow in colder air.
‘*HEAT* is the metric that is of interest, not temperature.’
From the point of view of classical thermodynamics, understanding the behaviour of the global climate system requires us to hold three metrics to be of primary interest, namely: Temperature, Enthalpy and Entropy. Without taking accurate coincidental measurements of all three of these fundamental variables at frequent and regular intervals, we don’t have a hope of being able to understand the system’s thermodynamic behaviour, let alone of being able to build and evolve predictive models of it. At the present time we are simply not technically capable of taking such measurements, even with satellites, and so it looks to me as though we are currently swimming in a boundless sea of possibilities in regard to the global climate system’s near-term, mid-term and long-term trajectories.
NASA has reports that show CO2 is not “well mixed” in the common-sense definition of the phrase. There are swirls and eddies. CO2 is higher in urban areas than rural ones, per measurements.
“Well mixed” implies an equilibrium state, which does not exist.
Thanks, Sparta!
I agree with that.
CO2 levels are seasonal too.
During the Northern Hemisphere’s Winters, with most plants in the mid/high latitudes dead or dormant, CO2 emissions EXCEED CO2 consumption by plants. So there is a surplus that accumulates and increases the concentration in the atmosphere.
During the growing season of the Northern Hemisphere, active plants and photosynthesis gobble up more than the emissions, and we actually observe a DROP in the concentration, but not below the amount that accumulated during the Winter.
The seasons in the Southern Hemisphere have less impact because that hemisphere is dominated by more water compared to the Northern Hemisphere with more land and plants.
Some of the CO2 is also sequestered in other ways, including being absorbed by the oceans but the indisputable law of photosynthesis which uses CO2 as the building block for all of life is what takes full advantage of these increasingly beneficial amounts of atmospheric CO2!
Meaning that parameterizing the concentration of CO2 in the climate models will *never* accurately represent the reality of the Earth’s biosphere.
In winter in Canada, CO2 hibernates. In January in Winnipeg, the average temperature is -20° to -10°C. I would like to round up the greenie crowd for a winter vacation in Winnipeg. Two activities are really big: ice fishing and ice hockey.
COP 31 in Winnipeg in the winter? Fun times had by all 50,000.
From the above article:
“. . . approximately 96% of NOAA climate stations fail to meet the National Oceanic and Atmospheric Administration’s (NOAA) own siting requirements, and are corrupted by localized heat sources such as asphalt, rooftops, HVAC exhaust, machinery, or reflective surfaces.”
Hey, it’s not happening just in the USA with NOAA climate stations, but to an even worse degree in foreign countries, where most temperature monitoring stations, due to economic limitations, cannot be, and are not, sited away from UHI-affected locations.
When you do use data from a long-term, nearly pristine site, like, say, Valentia…
…you find that the average temperature in the 1930s and 40s was WARMER than in the first two decades of this century.
From the MW Dictionary of Geographical Names:
Valentia is an island in SW County Kerry, in the Atlantic south of the entrance to Dingle Bay.
Where did you get temperature data for Valentia? You live in Oz. How did you know about Valentia?
“How did you know about Valentia?”
Research.
Valentia has actual data stretching way back, and the site and its surroundings are pretty much the same now as when it was first installed… a rare site indeed 😉
Its proximity to the UK makes it a good reference point for CET and the Met Office farce.
The actual data in the graph below comes from a study by a researcher at the Irish Met service, in which they compared the Tmin/Tmax average (blue) to temperatures taken hourly and averaged (red).
They started taking the hourly measurements in 1944, iirc.
The hourly average almost always ends up above the Tmin/Tmax average.
Even looking at the graph, you can see that the general temperature during the 1930s and ’40s was higher than in the first two decades of this century.
(Tmax + Tmin)/2 is always more than 10% too low.
Hourly data gives a piecewise (Riemann-sum) approximation to the true daily temperature integral, which is much more accurate.
So, historically, if you change from using (Tmax + Tmin)/2 to using an hourly-based average (or finer)…
…you create an artificial warming trend.
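The arithmetic behind that gap is easy to check with a toy diurnal curve. This is a sketch with an assumed curve shape, not the Valentia data; the sign and size of the difference depend entirely on how the day’s hours are distributed between Tmin and Tmax. Here a short, sharp pre-dawn dip leaves most hours near the maximum, so the hourly mean lands above the min/max midpoint, the direction described above:

```python
# Sketch: (Tmax + Tmin)/2 vs a true hourly mean for one assumed day.
# A brief pre-dawn minimum means most hours sit near the maximum,
# so the hourly mean comes out above the min/max midpoint.
import numpy as np

hours = np.arange(24)
temps = 14.0 - 6.0 * np.exp(-0.5 * ((hours - 5.0) / 2.0) ** 2)  # dip at 5 am

minmax_mean = (temps.max() + temps.min()) / 2.0  # traditional climatology method
hourly_mean = temps.mean()                       # piecewise-integral estimate

print(f"(Tmax+Tmin)/2: {minmax_mean:.2f} C")
print(f"hourly mean:   {hourly_mean:.2f} C")
print(f"difference:    {hourly_mean - minmax_mean:+.2f} C")
```

Splicing the hourly method onto a historical min/max record without accounting for that offset would shift the later part of the series, which is exactly the artificial-trend risk described above.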
Here is a really good link to that article.
“Comparing temperatures: past and present.” Some quality data analysis from an interesting angle. | Tallbloke’s Talkshop
This has made me concerned that the warmists do not want to exclude data from non-pristine (highly UHI-impacted) sites, since doing so would make their arguments fade away.
Not only do they not exclude it, they make it the target of their adjustments.
I bet the alarmists are annoyed at USCRN. It limits how much warming they can fake in the USA.
No warming except for a minor step at the 2016 El Nino, which is used by ruler monkeys to show a trend.
NON-CO2 El Ninos… it’s all they have to show warming…
…and they keep proving that fact. 🙂
Ummmmm . . . would “the average temperature in the 1930s and 40s” be for Valentia, for all of the UK, for all of Europe, for all of the Northern Hemisphere, or for all of Earth?
And why talk about just “the 1930s and 40s” compared to just “the first two decades of this century,” when the current Quaternary Period ice age began some 2.6 million years ago and, moreover, the current Holocene interglacial (warm) period began only some 12,000 years ago but is expected to last another 20,000 to 30,000 years based on historical precedents?
New word claimed for the cause: “fingerprinting.”
RIP fingerprints; it was a good run in crime investigation.
Given the ongoing climate warfare, co-opting “fingerprinting” could be a deliberate tactic.
From the article: “Phys.org’s press release asserts that “extreme day-to-day temperature changes have become more frequent and intense across low- to mid-latitude regions,” with researchers’ “optimal fingerprinting” methods confirming greenhouse gases as the primary cause.”
This is just a blatant lie. They haven’t proved anything. It’s all speculation and assumptions.
Yet the “global warming” meme would have us believe that cooler temperatures (the nighttime lows) will rise faster than the hot temperatures, thus reducing the day-to-day temperature changes.
They really cannot keep their little fairy-tale fantasy consistent, can they. 😉
Read in depth, it is modelling.
Very nice, Anthony. More proof that these guys have nothing. They have lost, and it is an ugly thing to see highly educated people compromise their values for a little notoriety and money.
I’d say we do not really understand climate variation until we can explain the Little Ice Age. That event, lasting hundreds of years, happened at the peak of an interglacial warm period! We are now obviously warming up again from the LIA, and this recovery started long before any CO2 emissions to speak of. Can we at least agree on the facts? Then try to find some explanations based on those facts, and not on dogmatic crony politics…
To paraphrase the Clinton campaign, “It’s the sun, stupid.”
Before one can even start to evaluate an energy system, one needs precise, accurate information about the energy source.
The energy source for the Earth’s energy systems is the sun.
Other celestial phenomena contribute, but the sun dominates.
So climate change is not only bringing us hydrological whipsaws but also thermal whipsaws. How did I make it to 62 years old?!
I bet any $$ that the day-to-day volatility in max temperatures has not increased in the last 100 years, using real station observations instead of their modeled fantasy land.
For info on daily Tmax and Tmin temperature data, go to:
https://www.extremeweatherwatch.com/cities/adelaide/average-temperature-by-year
From the first page, scroll down to “2025”.
Tmax and Tmin data are displayed for every day, and data for each month are available.
This study has as much legitimacy as the one that says climate change is costing me 2 minutes of sleep per night.
This is false and it should be corrected. Statements like this are why WUWT is not taken seriously by mainstream science institutions.
Just follow the link (also available from the WUWT USCRN side panel) and check the trends for yourself and you’ll see this.
Since USCRN started in Jan 2005, it has a warming trend of +0.46C (+0.82F) per decade (to Nov 2025). This compares to the adjusted ClimDiv trend over the same period of +0.38C (0.69F) per decade.
The exact opposite of what the statement cited above claims!
Even over their relatively short period of common measurement the trend lines are now noticeably diverging, with the sites that “avoid artificial heat sources by design” warming at a faster rate than those that are not ideally sited and which are therefore adjusted to compensate!
So, if anything, it seems that the adjustments are introducing a slight cooling bias.
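For what it’s worth, the per-decade figures being argued over here are just ordinary-least-squares slopes on monthly anomalies, scaled by 120 months. A sketch of the computation on synthetic placeholder series, not the actual USCRN or ClimDiv data:

```python
# Sketch of how per-decade trends like those cited are computed:
# an OLS slope on monthly anomalies, scaled to C per decade. Both
# series are synthetic placeholders, not actual USCRN/ClimDiv data.
import numpy as np

def decadal_trend(anoms: np.ndarray) -> float:
    months = np.arange(anoms.size)
    return np.polyfit(months, anoms, 1)[0] * 120.0  # 120 months per decade

rng = np.random.default_rng(42)
n = 12 * 21                                         # roughly Jan 2005 - Nov 2025
series_a = 0.0035 * np.arange(n) + rng.normal(0.0, 0.3, n)
series_b = 0.0032 * np.arange(n) + rng.normal(0.0, 0.3, n)

print(f"series A: {decadal_trend(series_a):+.2f} C/decade")
print(f"series B: {decadal_trend(series_b):+.2f} C/decade")
```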
OMG, stop this stupidity!!!
Climdiv is adjusted TO MATCH PRISTINE SITES.
The USCRN sites ARE NOT WARMING FASTER.
The only difference between ClimDiv and USCRN is that the algorithm used to adjust ClimDiv has gradually been improved.
There is statistically no difference between the two, but the fact is that the best-estimate trend in the so-called ‘pristine’ one is warming slightly faster than that of the adjusted one.
The exact opposite of the false claim made in this article.
USCRN is warming even faster than the global land average over the same period (+0.43C per decade). So much for UHI causing all the warming!
You are truly clueless, aren’t you?
Comparing fake, not-real data, “adjusted” to match, with actual real data shows absolutely nothing except how well they have carried out the fake adjustments.
ClimDiv is a totally superfluous and meaningless series; the only thing it shows is that they can take whatever dubious data they want and make it sort-of match whatever they want… SO WHAT!!!
There is no warming in USCRN except for a slight step at the 2016 El Nino.
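That “step plus ruler” point can be illustrated directly: a series that is flat on either side of a single step still regresses to a positive linear trend. A sketch on synthetic data, not the USCRN record:

```python
# Sketch: a series that is flat, steps up once (e.g. at an El Nino),
# then stays flat again, still regresses to a positive linear trend.
# Illustrative synthetic data, not the USCRN record.
import numpy as np

months = np.arange(240)                      # 20 years, monthly
series = np.where(months < 132, 0.0, 0.4)    # 0.4 C step at month 132 (~2016)
series = series + np.random.default_rng(3).normal(0.0, 0.2, months.size)

slope = np.polyfit(months, series, 1)[0] * 120.0
print(f"fitted trend: {slope:+.2f} C/decade, despite zero trend on either side of the step")
```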
It appears that you have done a simple linear regression on the time series. Have you done any time series analysis at all? Try this site for basic information, especially since you are trying to forecast an increase in temperature.
https://www.statology.org/the-definitive-introduction-to-time-series-analysis/
I would also recommend using separate Tmax and Tmin so you can show what is actually warming.
Linear regression is a type of time series analysis. It’s the one Roy Spencer, for example, uses in his monthly warming-rate updates for UAH. I don’t see too many complaints from you about that.
“… especially since you are trying to forecast an increase in temperature.”
No I’m not. Where did I say that? I just pointed out that Anthony Watts is wrong when he states in this article that USCRN “…shows smaller warming trends than the older, urban-contaminated networks.”
USCRN shows faster warming than the adjusted sites. It’s right there in front of your eyes!
Now that you know the exact opposite of what Watts says in this article is true, are any of you self-described ‘skeptics’ going to challenge his misinformation, or are you really all just cultists at heart?
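One thing both sides of this exchange skate past: an OLS trend comes with error bars, and over a roughly 20-year window of noisy monthly data the confidence intervals of two networks can easily overlap. A sketch on synthetic data; note the naive interval below also ignores autocorrelation, which widens the real uncertainty further:

```python
# Sketch: an OLS trend is an estimate with a standard error. With
# monthly noise this size, two ~20-year best-estimate trends can
# differ yet remain statistically indistinguishable. Synthetic data;
# the naive CI ignores autocorrelation, which widens it further.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(7)
months = np.arange(12 * 21)
series = 0.0035 * months + rng.normal(0.0, 0.3, months.size)

fit = linregress(months, series)
slope_per_decade = fit.slope * 120.0
ci95_per_decade = 1.96 * fit.stderr * 120.0
print(f"trend: {slope_per_decade:+.2f} +/- {ci95_per_decade:.2f} C/decade")
```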
Who cares? Adding CO2 to air doesn’t make thermometers hotter – not even a tiny bit.
Thermometers respond to heat. Measure away, adjust as much or as little as you like. A complete waste of time and effort, but that’s the nature of OCD.
Here are 3 graphs of CRN. Where exactly is warming taking place?
Sorry I am late to this discussion, but emotions are raw over Sunday’s terrorist attack on innocent Jewish families celebrating Hanukkah at Bondi Beach. Among the many victims were a few Holocaust survivors and a 10-year-old girl. Condolences to the victims and their families.
Back to the subject at hand: this is my example of the UHI effect of the megacity Sydney compared to the populous coastal cities of Newcastle, to the north, and Wollongong, to the south. The graphs are plotted from the temperature data published on the BoM website.
The red trend line is the BoM’s estimate of the land mean-temperature warming over all of Australia since 1910. Note that my plot is of mean maximum summer temperatures (Dec/Jan/Feb).
Make of it what you will.
Nice graph. It pretty much shows UHI as a factor that can be “spread out” by averaging.
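The urban-versus-neighbour comparison described above can be sketched by differencing the big city’s trend against the mean of its smaller neighbours to expose a UHI excess. The series below are synthetic placeholders with assumed trends, not the BoM data in the graph:

```python
# Sketch of the urban-vs-neighbour comparison: difference the big
# city's trend against the mean of nearby smaller cities to expose
# a UHI excess. Synthetic placeholder data, not the BoM record.
import numpy as np

years = np.arange(1910, 2025)
rng = np.random.default_rng(1)

def trend_per_century(series: np.ndarray) -> float:
    return np.polyfit(years, series, 1)[0] * 100.0

big_city    = 25.0 + 0.012 * (years - 1910) + rng.normal(0.0, 0.4, years.size)
neighbour_1 = 25.0 + 0.007 * (years - 1910) + rng.normal(0.0, 0.4, years.size)
neighbour_2 = 25.0 + 0.007 * (years - 1910) + rng.normal(0.0, 0.4, years.size)

uhi_excess = trend_per_century(big_city) - trend_per_century(0.5 * (neighbour_1 + neighbour_2))
print(f"big-city trend excess over neighbours: {uhi_excess:+.2f} C/century")
```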
Today, people look from their windows and can see that ‘global warming’ is a lie. Next these brainwashed climate idiots will be telling us that ‘global warming’ is causing global cooling. Oh wait…they already did that.
We were told back in the 1970s that a cooler period was coming, and we are now seeing that slowly being realised. Mother Nature’s clock doesn’t operate on human time.
But Greta can see CO2. She said so.
Not a single climate classification of any location in the US, including Alaska, has changed in the past 70 years. Hardiness zones are the same as in the ’50s.
What part of climate has changed?