Guest Essay by Larry Hamlin
NOAA has updated its Global Time Series average temperature anomaly data through May 2024. The results clearly indicate that the 2023/2024 El Nino event continues to weaken, as demonstrated by the data presented and discussed below.
The NOAA data presented below is plotted over a roughly 30-year interval, from January 1995 through May 2024, to allow greater visibility of the monthly changes occurring during this most recent portion of the climate record.
The NOAA Global Land and Ocean average temperature anomaly data is shown below with both graph and table formats.
The NOAA Global Land and Ocean average temperature anomaly through May 2024 has declined further from the November 2023 El Nino peak of 1.43 degrees C to 1.18 degrees C (338th of 353 measured values). This result is also below the April 2024 value of 1.30 degrees C (345th of 353 measured values), indicating that this most recent El Nino event continues to weaken.
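For readers who want to check the ranking arithmetic used here (for example, May 2024's 1.18 degrees C being the 338th of the 353 monthly values from January 1995 through May 2024), a minimal sketch is given below. The file name and column layout are illustrative assumptions; the monthly series itself would need to be downloaded from NOAA's Climate at a Glance Global Time Series page.

```python
import csv

# Minimal sketch (assumed file layout): a CSV saved from NOAA's Climate at a
# Glance "Global Time Series" page, with rows like "202405,1.18"
# (YYYYMM, anomaly in degrees C).
def rank_of_latest(path="noaa_land_ocean_monthly.csv",
                   start=199501, end=202405):
    anomalies = {}
    with open(path, newline="") as f:
        for row in csv.reader(f):
            try:
                ym, value = int(row[0]), float(row[1])
            except (ValueError, IndexError):
                continue  # skip header/notes lines
            if start <= ym <= end:
                anomalies[ym] = value

    values = sorted(anomalies.values())   # coolest -> warmest
    latest = anomalies[end]
    rank = values.index(latest) + 1       # counted from the coolest month
    return latest, rank, len(values)

# Example of the expected shape of the output: (1.18, 338, 353)
```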
The updated NOAA Global Land average temperature anomaly data is shown below in both graph and table formats.
The Global Land average temperature anomaly through May 2024 has declined further from the February 2016 El Nino peak of 2.53 degrees C (the highest NOAA Global Land anomaly ever measured) to 1.63 degrees C (316th of 353 measured values). This result is also below the April 2024 value of 1.93 degrees C (336th of 353 measured values), again showing that this most recent El Nino event continues to weaken.
This latest NOAA Global Land average temperature anomaly data continues to confirm that Earth’s 8+ billion people (who reside on land) have experienced declining average temperature anomaly outcomes since the February 2016 El Nino peak more than eight years ago, indicating that humanity is not in a climate crisis.
This outcome establishes that climate alarmist hype claiming that the Earth is facing a “climate emergency” is unsupported by NOAA’s measured Global Land average temperature anomaly data, as presented above in NOAA’s graph and table values.
Many other NOAA regional average temperature anomaly records also show that peak measured regional anomaly values occurred years ago.
NOAA’s updated Northern Hemisphere Land average temperature anomaly measured data is shown below in both graph and table format.
NOAA’s data for the Northern Hemisphere Land region shows a peak average temperature anomaly of 3.17 degrees C in February 2016 (during the prior El Nino event). The latest May 2024 value of 1.79 degrees C (293rd highest of 353 measured values) is far below that peak and also below the April 2024 value of 2.49 degrees C (344th highest of 353 measured values).
NOAA’s updated region measurement data for the average temperature anomaly for Asia is shown below in both graph and table formats.
NOAA’s data shows that the May 2024 average temperature anomaly of 1.57 degrees C (242nd of 353 measured values) is far below the Asia region peak of 4.11 degrees C measured in February 2020, as well as below the April 2024 value of 2.65 degrees C (331st of 353 measured values).
NOAA’s updated average temperature anomaly data for the Oceania region is shown below in both graph and table formats.
NOAA’s data shows that the May 2024 average temperature anomaly value of 1.29 degrees C (303rd out of 353 measured values) is well below the Oceania peak average temperature anomaly result of 2.21 degrees C in December 2019.
NOAA’s updated average temperature anomaly data for the East N. Pacific region measured outcomes are shown below in both graph and table format.
NOAA’s data shows that the May 2024 average temperature anomaly value of 0.80 degrees C (271st out of 353 measured values) is well below the peak East N Pacific average temperature anomaly result of 1.79 degrees C in October 2015.
NOAA’s updated average temperature anomaly data for the Hawaiian region measured outcomes are shown below in both graph and table format.
NOAA’s data shows that the May 2024 average temperature anomaly value of 0.37 degrees C (172nd out of 353 measured values) is well below the Hawaiian region peak average temperature anomaly result of 1.76 degrees C in September 2015.
NOAA’s updated average temperature anomaly data for the Arctic region measured outcomes are shown below in both graph and table format.
NOAA’s data shows that the May 2024 average temperature anomaly value of 1.79 degrees C (224th out of 353 measured values) is well below the Arctic region peak average temperature anomaly result of 5.00 degrees C in January 2016 as well as below the April 2024 value of 2.57 degrees C (287th out of 353 measured values).
NOAA’s updated average temperature anomaly data for the Antarctic region measured values are shown below in both graph and table format.
NOAA’s data shows that the May 2024 average temperature anomaly of 0.55 degrees C (265th of 353 measured values) is well below the Antarctic peak of 2.25 degrees C set in August 1996, nearly three decades ago.
Additionally, NOAA’s updated USCRN Maximum Temperature Anomaly data for the Contiguous U.S. through May 2024 (shown below), from a state-of-the-art surface temperature network free of the localized heat biases addressed here, demonstrates no established upward temperature anomaly trend since at least 2005.
Furthermore, the peak May maximum temperature anomaly in the U.S. occurred in May 1934 at 5.66 degrees F versus 1.22 degrees F in May 2024 (highlighted in red) as shown above.
The latest NOAA Global Time Series average temperature anomaly data (updated through May 2024), as well as NOAA’s latest USCRN Contiguous U.S. anomaly data (also updated through May 2024), do not support, and in fact contradict, climate alarmists’ flawed claims that the Earth is experiencing a climate emergency.
All “How many angels can dance on the head of a pin”. Sparse data from stations not fit for service, run through algorithms not fit for service, that average an intensive property.
That cloud looks like Snoopy.
How many angels can dance on the head of Al Gore’s ….?
” the year 2023/2024 El Nino event continues to weaken as demonstrated by the data presented and discussed below”
Yes, it does. But globally May was still by far the hottest May in their record, and caps a full year of record-breaking months. You can see this history in detail in this stacked graph of global anomalies, where each month is shown with the color for its year (2024 is dark blue). May in fact exceeded its previous record by the largest margin of any month in 2024. It’s getting hotter.
“It’s getting hotter” 🙀 it depends what you mean by “hotter”.
I like summer. Days are longer and I don’t have to wear a lot of clothes outside. Plus, in mid-summer, my tomato plants bear fruit.
Nighttime temperature should stay above 50°F (10C) for tomatoes to set fruit.
I no longer try to grow them. Sunday morning dipped to 39°F (4C). Location is east of the Cascade Crest in WA State at 2,240 feet.
Plant them under the dryer vent.
I’m looking forward to making green tomato chutney for the winter come September
I adore tomato plants too- what being Italian American- and also my 7 fig trees originating from one brought to America by a grandfather from Abruzzo, Italy. This summer they’re growing nicer than ever- thanks to lots of warm weather, lots of rain, and more CO2 to feed them. Yet, every day, I read/see in the Wokeachusetts media, that we’re having an EMERGENCY. I keep looking around- and can’t see it. It must be out there somewhere since almost everyone in the state believes it. I dunno why I can’t see it!
We should appreciate the thawing from the Little Ice Age that nature has bestowed upon us.
OOOO…. pretty colours. !!
Showing the very solid and extended El Nino that we are all very well aware of.
Now.. where’s that evidence of human causation ??
Not getting “hotter” in most parts of the world, Nick.
The Arctic regions are showing areas of slight warming as expected as the planet slowly, inexorably moves out of the current ice age.
For example, professor Cliff Mass (resident meteorologist at the Washington State University in Seattle) calculates that the Pacific North West regional climate has changed by just 1 – 2 degrees F since 1900.
He ascribes some element of this to human activities such as land clearing, urban development, etc.
Minor nit but Cliff is professor at University of Washington, in Seattle.
WSU is on the East side of the state in Pullman.
Both are fine public state universities.
He specializes in weather of the North American Pacific NW, has a blog, and has written books on that topic. He’s also posted here at WUWT.
Man it was warm today. Almost like spring is over and it’s summer!
Man it was cold here yesterday, almost like autumn has passed, and we are now into winter 🙂
Well, it’s 9.00am here on the coast in Queensland and it’s 9C. I sincerely hope it gets hotter today!
Coldest morning in 3 years in Melbourne. (4C) Tomorrow morning will be 2 degrees colder…..
(are you in charge of the recorded data? mebbe a few years from now it will be 1 degree warmer tomorrow)
Or 1 degree colder.
Nick,
Are you ready yet to describe the mechanisms by which the “well-mixed gas” CO2 exerts such patchy heating (if indeed it does in some physics models)?
As time passes, there is less and less observation and measurement support for the Establishment global warming model, it seems to me.
Geoff S
Nick only cares about what supports the narrative, that’s his job. BTW, almost snowed here in Calgary last night, did snow to the west and when it clears we will see the Rockies with a new covering of white. I certainly hope it warms before we all die.
Frost warnings for outside the city too. Calgary Herald tells us about “the drought” even though rainfall has been normal, and the snowmelt from the mountains is a couple of weeks away….
Some kookie billionaire sends them an electronic funds transfer for that bullshit.
It doesn’t. The patchiness is caused by the other agents that perturb the ingress and egress of energy into and out of the atmosphere. Increases/decreases of CO2 do not turn off those other agents. The concept is the same as with solar output. Nobody seriously rejects the idea that solar output can affect the temperature of the atmosphere, even though there is no expectation that the change in atmospheric temperature will be spatially homogeneous when solar output changes.
Have you considered that multiple factors influence the temperature of the atmosphere and that factors can constructively or destructively interfere?
#3…
That’s the wackiest climate chart I’ve seen in years.
True, but the delta for May was less than those from July to December of 2023 …
I have a different approach to this (very) interesting phenomenon, as shown in the attached graph.
Notes
1) For historical reasons the “NOAA” dataset in the ATL article is labelled “NCEI” by me.
2) The (lower troposphere) satellite monthly records started being set later than the surface ones, from July 2023 rather than May (HadCRUT5, “Non-infilled” version) or June (all other surface datasets).
3a) Whatever is happening, it is still ongoing. It is much too early to be drawing “definitive conclusions” about anything here.
3b) Looking at the datasets it is very likely that new records will be set for June 2024 across the board. After that things get “fuzzy”, to put it mildly …
I’ll keep trumpeting the correct interpretation of these ΔT’s. They do not indicate what baseline temperatures were when they were calculated. Look at an Antarctic ΔT of 1°C. Does anyone really think 1 degree at some -30 degrees is a problem? How about the Arctic at a 5.0°C ΔT? Does anyone think that this resulted in a balmy Mediterranean sunbathing month?
Averaging ΔT’s that are not based on a common baseline doesn’t really tell anything about the globe as a whole. The assumption that averaging all the local ΔT’s gives an accurate depiction of temperature growth on the globe is misplaced. What happens is that the tails of the distribution become the determining factor that drives the mean.
As I have been looking at animations of the ΔT’s on the globe, I see warm spots and cool spots moving around. That is weather, not necessarily a warming globe. That tells me that the current implementation of calculating a global ΔT has a problem.
And what are the ranges of the real air temperatures in all those plots?
“It’s getting hotter.”
No it isn’t. If the temperature goes up a fraction of a degree it isn’t getting hotter. It’s getting warmer.
We’re talking about global warming. The clue is in the name.
The modern global warming, according to the probably unreliable official estimates, is a bit over one degree C. If you turned up the thermostat by one degree you probably wouldn’t notice the difference. So how can something most people wouldn’t even notice be called “heating”?
Sorry, Nick, but I can’t take you seriously as long as you keep up this alarmist nonsense.
Chris
If you turn the global thermostat down by about 6 degrees the planet would be in a glacial period. These changes in temperature that might seem small to you personally are quite significant at a planetary scale.
You first must prove that everywhere on the planet is warming at a fractional rate. You then must prove it is Tmax or Tmin that is warming. A warming Tmin has a much different implication than a rising Tmax.
How are those things germane to the point I made above? Be specific and thorough in your explanation, and demonstrating that you actually understand the comment you responded to would be helpful.
They are important because the starting point of calculating a ΔT in climate science is a daily average. A daily average merges the change in Tmax versus Tmin such that you can’t tell what is changing.
It is important because when you look at animations of global ΔT’s, you see hot spots moving around. That is weather, not climate. It is why there has been almost no change in climate designations on the planet due to temperature. The Sahara Desert is still desert. The Arctic is still the Arctic and hasn’t changed to temperate. Show us how far in latitude the coarse climate zones like tropics, temperate, polar, etc. have moved.
cwright above stated that a human could not feel a 1 degree C change in temperature, and therefore it must not be climatologically important. I replied by saying that while the difference might be small to a human, climatologically it is quite significant, as the difference between glacial/interglacial states is only 6-7 degrees C.
How does your point have anything to do with this observation? Can you explain?
It is important to understand that each of those ΔT values represent a random variable that has a variance. A ΔT from a colder temp area typically has a larger variance than one from a warmer area. You can’t simply assume that a ΔT value is a single, accurate, true value that stands on its own. When comparing ΔT values you must know the variance that goes with each random variable in order to adequately judge its uncertainty or, in other words, the possible values it could reasonably be assigned.
When comparing ΔT values you must also do so using relative values, meaning you must know the baseline from which the ΔT value is being calculated.
A 1C change at 20C *is* different than a 1C change at 0C, both subjectively (how does it “feel” to a human being) as well as climatologically. Grains, wheat or corn for instance, see a change from -1C to +1C very differently than they do a change from 19C to 21C as far as growth and survivability is concerned. Considering only the absolute value of the ΔT makes no sense in the real world unless you can judge its impact, it is its impact that is important, not its absolute value.
We are talking about the global anomaly, with reference to the preindustrial baseline. At 6-7 degrees cooler than the preindustrial baseline the earth was experiencing a glacial period. 1 degree from the preindustrial baseline is climatologically significant.
I appreciate your attempt at clearing up your twin’s confusing comment, at least.
Climatologically significant WHERE? You don’t seem to understand that an average tells you nothing about minimums or maximums. All of that data is lost when you take an average!
We are *NOT* in a glacial period today. A 1C change between today and a glacial period is meaningless.
For the entire globe. When the global mean anomaly is 6-7 degrees lower than the baseline, the planet will be experiencing glacial conditions. The global mean anomaly tracks this perfectly.
Not even close, the planet is much too warm, and getting warmer. Those single-digit temperature changes are quite climatologically significant.
Salute!
Nice stats, but PLZ try getting away from the anomalies!
If we want to see a comparison with past values, maybe draw a straight line of an average from some period and use actual recorded values on top of the plot. From analyzing hundreds of test flights’ data, finding a trend from raw data was quite simple in our office, even for the clerk/typists who produced the viz-aid slides to brief the high rollers, and the same for the outlier data points that did not match the time-referenced data points we saw from other sensors.
Gums sends…
Just let me check that….
I make the rate of warming since 2005, + 0.40 ± 0.28°C / decade. Statistically significant, though that’s not correcting for auto regression.
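For context, a minimal sketch of how a trend figure like that is typically produced: an ordinary least squares fit of monthly anomalies against time, with the slope and its standard error scaled to degrees per decade. The series below is synthetic, since the commenter’s data is not attached, and no autocorrelation correction is applied, which is exactly the caveat noted above.

```python
import numpy as np
from scipy import stats

# Synthetic monthly anomalies from 2005 to mid-2024 (illustration only).
months = np.arange(2005 + 1 / 24, 2024.5, 1 / 12)          # decimal years
anoms = 0.02 * (months - 2005) + np.random.default_rng(0).normal(0, 0.15, months.size)

res = stats.linregress(months, anoms)
trend_per_decade = res.slope * 10
# ~95% interval from the OLS standard error, with no correction for
# autocorrelation (so it will generally be too narrow).
ci_per_decade = 1.96 * res.stderr * 10
print(f"{trend_per_decade:+.2f} ± {ci_per_decade:.2f} °C / decade")
```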
You must be a Zero Sigma green belt.
Let’s check what bellboy’s graph actually hides.
He loves to use El Nino warming events, because it is all he has.
So which is it? Is there no upward temperature trend since at least 2005, or is there an established trend caused entirely by these ever increasing El Niños? Do you want to use the “state-of-the-art accurate surface temperature network free of localized heat biases” and only look at maximum temperatures, or do you want to use the satellite data that everyone keeps telling me is highly inaccurate, and only use mean temperatures?
One day I’ll actually be able to point out a false claim using the very data that’s been used to make that claim, without being accused of trying to hide something by not using different data.
Using your cherry-picked time frame of January 2017 – April 2023, the trend is actually 0.17°C / decade for maximum temperature, which is of course entirely meaningless given such a short time scale and high amount of variability. The uncertainty is ±1.53°C / decade.
You obviously can’t read the graphs I posted.
No trend to 2016, then a bulge, then cooling from 2017.
Notice yet again that you are totally devoid of any evidence of human causation.
Meaning that all your child-like yapping is about totally natural variability.
—-
“which is of course entirely meaningless”
Just like everything else you post.
Except there was warming in maximum temperatures according to CRN.
I see you ignored the evidence I gave the last time you were whining about no evidence. It was a day ago, so you probably forgot.
And then producing a massive tampered fake URBAN fabrication
You are a JOKE, bellboy !!
And your previous effort was a load of cobbled-together nonsense, based on assumptions and models.
Then a load of sites with massive URBAN warming, that we all know about.
Urban warming is NOT global warming.
Models are NOT scientific evidence, especially not one with ASSumptions built in.
ASSumptions are NOT scientific evidence.
Apart from URBAN warming, you have absolutely no real scientific evidence… PERIOD. !!
Certainly none of warming by atmospheric CO2…
There is no evidence of human warming in the UAH satellite data.
You remain, as always… Empty, NIL.. VOID !!!
And the rants continue. Demands evidence. Keeps claiming I give him no evidence, then rejects the evidence when presented.
As far as I can see he demands evidence but not if that involves any actual data. Any century long data will be rejected because he doesn’t like the data.
I can show similar work using just satellite data, but then you have the problem that both temperatures and CO2 have been increasing in a linear fashion over the last 40 years.
“ASSumptions are NOT scientific evidence.”
Says the person who just assumes all warming can be explained by El Niños magically causing a permanent step change in global temperatures. Who assumes that all the warming in every surface data set can be explained by urban warming which just accidentally looks the same as the rise in CO2.
All evidence will have assumptions. Making juvenile jokes about the spelling doesn’t mean assumptions are wrong.
No physical uncertainty bounds. Meaningless.
There never are in Climate “Science”.
Not sure what you mean by that. I’m not claiming any sort of proof from that simple model. Just responding to the claim that there is zero evidence of CO2 causing warming.
“I’m not claiming any sort of proof from that simple model.”
So you produced ZERO evidence.. make no claim of any sort of proof..
… especially not from a simplistic suppository driven model….
And expect people to accept that as evidence ?????
You really should read what you are typing.
You are making a total fool of yourself.
Try using a dictionary. Proof and evidence are different things.
I’ve asked you before what you would consider evidence that increasing CO2 causes warming. So far, crickets. It’s obvious that no evidence will ever be acceptable to you. You will just deny any link as an article of faith.
You have produced no evidence..
Just mindless bluster.
You have made that VERY clear to everybody
If you don’t know what scientific evidence is, that is not my fault.
You still have to provide the evidence, you numpty
I mean your chart is physically meaningless.
Why? I’m quite prepared to accept I’ve made a mistake, but would prefer it if you explained what in your opinion makes it “physically meaningless” rather than just accept it on your say so.
You know why.
I take it you can’t or won’t say why you think it’s physically meaningless.
It’s because the delta in values on your chart is about 0.4C. While the uncertainty in the anomalies is at least 1.5C and probably more like 15C. In other words you can’t physically discern differences in the temperatures at the level of the tenths digit. The actual value of the anomalies are part of the GREAT UNKNOWN.
As we keep pointing out to you a trend line that goes from +0.8C down to -0.4C is physically just as possible as the one from about -0.4C to +0.8C as shown on your chart. YOU SIMPLY DON’T KNOW! It’s all part of the GREAT UNKNOWN.
“uncertainty in the anomalies is at least 1.5C and probably more like 15C”
Talk about physically impossible models.
And you know this, how, Herr Doktor Chief Trendologist?
Amazing. Karli will say I’m arrogant for expecting him to justify his vague assertions. Yet he demands I answer him whilst he uses these childish insults.
The question is why do I think it’s impossible that the world could be 15°C warmer last year than it was in the 1950s. This is based on the assumption we are using the GUM definition of uncertainty, a value that describes a range of values in which it is reasonable to attribute a value to the measurement.
Fair enough. I can’t prove that’s physically impossible, just as no one can explain why a sensitivity of 2.4°C is physically impossible, but it seems very unlikely. For a start, a change of 15 over a year or a century would require some very strong forcing. I suspect it would be an extinction level event. If the data is to be believed, annual averages have not changed by more than a degree or so over the last century. The ice ages were not 15°C colder than today.
If there had been a year in recent times that was that warm or cold I would have expected it to not pass without comment. Given that, I don’t think it’s reasonable to believe this year was 15°C warmer or colder than any other year in the last few hundred. And hence my point about the GUM definition, and the use of the word “reasonable”. An interval of ±15°C is just not reasonable even if you had no confidence in the thermometer readings.
Secondly, it’s an absurdity to think that 1000s of instrument readings could contrive to give a global average that was 15°C out. It would be pretty incredible for there to be many daily readings that had that big an error. For the annual global temperature to be out by that much would require every single thermometer in the world to have that size of error every day for a year, and for all the errors to be in the same direction. Nothing’s impossible, but this is so improbable as to be virtually impossible. And even if it did happen you would have to assume that somebody would notice.
Of course, what karlo et al want to do is ignore the GUM definition of uncertainty, with all its reliance on statistics and probability, and instead invent the concept of uncertainty as a “zone of ignorance”, or The Great Unknown. This is a magical place where the usual rules of probability no longer apply and anything that can possibly happen will happen, or at least is as likely to happen as anything else.
Even then I’m not sure how you get to 15°C of uncertainty. Pat uses ignorance to claim that the correct uncertainty is the worst case. The average of a million readings each with an ignorance zone of ±2°C will have an uncertainty of ±2°C. Because it’s always possible that every reading will be at the same edge of the interval, and as long as we worship ignorance any attempt to refine that interval is heretical. But even Pat only keeps his ignorance to around ±2°C. It will take an overabundance of ignorance to get that to ±15°C.
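For what it’s worth, the arithmetic behind the two positions can be written out. A minimal sketch, assuming N station readings each carrying an error no larger than about ±2°C: the worst case for a simple mean is ±2°C (every reading off by the full amount in the same direction), and if the errors are independent the uncertainty of the mean shrinks roughly as 2/√N. Neither route gets anywhere near ±15°C.

```python
import math

# Minimal sketch of the bounds discussed above, assuming N station readings
# each with an error no larger than +/- 2 degrees C.
def mean_error_bounds(n, per_reading=2.0):
    worst_case = per_reading                    # all readings off by the full
                                                # amount in the same direction
    independent = per_reading / math.sqrt(n)    # independent, partially cancelling
    return worst_case, independent

for n in (100, 10_000, 1_000_000):
    wc, ind = mean_error_bounds(n)
    print(f"N={n:>9}: worst case ±{wc:.2f} °C, independent ±{ind:.3f} °C")
```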
Nice rant.
Error is not uncertainty.
Why do you quote the GUM, you don’t believe what it says.
If you didn’t want an answer you shouldn’t have demanded an answer. As expected you ignored it, and obviously avoided the part where I addressed your non-probability-based fantasy.
/plonk/
It isn’t. Just look at the range of temperatures between the poles and the tropics.
You want to discuss statistics: what is the possible standard deviation of a distribution with a range from +90 to -50? How uncertain is an anomaly of 0.01 per year when the standard deviation is so large? Where does the precision arise to allow those values from a distribution with such a range?
Don’t tell me that anomalies have their own uncertainty based on their small values. They should inherit the uncertainties of the random variables used to calculate them.
He won’t listen.
bellman and climate science don’t believe variance is a metric for uncertainty of the average. For them standard deviation of the data is meaningless. It’s the sampling error that is of primary importance.
It’s a metric for uncertainty, but not for the uncertainty of the average. I’m really not sure why you think it would be.
You really never seem to engage with what an average is. You never understand that an average is more stable than the sum of its parts. That there will be less variation in the mean of a month or a year than there will be in the individual days.
The standard deviation of global temperatures tells you how much variation you will expect to see if you are beamed down to a random point on the surface.
Because it is an interval that provides the dispersion of measurement observations that can be attributed to the measurand.
Nobody cares about how accurately a proper sampling might estimate the μ of the population because you are DEALING WITH A POPULATION. The standard deviation tells you the dispersion of measurement observations that can be attributed to the measurand.
Read GUM C.3.3.
“Because it is an interval that provides the dispersion of measurement observations that can be attributed to the measurand.”
But not an interval that provides a range of values that can reasonably be attributed to the measurand. The fact you constantly have to rewrite the GUM definition just to prove your point is telling.
“Nobody cares about how accurately a proper sampling might estimate the μ of the population…”
You might not care. But if you want to talk about the uncertainty of the global average, that’s exactly what you need. The μ of the population is what I’m interested in; how accurately it is measured is what I would mean by the uncertainty of the measurement. If you tell me this year the global average is cooler than last year, I want to know how certain you can be that it is actually cooler, and that depends on how certain the estimate of the mean is.
Using your logic, the global average next year could be two degrees colder than this year, but you would insist that we can’t actually know it’s colder until it’s dropped 30 degrees. And the only reason you are claiming that uncertainty is because some parts of the earth are much colder than other part.
“The standard deviation tells you the dispersion of measurement observations that can be attributed to the measurand.”
Again, stop making up your own definitions. You are confusing the observations made to construct the measurand with the observation of the measurand. You are effectively saying that because it’s reasonable that some parts of the earth are below zero, then it’s also reasonable to attribute a value of zero to the global mean.
Strawman extraordinaire!
You are not an expert in metrology, yet you never once cite a resource to support any of your assertions.
Really?
Me:
The GUM
Now explain how these differ from what I wrote!
Your only problem is that you can’t find resources to prove your assertions and are frustrated.
If you would like some references, maybe you should read the NIST Engineering Statistical Handbook or go to isobudgets.com.
Here is what Wikipedia says, and no, I didn’t write it.
It is pretty simple really, forget all you know about populations, samples, standard error, sample size. Think about how you tell someone what a measured value is and how dispersed your observations were when measuring some physical quantity.
“Your only problem is that you can’t find resources to prove your assertions and are frustrated.”
Try the paragraph immediately before c.3.3.
The series of ΔT’s ARE the ONE and ONLY population there is.
You insist on not defining the measurand and what measurements are used to define the measurand. Instead you are only interested in the statistical calculations you can do.
If the Global Average ΔT is the measurand, then the population of ΔT’s contained in the random variable below,
ΔT_global_average(ΔT_1, …, ΔT_n),
is all you have.
You do NOT HAVE
ΔT_global_average₁[ΔT_1₁, …, ΔT_n₁],
…,
ΔT_global_averageₙ[ΔT_1ₙ, …, ΔT_nₙ]
as a series of samples of measurements of the SAME measurand where the mean of the sample means estimate μ and standard error is used to estimate σ.
Here is an online course analytic chemists take concerning uncertainty. Pay attention to non-repeatable measurements of pipets where the SD is the appropriate uncertainty versus experimental standard deviation of the mean when repeatable measurements can be made.
https://sisu.ut.ee/measurement/uncertainty
“You insist on not defining the measurand and what measurements are used to define the measurand.”
We keep going round in circles because you ignore everything I say. The measurand is defined as a global average temperature (or more usefully anomaly) over a specific time frame, e.g. a month or a year. The population is the entire surface temperature over that period. The measurements are the individual station data, but they do not define the measurand, they are just measurements of it.
“Instead you are only interested in the statistical calculations you can do.”
The statistical calculations are to get as best estimate as you can of the measurand.
“If the Global Average ΔT is the measurand”
You need to define what you think ΔT is first. If you mean anomaly, call it anomaly. You keep confusing yourself if you keep thinking of the anomaly as a change, let alone a rate of change.
I’m not sure what you think you are trying to say after that. You seem to be worried about subdividing values, but it isn’t really a problem how you group data. Think of the monthly global average as made up of 30 daily global averages, think of the monthly global average as made up of 1000s of monthly averages from each station, or put all the readings together and just take the average – the result should be the same.
The actual method of calculation is going to require a specific order, you don’t just throw all the data into one pot. And as I keep trying to explain the uncertainty is not simply the SD / √N.
And please stop posting random links to courses I might want to study. They never say what you think they say. If you think it says something important, quote that part and explain what you think it means.
Maybe you have in mind this part
But that seems unlikely, as it’s just saying what I’m saying. The uncertainty of the global mean is what you need when talking about the global mean. The uncertainty of the single measurement is what you need if in the unlikely event you want one random temperature measurement taken anywhere on the globe.
Or maybe you meant this
Something we’ve been trying to tell you for some time.
“It’s a metric for uncertainty, but not for the uncertainty of the average. I’m really not sure why you think it would be.”
You are Equivocating again!
“You really never seem to engage with what an average is. You never understand that an average is more stable than the sum of its parts.”
Being more stable is *NOT* a measure of the measurement uncertainty of the average. You continue to fail to understand that as the hump around the average gets smaller and broader that the uncertainty of the average grows!
“That there will be less variation in the mean of a month or a year than there will be in the individual days.”
So what? That has nothing to do with the measurement uncertainty of the average over a month, a year, or a day!
“The standard deviation of global temperatures tells you how much variation you will expect to see if you are beamed down to a random point on the surface.”
As the variance grows so does the standard deviation! Meaning the *average* of what you will see in a random spot on the surface will be less and less certain!
You STILL don’t understand measurement uncertainty. It’s not obvious that you have any kind of understanding of what the variance of a data set is actually telling you!
You are *still* stuck in the meme that the stated values are always 100% accurate because all measurement uncertainty is random, Gaussian (symmetrical), and cancels. So the average is always 100% accurate and the variance doesn’t matter!
There *IS* a reason why you and climate science refuse to calculate the variance of the temperature data. It would highlight how uncertain the “global average temperature” really is!
Bellman never will understand, it is too much of a threat to his GAT crisis worldview.
“You want to discuss statistics, what is the possible standard deviation of a distribution with a range from +90 to -50?”
Why do you keep asking me to do this work for you, whilst constantly claiming I’m mathematically illiterate?
OK – I’ll use some old BEST gridded data, as that gives an estimate of the absolute temperature for each grid point. I’ll use 2022 as that’s the last complete year I have, and I’ll get the average annual temperature for each grid point. The standard deviation is 14.3°C.
Now I’m going to assume that you are still under the delusion that the SD of temperatures is the uncertainty of the mean. That is clearly nonsense. If it were you would expect to see fluctuations in the annual averages of at least ±28°C. What you have here is the uncertainty, if you based your annual average on one randomly chosen location each year.
“Don’t tell me that anomalies have their own uncertainty based on their small values.”
You really don’t like being told the truth, do you? Of course anomalies have a smaller distribution across the world. The standard deviation of annual anomalies in 2022 was 0.74°C. That, by most people’s standards, is a smaller number than 14.3.
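A minimal sketch of that kind of calculation is shown below. It uses a synthetic latitude-longitude field rather than the actual BEST file (which is not attached here), so the numbers are illustrative only; the point is simply that the area-weighted spread of absolute temperatures is far larger than the spread of anomalies.

```python
import numpy as np

# Synthetic 1-degree grid standing in for a gridded annual-mean field.
rng = np.random.default_rng(1)
lats = np.arange(-89.5, 90, 1.0)
lons = np.arange(-179.5, 180, 1.0)
lat2d = np.repeat(lats[:, None], lons.size, axis=1)

# Crude absolute-temperature field: warm tropics, cold poles, plus noise.
absolute = 45 * np.cos(np.radians(lat2d)) ** 2 - 20 + rng.normal(0, 3, lat2d.shape)
# Crude anomaly field: small departures from each cell's own baseline.
anomaly = rng.normal(0.9, 0.7, lat2d.shape)

# Area (cos-latitude) weighted standard deviation.
w = np.cos(np.radians(lat2d))

def weighted_sd(x, weights):
    mean = np.average(x, weights=weights)
    return np.sqrt(np.average((x - mean) ** 2, weights=weights))

print("SD of absolute temps:", round(weighted_sd(absolute, w), 1), "°C")
print("SD of anomalies:     ", round(weighted_sd(anomaly, w), 2), "°C")
```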
“Now I’m going to assume that you are still under the delusion that the SD of temperatures is the uncertainty of the mean. “
You are Equivocating again. By “uncertainty of the mean” are you speaking of the standard deviation of the sample means or the measurement uncertainty of the mean? BE SPECIFIC.
“That is clearly nonsense. If it were you would expect to see fluctuations in the annual averages of at least ±28°C.”
Why would that be nonsense? This would only apply if the annual average over time was Gaussian. If the distribution is skewed, such a range is not impossible; it would just be unlikely because it sits on the long tail.
You can’t tell what the distribution is from just the average. That is the same idiotic assumption that climate science always makes – everything is Gaussian. What we *really* need to know in order to judge what is “nonsense” is all of the 5-number statistical descriptors: min, max, median, 1st quartile, 3rd quartile.
Why do you always assume Gaussian distributions?
And who knows what the fluctuations in annual averages are? This information is thrown away and never reported.
Good point! It also means you don’t know the actual fluctuations in daily, weekly, monthly, decadal, or any other so-called “averages” (which are really mid-point temperatures).
I think you have your signs inverted.
“Talk about physically impossible models.”
It’s not a matter of physically possible. It’s a matter that the uncertainty grows. The uncertainty simply can’t be less than the uncertainty of the measurements themselves. If the measurement stations have a +/- 1.5C uncertainty then you simply can’t discern changes in the tenths digit, the hundredths digit, or the thousandths digit!
Even a measurement uncertainty of +/- 0.3C would legislate against being able to discern differences in the hundredths digit!
Averaging can’t change the measurement uncertainty of the data no matter how much climate science wishes it were so. And, like resolution, you can’t just pull lower uncertainty out of your butt by averaging. A resolution in the units digit means that you simply can’t tell what changes in the tenths digit are. Those differences will remain forever cloudy in the crystal ball. Uncertainty is the same way. If you are unsure if a temperature is 1C or 2C you’ll never be able to tell if a change of 0.1C happened! And it doesn’t matter how many elements you average.
“It’s not a matter of physically possible.”
So why even mention it? What use is an uncertainty if it doesn’t describe things that are physically possible?
“It’s a matter that the uncertainty grows.”
That’s your problem. You know I don’t accept it. You know you have not been able to offer any evidence, either mathematically literate or experimental. Why do you think endlessly claiming it as if it were true will be in any way convincing?
And repeating the same nonsense a dozen times in the same comment is even less convincing. If you want people to believe that the uncertainty of the mean is the same as the uncertainty of the sum – you need to actually offer some evidence that doesn’t depend on you being unable to follow a simple equation.
“So why even mention it? “
Because it defines the GREAT UNKNOWN. You can’t know what is happening inside the GREAT UNKNOWN. At least most of us can’t. Climate scientists can do so apparently. Their crystal balls are exceptionally clear. It seems that applies to mathematicians and statisticians also.
“What use is an uncertainty if it doesn’t describe thing thing that are physically possible?”
You are the one saying they are physically impossible, not me.
“You know you have not been able to offer any evidence either mathematically literate or experimental.”
We all know you believe experts like Taylor, Bevington, Possolo, and the ISO are all wrong in how measurement uncertainty works. Only you and your compatriots in CAGW support are correct that all measurement uncertainty is random, Gaussian, and cancels.
” If you want people to believe that the uncertainty of the mean is the same as the uncertain to of the sum – you need to actually offer some evidence that doesn’t depend on you being unable to follow a simple equation.”
I’ve given you all kinds of examples from the real world that show that uncertainty grows. The latest is trying to follow a compass from one point to another over a long distance. In your world all measurement uncertainty from taking compass readings at interim navigation points would cancel and you would wind up at your desired location EXACTLY. The measurement uncertainties would *not* add and result in missing the desired point by more and more the longer the distance becomes. You believe you could just average the measurement uncertainties involved at each navigation point and that would define how far you would miss the desired location. In other words the “average measurement uncertainty” is the “measurement uncertainty of the average”.
Travel that same distance with the same interim navigation points at different points on the globe and you will get different results for how far you miss your desired target. I’m not even going to explain why because you won’t believe it anyway. The same thing applies to different temperature measurements taken at different locations in different environments. You simply can’t “average” all of the measurement uncertainties and say that is the measurement uncertainty of the average of all the measurements.
What you are saying shouldn’t even make sense to a statistician. As you add more and more random variables into your dataset, the variance of the resulting data set will grow. Meaning the hump surrounding the average value gets lower and lower and broader and broader. In other words, variance is a direct metric for the measurement uncertainty of the average. As the hump gets lower and lower and broader and broader it becomes less certain what the “average” value actually is. The close in values look more and more like the “average”.
Bottom line? The measurement uncertainty GROWS. And it grows in the same manner as the variance does. The average variance (measurement uncertainty) of the data points doesn’t define the uncertainty of the average, the variance (measurement uncertainty) of the total data set defines the uncertainty of the average. It’s the total variance that defines how spiked and broad the distribution becomes, not the average variance of the data points.
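The compass example can be checked numerically. A minimal Monte Carlo sketch with made-up numbers (100 one-kilometre legs, a 2-degree heading error per leg): independent per-leg errors partially cancel, so the typical miss grows roughly with the square root of the number of legs, while a constant compass bias pushes every leg the same way and the miss grows in proportion to the number of legs. Which of those two regimes applies to thermometer errors is exactly what is in dispute here.

```python
import numpy as np

rng = np.random.default_rng(42)
n_legs, leg_km, sigma_deg, trials = 100, 1.0, 2.0, 5000

def final_miss(heading_errors_deg):
    """Distance between the intended endpoint (due north) and the actual endpoint."""
    err = np.radians(heading_errors_deg)
    x = np.sum(leg_km * np.sin(err), axis=-1)    # cross-track displacement
    y = np.sum(leg_km * np.cos(err), axis=-1)    # along-track displacement
    return np.hypot(x, y - n_legs * leg_km)

random_errs = rng.normal(0, sigma_deg, (trials, n_legs))   # independent per leg
biased_errs = np.full((trials, n_legs), sigma_deg)         # same bias every leg

print("independent errors, mean miss (km):", final_miss(random_errs).mean().round(2))
print("constant bias,      miss (km):     ", final_miss(biased_errs)[0].round(2))
```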
“You know why.”
This channels our visit with my 11-year-old granddaughter.
Me: You need to talk to Peggy about why you can’t get along with her
Her: She knows what she did.
5…
He thinks the SEM and confidence intervals are “uncertainty”.
As I said, it’s based on Bayesian statistics. There’s no SEM or confidence interval.
Fallacy alert: I never indicated there were.
“No physical uncertainty bounds. Meaningless.” — Pat Frank
While we wait for Pat to explain what he means by that, maybe you could try to explain it?
I just posted a response to explain it. There is no physical functional relationship between time and ΔT. You can not accurately predict ΔT based on a linear regression against time.
Do this. Take the trend as of 12/31/2023 and carry it out through May 2024. For January 2024 show what the trend showed and what actually occurred. Calculate the percent error between the two. Do that for each month through May.
Example:
January trend value = 1.0
January actual value = 1.5
January % error = 0.5 / 1.0 = 50%
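A minimal sketch of that check, with placeholder numbers only (the real trend and observed values would come from whichever data set is being tested):

```python
# Percent error between a trend carried forward from 12/31/2023 and what was
# actually observed (all values here are placeholders, not real data).
trend_forecast = {"Jan": 1.00, "Feb": 1.02, "Mar": 1.04, "Apr": 1.06, "May": 1.08}
observed       = {"Jan": 1.50, "Feb": 1.40, "Mar": 1.35, "Apr": 1.30, "May": 1.18}

for month, predicted in trend_forecast.items():
    actual = observed[month]
    pct_error = (actual - predicted) / predicted * 100
    print(f"{month}: trend {predicted:.2f}, actual {actual:.2f}, error {pct_error:+.0f}%")
```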
My model does not include time as an independent variable. I’ve told you what factors are included. Nor is it based on monthly data.
You can see from the graph that it is quite bad for 2023. That’s why this current hot spike is still a mystery. If it is caused just by the El Niño then it is behaving in a way that hasn’t been observed during previous El Niños.
That does not invalidate the model. All models are simplifications. Nor is the point to predict the future. The point is to demonstrate that you can explain most of the patterns using nothing but CO2 modified by a few known factors such as ENSO. Nor is the point to prove that CO2 is the cause of warming – correlation does not imply causation and all that. What it does do is give some evidence to support the widely known hypothesis that increasing CO2 will increase temperatures.
There’s lots of extra work that could be done to test the strength of the model, and check for overfitting. If you remember, I did that for you before using UAH monthly data, fitted to data up to the start of the pause, and tested against how well it fitted the later data. But as usual you pretended not to understand the implication: that you could explain the pause in terms of warming caused by CO2 modified by ENSO.
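For concreteness, here is a minimal sketch of the kind of multi-factor fit being described. The commenter’s actual model, data and factor choices are not shown in the thread, so everything below, including the synthetic CO2 and ENSO series and the four-month lag, is an assumption for illustration: an ordinary least squares regression of a monthly anomaly on log CO2 plus a lagged ENSO index.

```python
import numpy as np

# Synthetic stand-ins for the real series (illustration only).
rng = np.random.default_rng(7)
n = 480                                               # 40 years of months
co2 = 340 * np.exp(0.005 * np.arange(n) / 12)         # smoothly rising CO2, ppm
enso = rng.normal(0, 1, n)                            # ENSO-like index
anom = 3.0 * np.log(co2 / co2[0]) + 0.1 * np.roll(enso, 4) + rng.normal(0, 0.1, n)

# Design matrix: intercept, log(CO2), ENSO lagged ~4 months.
X = np.column_stack([np.ones(n), np.log(co2), np.roll(enso, 4)])
coef, *_ = np.linalg.lstsq(X, anom, rcond=None)
fitted = X @ coef

print("coefficients (intercept, log CO2, lagged ENSO):", coef.round(3))
print("R^2:", round(1 - np.var(anom - fitted) / np.var(anom), 3))
```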
I don’t think it is bad or a mystery though. Assuming the grey area is the 95% CI, we expect about 7 points to fall outside it. I only counted 4. So broadly speaking the model is not inconsistent with the observations even with 2023 included. In other words, we expect excursions like 2023.
You kinda answered the question didn’t you?
You forget, I’ve been here before trying to forecast what call volumes and call lengths were going to do for the next two years. Lots of money tied up in that. Construction, maintenance, investment, people, etc.
Simple trends worked for places with little growth. For places with declining or expanding populations and business, simple trends failed every time. One had to dig into the details, employment forecasts, government approval processes, plats, business plans, real estate agents, on and on. Climate is no different, just ask the modelers.
You aren’t even modeling the correct thing. On a water world, temperature is only part of the process, total enthalpy is determined by latent heat. Internal heat transport becomes especially important. Heat transport will be a hard nut to crack. Fluid flow isn’t just in one direction it is in all directions, up, down, sideways, laminar, turbulent, and changing constantly. Simple statistics using CO2 driven processes is a joke.
How many more times does it need to be explained, I am not trying to forecast the future?
Nor am I trying to create a circulation model, or anything like that. It’s simply looking at what the correlation is between multiple variables. The point which you keep dancing round is that it does demonstrate that you can explain most of the warming using CO2. That doesn’t prove it’s the cause, but it does give the lie to claims that there is no correlation.
Correlation without some hypothesis for causation can’t be shown to be anything other than spurious. We’ve been down this road before. Lots of things show correlation with no possible interlink for causation – they are spurious correlations. *YOU* are the one putting on an emphasis that causation is possible, not the data itself.
“Correlation without some hypothesis…”
We have a hypothesis.
Who are “we”, Mr. Arrogance?
The world and every one in it. The hypothesis is available to every one. It’s a lie to claim the hypothesis doesn’t exist.
No, you do *not* have a hypothesis. A hypothesis defines a causal relationship that can be tested and falsified. No one that I know of has produced a hypothesis for global warming that can be tested. Data matching algorithms are not hypotheses. Linear regressions are not hypotheses.
To do science, one must state an hypothesis and the functional relationship that supports it. Then one performs experiments to quantify how well the functional relationship meets the observations.
Climate science has only done one part of this, state a hypothesis. No functional relationship and no experiments.
Just like you are doing, only correlation rules the day. You can curve fit all you want, it is not validated by making predictions of what will occur given defined inputs.
What temperature does your “model” show for a CO2 concentration from 400 to 560? What is the uncertainty?
“A hypothesis defines a causal relationship that can be tested and falsified.”
“To do science, one must state an hypothesis and the functional relationship that supports it.”
Add “hypothesis” to the long list of words the Gormans use incorrectly.
Point to a single definition of hypothesis that requires a “functional relationship”. How would you find a functional relationship between children’s heights and their parents? Siblings can be very different heights.
Hypothesis – Wikipedia
Scientific hypothesis | Definition, Formulation, & Example | Britannica
On the scope of scientific hypotheses | Royal Society Open Science (royalsocietypublishing.org)
Do these suffice? Note: in order to perform empirical tests, one must have a functional relationship that describes the independent variables to use and the resulting dependent variables to measure. Without the relationship, one cannot prove experimentally whether the relationship is valid.
As I thought – you couldn’t find one. Not a single mention of a functional relationship.
“Note, in order to perform empirical tests, one must have a functional relationship that describe the independent variables to use and the resulting dependent variables to measure”
Complete nonsense. You still don’t seem to understand what a functional relationship means. I would guess most research is based on statistical models not functional relationships.
If my hypothesis is that a drug will improve recovery, I would test against the null-hypothesis that the drug has no effect. You do not need a functional relationship between the drug and recovery time, just significant evidence of a correlation. There can be no functional relationship because you simply don’t expect the drug to have the same effect on every patient.
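A minimal sketch of such a test, with made-up recovery times: a two-sample t-test against the null hypothesis that the drug has no effect on mean recovery time. No functional relationship between drug and patient is assumed, only a difference in group means.

```python
import numpy as np
from scipy import stats

# Hypothetical recovery times in days for treated vs control patients.
rng = np.random.default_rng(3)
treated = rng.normal(6.5, 2.0, 40)
control = rng.normal(8.0, 2.0, 40)

# Null hypothesis: the drug has no effect on mean recovery time.
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null: evidence the drug changes recovery time.")
```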
I’m glad you’re not my general practitioner 🙂
Or druggist!
Really? You would prefer it if your GP assumed all people will react identically to the same drug?
That wasn’t the claim. Dose rates and side effects vary between patients and over time.
The claim was
I still expect perindopril arginine to lower blood pressure, paracetamol to be a mild painkiller, barbiturates to be depressants and amphetamines to be stimulants.
But that’s not a functional relationship. A functional relationship means that the same input will always give the same output.
https://online.stat.psu.edu/stat415/book/export/html/875
Whether Jim actually means a functional relationship, or is just misusing the term, is another question. I’ve asked him many times what he means by it, but have never got a coherent answer.
A functional relationship for a drug means that it will cause a given reaction to chemicals and/or cells in the body.
I take loratadine. It works as follows:
The process of how drugs work is part of the approval process for given treatments. Can they have off-label reactions, certainly.
Your straw man arguments are tiring.
You keep complaining that I don’t post references, but when I do you ignore it and just assert your definition is correct.
I suspect you assume “functional” in this context means “how something works”, which is an easy mistake to make given the word has overloaded meanings. But in the context of a functional relationship it means “like a function” and not “working, practical, useful” or any of the other synonyms.
And none of this would be a problem if you didn’t keep throwing these terms into your comments without understanding their meaning. It just ends up being a massive distraction from the point you think you are making.
I assume that what you actually mean is that in your opinion any hypothesis must define a physical process. That’s where I’m disagreeing with you. Hypotheses can be tentative speculations, based on some observed phenomenon. You do not need an explanation as to why there is a correlation for the hypothesis to be valid. You might not try to understand why it happens before you’ve tested whether it does happen.
And in the case of CO2 it’s not relevant, because the “why” was around long before the evidence that it does happen. The prediction that rising CO2 levels would raise temperatures was around long before it could be tested.
Ahh, the joys of ambiguity arising from overloaded terms.
“But that’s not a functional relationship. A functional relationship means that the same input will always give the same output.”
More malarky. The fact that you don’t know all the factors in a functional relationship is *NOT* proof that there is no functional relationship!
There *is* a functional relationship between the amount of heat injected into a material and its temperature rise. BUT you need to know the mass of the object and the specific heat of the object in order to calculate the temperature rise.
The fact that you may not know the mass or the specific heat in every case doesn’t mean there isn’t a functional relationship.
There *is* a functional relationship between the specific alloy of silver you are working with and its melting point. But it’s not always obvious what the actual alloy makeup is for a specific piece of silver you are working with. You therefore start soldering it with a low temp flame and progress up as you need. The fact that you don’t know the inputs to the functional relationship doesn’t mean the functional relationship doesn’t exist!
There *is* a functional relationship at play. The issue is whether or not you know all the factors involved in the functional relationship. With drugs used on living things the functional relationship can be so complex that it is impossible to calculate beforehand. E.g. drug uptake into target cells is certainly a factor in the functional relationship. And that uptake rate depends on even more factors.
But that doesn’t mean there isn’t a functional relationship.
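For the heat example above, the functional relationship can be written down directly as a short sketch (taking liquid water’s specific heat as roughly 4186 J/(kg·K)); the relationship ΔT = Q/(m·c) exists whether or not the mass and specific heat of the particular object are known in practice.

```python
def temperature_rise(heat_joules, mass_kg, specific_heat=4186.0):
    """Functional relationship dT = Q / (m * c); default c is for liquid water."""
    return heat_joules / (mass_kg * specific_heat)

# Same heat input, different (possibly unknown in practice) mass:
print(temperature_rise(100_000, 1.0))   # ~23.9 K rise in 1 kg of water
print(temperature_rise(100_000, 10.0))  # ~2.4 K rise in 10 kg of water
```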
“The issue is whether or not you know all the factors involved in the functional relationship.”
You are never going to know all the factors. And more to the point, you do not need to know all the factors to have a hypothesis. The hypothesis may just be that there exists a correlation between two factors.
From your own quote “A hypothesis is a tentative statement about the relationship between two or more variables.”
You are insisting that all hypotheses have to describe a complete system with exact predictions. But it’s usually only once you’ve done the experiments that you can begin to describe the detailed model. From the same Wiki page:
“You are never going to know all the factors”
That does *NOT* mean the functional relationship doesn’t exist!
“And more to the point, you do not need to know all the factors to have a hypothesis.”
You do if you want an ACCURATE guess as to what the outcome of an experiment will be. The more factors you can identify and measure the more accurate your estimate of the experiment outcome will be.
Hypothesis: Water boils at 212F.
Won’t the experimenter be surprised when he measures the boiling point of water in a car radiator!
Again, not knowing all the factors in a functional relationship does not mean that it doesn’t exist.
“The hypothesis may just be that there exists a correlation between two factors.”
That’s a STATISTICAL hypothesis, not one associated with natural phenomenon. It actually requires TWO hypotheses, not one.
“That does *NOT* mean the functional relationship doesn’t exist!”
your claim was you couldn’t have a hypothesis without defining the functional relationship.
“That’s a STATISTICAL hypothesis”
and that’s the basis of much of science.
“your claim was you couldn’t have a hypothesis without defining the functional relationship.”
I didn’t claim you had to have an ACCURATE functional relationship.
“and that’s the basis of much of science.”
Not physical science associated with natural phenomenon – e.g. physical measurements of objects.
Statistical hypotheses are *very* subject to spurious correlation. That is one reason why physical hypotheses must be *testable”. Statistical hypotheses do not lend themselves to being testable. I.e. CO2 causing global warming.
“You are insisting that all hypothesis have to describe a complete system with exact predictions. But it’s usually only once you’ve done the experiments that you can begin to describe the detailed model. From the same Wiki page”
As usual, you didn’t even bother to read what you quoted. Cherry picking at its finest!
“From your quote: “the hypothesis that a relation exists cannot be examined the same way one might examine a proposed new law of nature.”
Global warming involves laws of nature and NOT a statistical correlative relationship. Correlation does *NOT* imply causation. For laws of nature you need causative relationships in order to determine if the correlation is causative.
All climate science has is a statistical correlation between CO2 and global warming. And it's not even a good statistical correlation, as CoM has shown.
Climate science keeps searching for causation but they’ve never even been able to define a testable proposal let alone any set of measurements that define anything other than correlation! The climate models are nothing more than a data matching exercise no matter how complex climate science says they are.
“Global warming involves laws of nature and NOT a statistical correlative relationship. “
Yes, that's the point. There is a hypothesis, based on hypothesised natural laws, that the more CO2 in the atmosphere the hotter the planet gets. We performed a decades-long experiment to test this, and the result is that the more CO2 put into the atmosphere, the hotter the planet got. That does not prove the hypothesis correct, correlation not implying causation, and you never prove a hypothesis. But what it does do is fail to falsify the hypothesis. And hence it is evidence in support of the hypothesis.
“And its not even a good statistical correlation as CoM has shown.”
Astonishing levels of "only accepting evidence that agrees with your religion". I've just given you a statistical test that looks at 150 years of data and shows a highly significant correlation. You've spent the last couple of days coming up with as many ways as possible to reject it, including claiming the annual uncertainties are 15°C. Yet a self-selected 7-year trend with zero statistical significance is enough to convince you that the correlation does not exist.
“There is a hypothesis based on hypothesised natural laws that the more CO2 in the atmosphere the hotter the planet gets.”
You keep forgetting that the hypothesis must be testable to be legitimate.
I can hypothesize that evil angels are causing the warming. It's of no more use than hypothesizing that CO2 causes it!
“We performed a decades long experiment to test this, and the result is that the more CO2 Put into the atmosphere the hotter planet had got. “
Malarky! CO2 has been higher when we were in a glacial period!
“you never prove a hypothesis”
Tell this to Newton. Tell this to Gauss. Tell this to Ohm.
” I’ve just given you a statistical test that looks at 150 years of data and shows a highly significant correlation. “
Statistical tests can only show correlation. They are *not* a substitute for a hypothesis that lays out a functional relationship that can be tested and can show a causal relationship.
“Yet a self selected 7 year trend with zero statistical significance is enough to convince you that the correlation does not exist”
If CO2 and temp is not correlated then they aren’t correlated. A self-evident truth. What is implied is that the CO2 vs temp relationship is a spurious correlation.
“Statistics is no substitute for Physics” — Pat Frank
Ye Gods and Little Fishes, he’s still wittering on about this. Quite amazing the lengths people will go to avoid accepting something that doesn’t agree with their religious beliefs. Just impossible to accept that a correlation between CO2 and temperature is evidence that the greenhouse effect is real. So instead we have hand waving about it being physically impossible, ever more inflated claims about uncertainty – currently up to 15°C. And then deep dives into epistemology, to try to erase the idea that there is even an hypothesis.
So let’s trawl through this again.
“You keep forgetting that the hypothesis must be testable to be legitimate. ”
It is testable. I tested it, remember, that’s why you are so agitated. It did not fail the test.
“I can hypothesize that evil angels are causing the warming.”
If you can demonstrate a statistically significant correlation between evil angles and global temperatures, I’ll listen. But first you will have to demonstrate how you measured them.
“Malarky! CO2 has been higher when we were in a glacial period! ”
Strange how quickly you ignore uncertainty when it suits you. I assume you are talking about the Ordovician period, over 400 million years ago. The "experiment" I'm talking about was conducted over a period when most other conditions were more or less constant. Go back 400 million years and you also need to take into account all the major differences, especially the fact that solar output was somewhat less.
““you never prove a hypothesis”
Tell this to Newton. Tell this to Gauss. Tell this to Ohm.”
OK, I will when I see them next, but I think Newton may object to the claim that he proposed hypotheses.
I take it you are not a fan of Popper. Neither am I, but I still think that it's impossible to prove anything in an absolute sense. Best you can say is that it's probably true, or almost certainly true, but that requires at some level statistical reasoning.
“Statistical tests can only show correlation.”
Not really true, and by the same token, non-statistical tests might only show correlation.
“If CO2 and temp is not correlated then they aren’t correlated.”
You just keep proving you know nothing about statistics.
"Just impossible to accept that a correlation between CO2 and temperature is evidence that the greenhouse effect is real."
You continue to show how little you know about the real world. You are a blackboard genius. In the real world correlation does *NOT* provide evidence of causation. Correlation is *not* evidence of a hypothesis validity concerning physical laws.
“So instead we have hand waving about it being physically impossible,”
Your lack of reading comprehension skill is showing again. What is being claimed is WE DON’T KNOW. Climate science has yet to come up with a way to actually test the hypothesis – except for believing as you do that correlation is causation! It’s like measurement uncertainty and the true value of a measurand – it’s part of the GREAT UNKNOWN. It’s only you and climate science that base your life decisions on your cloudy crystal balls.
“It is testable. I tested it, remember, that’s why you are so agitated. It did not fail the test.”
No, you didn’t. Correlation does *NOT* prove causation in the REAL WORLD.
“If you can demonstrate a statistically significant correlation between evil angles and global temperatures, I’ll listen. But first you will have to demonstrate how you measured them.”
You TOTALLY missed the point! It doesn’t matter if I can measure the growth rate of evil angels. CORRELATION IS NOT CAUSATION!
You proclaim that you know correlation is not causation just like you know that measurement uncertainty is not random, Gaussian and cancels. But you *always* wind up using both of these memes in whatever you are asserting in the moment. Cognitive dissonance at its finest or just plain troll tactics – only you know which.
“The “experiment” I’m talking about was conducted over a period when most other conditions were more or less constant.”
More malarky! Even NOAA is now saying that CO2 is *NOT* well-mixed. Yet you and climate science just keep on using that assumption as proof of *global* climate change.
“but I think Newton may object to the claim that he proposed hypothesis.”
Hypotheses become laws through testing and validation. Newton *started* with a hypothesis!
“Not really true, and by the same token, non-statistical tests might only show correlation.”
Yes, really true. Statistical testing requires a null hypothesis and an alternative. Where has climate science actually provided either? Especially the alternative.
Any test that depends on correlation is *NOT* a valid test for determining causation. Determining causation AUTOMATICALLY provides correlation, even if it is a negative correlation!
He’s letting his true agenda slip through the cracks: CAGW.
And he continues his coy act in this regard.
At this point I’m wondering if “evil angles” are measured in degrees or radians.
"No, you didn't. Correlation does *NOT* prove causation in the REAL WORLD."
You are still confusing “proof” with “evidence”. The fact you keep writing in all caps doesn’t prove you are wrong, but it is evidence.
“You TOTALLY missed the point! It doesn’t matter if I can measure the growth rate of evil angels. CORRELATION IS NOT CAUSATION!”
But you haven’t shown any correlation for your hypothesis, you haven’t even established that evil angles exist.
“More malarky! Even NOAA is now saying that CO2 is *NOT* well-mixed. Yet you and climate science just keep on using that assumption as proof of *global* climate change. ”
What has that got to do with my point, which was about the difference in global conditions 400 million years ago?
“Newton *started* with a hypothesis!”
What do you think he meant by Hypotheses non fingo?
“Statistical testing requires a null hypothesis and an alternative.”
Depends on the type, but yes.
“Where has climate science actually provided either?”
Alternative hypothesis, temperature will increase with increasing CO2. Null-hypothesis, there will be no correlation between atmospheric CO2 and temperature.
“Any test that depends on correlation is *NOT* a valid test for determining causation.”
Not true. But in this case we are not testing for causation, only correlation. Demonstrating a correlation is rejecting the null-hypothesis. As such it provides evidence for the alternative hypothesis. Hence we did not falsify the hypothesis.
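As a minimal sketch (in R, with made-up stand-in series rather than real observations, so the numbers are purely illustrative) of the kind of null-hypothesis test being described:

set.seed(1)
co2     <- seq(300, 420, length.out = 140)                 # illustrative CO2 values, ppm
anomaly <- 2.4 * log2(co2 / 300) + rnorm(140, sd = 0.15)   # illustrative anomalies, degrees C
ct <- cor.test(co2, anomaly)   # Pearson test; null hypothesis: correlation is zero
ct$estimate                    # sample correlation
ct$p.value                     # a small p-value rejects the null of no correlation

Rejecting the null this way only speaks to correlation, not causation, which is exactly the point being argued over in the rest of the thread.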
“You are still confusing “proof” with “evidence”. The fact you keep writing in all caps doesn’t prove you are wrong, but it is evidence.”
You *really* don’t understand science at all, do you? Evidence *is* proof until other evidence contradicts it! The problem with climate science and “CO2 drives temperature” is that climate science has never provided a test methodology that will give evidence of the validity of that hypothesis. Correlation is not evidence. Models are not evidence.
Correlation is not even a hint. There is no way to judge purely from correlation whether it is just a spurious relationship or a functional relationship.
“What has that got to do with my point, which was about the difference in global conditions 400 million years ago?”
You really don’t even understand what you posted, do you?
” Go back 400 million years and you also need to take into account all the major differences”
Go back 30 years and you also need to take into account all the major differences – yet climate science doesn’t because it can’t even identify the major differences let alone quantify them! And you somehow believe that correlation between CO2 and temperature is the only major difference that matters, up to and including ignoring the lengthy pauses that have occurred in the relationship.
“Alternative hypothesis, temperature will increase with increasing CO2. Null-hypothesis, there will be no correlation between atmospheric CO2 and temperature.”
That is *NOT* the hypotheses used by climate science. Their hypothesis is that CO2 causes global warming. The alternative hypothesis is that correlation between CO2 and temperature will prove the relationship.
The lengthy pauses show that your alternative hypothesis is *true*, there will be no correlation between CO2 and temperature. A “sometimes the correlation exists and sometimes it doesn’t” is *NOT* proof of a functional relationship.
“Evidence *is* proof until other evidence contradicts it!”
Then you really need to explain what definition of "proof" you are using. Usually it implies certainty, or at least something close to certain. The idea that you have proven something, only to have it be shown to be false as soon as other evidence arrives, is a contradiction. If it was wrong it wasn't proven.
Rest of your rant ignored.
Your lack of science training is showing again. Take Ohm's law, E = IR. That functional relationship was thought to be true for decades; it predicted observations quite accurately. Later on it was found, with improved instrumentation, that it is not strictly true at the quantum level, it was true only on a long-term average basis. The current is actually an integral of several factors, factors which Ohm was unable to identify because of the level of technology involved.
Now, look at climate science. The climate models have *still* not done anything to develop a quantitative functional relationship for clouds, opting to just guess at the value of a fixed parameter. As far as I can find in the literature they haven’t even developed a proposed test suite from which a functional relationship can be derived.
How many other factors have they done this with? How many factors do they not even know about?
Remember, Ohm’s Law *worked*, at least for the level of technology available at the time. It worked everywhere and it worked all the time. It was a functional relationship that matched observations. It was a “proven” functional relationship.
That just isn’t true for climate science. Their “functional relationship” concerning CO2 and temperature does *NOT* match observations. Nor does it work everywhere. Nor does it work all the time. Their “functional relationship” between CO2 and temperature has *not* been proven to be true in any way, shape, or form.
“The idea that you have proven something, only to have it be shown to be false”
It’s not just a binary choice, “true” or “false”. Add in “incomplete”. Ohm’s Law was “incomplete”. It worked at the macro level but not at the quantum level. The “Climate Science Law” that CO2 controls temperature because of the correlation between the two doesn’t even work at the macro level, let alone the quantum level.
I wish you could get through a whole post without starting with snide ad hominems. It really undermines your comments, even when you are making better points, like here.
I think your general point here is that in your view only science that can produce an exact predictable result is true science. I disagree. Most science is messy. It relies on identifying causes that are hidden amidst lots of variable data. Saying these are not true science means dismissing much.
You cannot control the earth's climate in a laboratory. You cannot make an exact prediction of the temperature at any point on the earth based on how much sun it gets. The world is just too complicated. It has ocean currents and winds and different types of surface, and they all interact in complex, sometimes unpredictable ways. But that doesn't mean you can dismiss the hypothesis that the amount of sunlight will affect the temperature.
World is crazier and more of it than we think,
Incorrigibly plural.
“ Take Ohm’s law, E = IR. That functional relationship was thought to be true for decades”
Which is why you should make a distinction between thinking something is true, and proving it is true.
You don't make any mention of the assumptions in Ohm's law. The functional relationship only holds for ideal conditions, such as no change in temperature.
“It’s not just a binary choice, “true” or “false”. Add in “incomplete”. “
Personally I prefer to think of it as not true or false, but a spectrum of probabilities. Nothing can be proven to be true or false, but we can say the probability approaches 0 or 1. Although I think almost 1 is a more difficult proposition. It's usually only close to one given any number of assumptions.
“Which is why you should make a distinction between thinking something is true, and proving it is true.”
Ohm's law is *still* in use today in its original form. I use it every single day. I do *NOT* do a quantum statistical analysis every time I want to calculate what the current flow over a 14ga wire that is 100′ long will be. I prove it to be true with every single use.
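For what it's worth, a minimal sketch (in R) of that kind of everyday calculation; the supply voltage and load resistance are invented for illustration, and the wire resistance is an approximate handbook figure for 14 AWG copper at room temperature:

r_per_ft <- 2.525 / 1000            # approx. resistance of 14 AWG copper, ohms per foot
r_wire   <- r_per_ft * 100          # 100 ft run, about 0.25 ohm
v_supply <- 12                      # volts, illustrative
r_load   <- 6                       # ohms, illustrative load
i <- v_supply / (r_wire + r_load)   # Ohm's law rearranged: I = E / R
c(current_amps = i, wire_drop_volts = i * r_wire)

No quantum corrections are needed at that level of accuracy, which is the point being made here.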
"You don't make any mention of the assumptions in Ohm's law. The functional relationship only holds for ideal conditions, such as no change in temperature."
Thank you again Captain Obvious! Where do you think the theory for superconductors came from? Why do you think I’ve been talking about component drift due to heat effects during operation? Calibration drift in electronic components *is* a very real thing and a major component is drift in value from thermal effects.
In essence, you just confirmed my assertion that climate science makes no attempt to quantify the systematic bias resulting from thermal drift. Since it impacts temperature readings it gets included in the “global warming” values, even in the anomalies. That invalidates the assumption that all measurement uncertainty is random, Gaussian, and cancels all by itself!
“I prove it to be true with every single use. ”
Again, you really need to define what you mean by “prove”.
We are probably talking at cross-purposes again. I take prove to mean you have shown that something is certainly true, not just for the circumstances you've tested, but for all possibilities. It's impossible to prove that is the case empirically, because you can't test all possible values, and even if you could, it only shows it worked when you tested it, not that it will always work.
But if you take "prove" to mean it seems to work every time for normal uses and it's never let me down, then yes, you can say it's a proven principle. It's just not what I would take to be a scientific proof.
I'm not sure if Ohm's law is a good example in any case – Newtonian gravity is a better example of a law that can be shown to be correct across most practical circumstances, but it still ends up being disproven as a universal law.
It is not a universal law, and that’s nothing to do with the “quantum level”.
Here’s a comment that sums it up:
https://physics.stackexchange.com/questions/195016/how-can-one-derive-ohms-law
“I take prove to mean you have shown that something is certainly true, not just for the circumstances you’ve tested, but for all possibilities.”
I didn’t say this. You are making up another strawman to argue with.
“It’s impossible to prove that is the case empirically, becasue you can’t test all possible values”
The answer for this should be: “WE DON’T KNOW”. It should not be “We guess it is this.”
Ohm never said his functional relationship was universal. It described the current flow through a conductor in an electric circuit. Perhaps if you bothered to actually *read* for understanding you would have picked up on this. Instead you just do the usual, cherry pick something you think confirms your mistaken beliefs.
If you’d actually read your quote you would have realized this. “Rather, Ohm’s Law is an idealization of the observed behavior of these materials.”
“I didn’t say this. You are making up another strawman to argue with.”
Huh? I’m saying what I take “prove” to mean. I’m asking you to explain what you take it to mean. I’m still waiting.
“I think your general point here is that in your view only science that can have an exact predictable result is the only true science. I disagree.”
I never said this. Stop putting words in my mouth. Stop creating strawmen to argue with.
I *told* you that Ohm’s law was improved through the use of quantum effects. Those are statistical equations. Do I need to dig out my textbooks from 1968 where we studied electron tunneling through an energy barrier in a semiconductor junction? While it is a statistical process, when integrated over a period of time the results all converge on a value whose variation is less than measurable.
“Most science is messy.it relies on identifying causes that are hidden amidst lots of variable data. Saying these are not true science means dismissing much.”
Another strawman! No one is dismissing the quantum effects in a semiconductor junction yet they are not applicable in a circuit design at the macro level.
“You cannot control the earth’s climate in a laboratory. “
Thank you Captain Obvious! Why would you want to?
“You cannot make an exact prediction of the temperature at any point on the earth based on how much sun it gets.”
Why not? Since climate is very much dependent on temperature over time you just basically said the climate models are junk since they can’t predict temperature.
“But that doesn’t mean you can dismiss the hypotheses that the amount of sunlight will affect the temperature.”
And you can’t dismiss the fact that pauses in temperature rise happen. That means that *something* other than CO2 is a major factor – which climate science totally ignores. It’s probably clouds and they just guess at an “average” value that applies everywhere on the globe when it is obvious that clouds vary not just locally but also globally.
“I never said this. Stop putting words in my mouth. Stop creating strawmen to argue with.”
I was not putting words in your mouth. I was giving my opinion of what I thought you were trying to say. Hence starting with “I think your general point here…”. And I’m not going to be lectured on strawmen from someone who continuously claims I’m saying all uncertainties are random and Gaussian, when I’ve repeatedly explained why I’m not saying that.
“That means that *something* other than CO2 is a major factor – which climate science totally ignores.”
Nobody ignores El Niños. There is nothing remotely surprising that over a short period, temperatures cool down after an El Niño.
Your double standards are so obvious here. You reject out of hand 140 years of data showing a statistically significant correlation between CO2 and temperature, just claiming it's meaningless and claiming the uncertainty is ±15°C. But then claim a 7-year period proves something.
“ I was giving my opinion of what I thought you were trying to say.”
That’s the very definition of putting words in my mouth.
“I’m not going to be lectured on strawmen from someone who continuously claims I’m saying all uncertainties are random and Gaussian, when I’ve repeatedly explained why I’m not saying that.”
You saying this doesn't cut it. Every assertion you make proves that I am right – right down to saying that the standard deviation of sample means is the measurement uncertainty of the mean. You can run but you can't hide – the meme shows up in everything you post.
“Nobody ignores El Niños. There is nothing remotely surprising that over a short period, temperatures cool down after an El Niño.”
You can’t even get cause and effect straight! El Nino is a *result* of heat accumulation, not a *cause* of heat accumulation!
"You reject out of hand 140 years of data showing a statistically significant correlation between CO2 and temperature"
I reject that correlation is causation. The fact that you assert correlation is evidence of causation is just perfect proof of your lack of physical science training. It’s no wonder you believe all measurement uncertainty is random, Gaussian, and cancels.
Don’t blink!
“You are never going to know all the factors. And more to the point, you do not need to know all the factors to have a hypothesis. The hypothesis may just be that there exists a correlation between two factors.”
A hypothesis must be testable. Correlation is not a valid test since there is no way to determine if the correlation is spurious or causal.
If you don't know all the factors then the experiments you design to test your hypothesis are not going to give the measurement results you expect. You then have the option of refining your hypothesis by adding in factors or abandoning your hypothesis for a different one.
“From your own quote “A hypothesis is a tentative statement about the relationship between two or more variables.””
In the physical world the relationship must be functional and testable. Apparently no one in statistical world cares. A spurious correlation is as good as a causal relationship.
“You are insisting that all hypothesis have to describe a complete system with exact predictions. But it’s usually only once you’ve done the experiments that you can begin to describe the detailed model. From the same Wiki page”
So what? Hypotheses get refined every single day! That doesn’t mean that the hypothesis isn’t defining a functional relationship!
If my hypothesis is that water boils at 212F – a functional relationship – and then I find out that the water in an auto radiator boils at a different temperature then I refine the functional relationship – perhaps to add pressure as another factor in determining the boiling temperature.
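A minimal sketch (in R) of that kind of refinement, using one published set of Antoine constants for water (an approximation, intended for roughly 100 to 374 degrees C, so the numbers are illustrative rather than exact):

antoine_boil_c <- function(p_mmhg, A = 8.14019, B = 1810.94, C = 244.485) {
  B / (A - log10(p_mmhg)) - C      # Antoine equation solved for boiling temperature, degrees C
}
to_f <- function(t_c) t_c * 9 / 5 + 32
to_f(antoine_boil_c(760))          # about 212 F at 1 atm
to_f(antoine_boil_c(2 * 760))      # roughly 249 F at about 2 atm, e.g. under a 15 psi radiator cap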
I'll repeat one more time – it's glaringly obvious that you have absolutely *NO* experience in the real world of physical science.
from wikipedia: “A hypothesis is a tentative statement about the relationship between two or more variables. It is a specific, testable prediction about what you expect to happen in a study. It is a preliminary answer to your question that helps guide the research process.”
The fact that you have two variables with a relationship means you have a functional relationship.
from wikipedia: “For proper evaluation, the framer of a hypothesis needs to define specifics in operational terms.”
Operational terms implies a functional relationship, not just a correlation.
It's pretty obvious that you are only familiar with statistical hypothesis testing – which requires TWO hypotheses, the null hypothesis and an alternative hypothesis. It is a correlational method, not the kind of scientific functional relationship that is typically used for hypotheses concerning natural phenomena.
“My model does not include time as an independent variable.”
Huh? Your horizontal baseline is in years. Since when is “year” not a time index?
Your memory is so bad. You've repeated this nonsense numerous times and I keep explaining why you are wrong.
Time is not part of the model
I know because I wrote it. The model is Anomaly ~ log(CO2) + ONI + AMO + AOD + TSI.
Time is not used. The only way time comes into it at all is I use a lag of 1 year on each variable. I doubt if that makes much of a difference.
The fact that I then show the predicted values on a time series graph does not mean time is being used. It's just easier to understand than if the plot were in a random order.
“The only way time comes into it at all is I use a lag of 1 year on each variable.”
ROFL!! You use time as an index but time is not an index. Your cognitive dissonance is showing again!
This is like pulling teeth. Time is not a variable in the model. At no point does it multiply the year by a constant. It's used as an index simply to ensure that the response in temperature in one year is based on the independent variable states from the previous year. That does not make time an independent variable. There is no assumption that things will change linearly over time. If the states of the independent variables are the same in 1999 as they were in 1899, the model would predict the same average value in 2000 and 1900.
“That does not make time an independent variable.”
Of course it does. The slope of the line has “time” in the denominator. That makes time an independent variable in the relationship!
This is really pathetic. You make a mistake and when I correct you, you just accuse me of lying. Time is not an independent variable in the model. I know because I wrote it. If you don’t believe me, try it yourself.
For the record, here's the summary:
Family: gaussian
Links: mu = identity; sigma = identity
Formula: GISS | mi(ci95/2) ~ CO2t + ONI2t + AODt + AMOt + TSIt
Data: cli.df (Number of observations: 141)
Draws: 8 chains, each with iter = 5000; warmup = 2500; thin = 1; total post-warmup draws = 20000

Regression Coefficients:
          Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept   -49.20     25.98  -100.40     1.97 1.00    33735    17085
CO2t          2.42      0.05     2.32     2.52 1.00    32503    18625
ONI2t         0.08      0.01     0.06     0.10 1.00    31422    15543
AODt          0.08      0.02     0.04     0.13 1.00    31013    17269
AMOt          0.21      0.04     0.12     0.29 1.00    30878    15474
TSIt          0.02      0.02    -0.02     0.06 1.00    33829    16841

Further Distributional Parameters:
      Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sigma     0.07      0.01     0.06     0.08 1.00    16501    14495

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS and Tail_ESS are effective sample size measures, and Rhat is the potential scale reduction factor on split chains (at convergence, Rhat = 1).

Time is simply not a variable. I just drew a time series graph with the prediction for each year as a convenient way of showing what the output looks like.
I could have given you the graph with CO2 as the x-axis. No time in the graph at all.
Or against the ENSO conditions.
Or against TSI
TSI again with the graph.
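For readers wanting to reproduce something like this, a minimal sketch (not the author's actual script) of how a summary in the quoted format could be produced with the brms package, assuming a data frame cli.df with annual columns GISS, ci95, CO2t, ONI2t, AODt, AMOt and TSIt as named in the summary above:

library(brms)
fit <- brm(
  GISS | mi(ci95 / 2) ~ CO2t + ONI2t + AODt + AMOt + TSIt,  # mi() treats ci95/2 as measurement error on the response
  data   = cli.df,
  family = gaussian(),
  chains = 8, iter = 5000, warmup = 2500
)
summary(fit)   # prints output in the format quoted above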
The ENSO and TSI counts seem to be somewhat left skewed. As Julius Sumner Miller would have said, “Why is this so?”
That’s similar to the chart I get from ERA5 data, but there are some differences. Could you show the same with ln(CO2) on the X-axis as well, please?
It’s a good fit in a noisy sort of way above about 330 ppm, but why the spike around 310-315, and why the sections of negative correlations at lower values?
Those are the “well, that’s interesting” features.
Then show the relationship of your independent variables and the dimensional analysis that arrives at a value on the x-axis other than time. You have done a dimensional analysis, right?
“You have done a dimensional analysis, right?”
You did ask Pat Frank that question 5 years ago, right?
Its true, blob has a big burr under his saddle about Pat Frank.
You could start by first learning what an uncertainty interval is — and what it isn’t.
Or you could answer my question. If you are claiming that Frank is saying the model is physically impossible just because of claimed massive uncertainty interval that just demonstrates your lack of understanding of what the words mean.
How does thinking there is a couple of degrees of uncertainty in each annual anomaly result in the model being physically impossible?
The fact that he seems unwilling to actually explain his comments, and leaves it to the likes of you to interpret his assertions, says it all as far as I'm concerned.
And as always, the hypocrisy of this is fully evident. A single simple model showing a possible link between CO2 and temperature is dismissed as physically impossible due to supposed uncertainties, yet not a peep about all the claims about the pause, or claims that 7 years of CRN data prove that all warming is caused by El Niños.
Stop whining.
It’s alright. I didn’t expect an answer.
See above.
Large hat sizes cause these problems.
He’ll never stop. Just like he will never understand uncertainty is not error.
You refuse to understand that you are using ONLY the stated values of the anomalies to create your trend. That is physically impossible since the stated values do *NOT* represent true values. The true values can be anywhere in the uncertainty interval and can be widely different from point to point.
You are *still* stuck in the memes that:
Pat has long ago abandoned trying to educate you on uncertainty and its meaning in the real world. Some of us keep trying but you stubbornly refuse to admit that if the uncertainty interval is larger than your anomaly then you can’t just create ONE, and only one, trend line based on your subjective guesses as to what the true values are.
“You refuse to understand that you are using ONLY the stated values of the anomalies to create your trend.”
Not really true. In my later attempt I included random measurement error in the anomaly values. You of course won’t like that as it’s assuming the errors are Gaussian and random.
“That is physically impossible since the stated values do *NOT* represent true values. “
You really don't understand how any of this works. No model will ever be exact. There will always be simplifications. That doesn't make the model "physically impossible".
“Pat has long ago abandoned trying to educate you on uncertainty…”
Ha, good one. I needed a laugh.
His education has mostly been him refusing to answer any question, suggesting I was part of a mathematical mafia, and refusing to accept the simple fact that standard deviations cannot be negative.
“you can’t just create ONE, and only one, trend line based on”
Your attempts to "educate" me are hampered by your inability to ever address what I'm saying. I keep telling you there is not one known true trend. Every coefficient in my model has an associated uncertainty. You really need to stop inventing straw men and try to engage with what's actually being said. And you need to accept the possibility that you are wrong on some points. If you don't, you will always be in your cargo cult – and the easiest person to fool is yourself.
bellcurveman just can’t accept that error is not uncertainty.
The uncertainty interval includes *both* random and systematic uncertainty. The amount of random uncertainty can’t be determined unless you also know the amount of systematic bias that is involved in the interval.
So he just falls back on his standard meme that all measurement uncertainty is random, Gaussian, and cancels. He just can’t help himself!
Along the way he gets up on his high soapbox to declare that the JCGM got it all wrong in the GUM.
“In my later attempt I included random measurement error in the anomaly values.”
How much random measurement error do you suppose there is in a PTR based temp measuring station?
“You of course won’t like that as it’s assuming the errors are Gaussian and random.”
Measurement uncertainty is not error. Again, what is the cause of random error in a modern PTR temp measurement station?
“His education has mostly been him refusing to answer any question,”
He's answered you before. As usual, you just blew it off. It's why I have stopped answering you in some threads. We are not puppets with strings for you to pull. When you continually refuse to understand the basic concepts of metrology it just becomes us continually dancing at the end of your troll strings.
Uncertainty is not error. Yet you continue to confuse the two. The SEM is not measurement uncertainty yet you continue to confuse the two. Measurement uncertainty is not random, Gaussian, and it doesn’t cancel yet you continue to use the meme. True values are impossible to know yet you continue to claim your foggy crystal ball makes it possible for you.
” I keep telling you there is not one known true trend”
Yet that is what you continue to try and convince everyone of. Where is the possible trend line that runs from +0.8C to -0.4C in your graph? It is just as possible as the one you show. At each point in the graph any value between -0.4C and +0.8C can reasonably be assigned to the temperature based on the measurement uncertainty interval that *should* be shown with the graph but which you ignore.
“How much random measurement error do you suppose there is in a PTR based temp measuring station? ”
How many measuring stations are used to get the global annual average?
“Measurement uncertainty is not error.”
Indeed – that's why I said it was measurement error. The uncertainty estimates the amount of error.
“He’s answered you before.”
Ha ha. So funny.
“When you continually refuse understand the basic concepts of metrology it just becomes us continually dancing at the end of your troll strings.”
I’ll explain again – unless you are prepared to accept the possibility that you are wrong, you will always be the person who can most easily fool themselves. You need to accept that if I’ve already told you that I think your understanding is wrong and non-sensible, it doesn’t matter how many times you repeat it in capital letters, I’m still not going to accept it. If you think you are right you need to engage with what I’m telling you and explain why it’s wrong, and preferably present some evidence, rather than just assertions.
“Uncertainty is not error.”
You keep repeating that truism so many times it's become a sort of magic incantation. You think it means something, but you never explain what you think it means. How does saying uncertainty is not error help you understand the law of propagation of uncertainty? What difference do you think it makes if you use that title rather than the general equation for error propagation? The GUM spells out that the result of the equation is the same.
“The SEM is not measurement uncertainty yet you continue to confuse the two.”
And I say it is. The usual stalemate. Now if you could explain what you think the uncertainty of the mean is, if not the SEM, and in particular why it should be the uncertainty of the sum, it would be a start. I keep asking if you want to think of the mean as a measurand, and if not, why you think it could have a measurement uncertainty.
“Measurement uncertainty is not random, Gaussian, and it doesn’t cancel”
Then you need to explain why the GUM and Taylor et al, say it can be.
“True values are impossible to know”
Are you ever going to respond to my point, that I agree, that’s why there is uncertainty? As I keep saying, you are never going to “educate” me if you never listen to what I’m saying.
“yet you continue to claim your foggy crystal ball makes it possible for you.”
And there’s the result of never listening. Yet more lies. I mean you even quote me as saying ”I keep telling you there is not one known true trend”, yet then claim I’m saying the exact opposite. You are either suffering from a severe memory problem, or are just arguing in bad faith.
“Where is the possible trend line that runs from +0.8C to -0.4C in your graph?”
There isn't one, for the simple reason that it's not compatible with the data. Unless you can run your own model and demonstrate that it's plausible given the data, then stop asking me to do the impossible.
“It is just as possible as the one you show.”
Then it should be easy for you to demonstrate. I assume you are pinning your hopes on some systematic error in the data that conveniently has just reversed the trend. Aside from the sheer improbability of that, you would also have to demonstrate why the same upward trend is seen in independent data sets, such as UAH, and all the other evidence that suggests it's warmer today than it was during the little ice age, and certainly not a degree colder.
“ At each point in the graph any value between -0.4C and +0.8C can reasonably be assigned to the temperature based on the measurement uncertainty interval that *should* be shown with the graph but which you ignore. ”
I ignore it because it’s your fantasy not mine.
“How many measuring stations are used to get the global annual average?”
I didn’t ask that. I asked you what the random error is in a PTR based measurement station.
“The uncertainty estimates the amount of error.”
No, it doesn’t. The uncertainty is *NOT* error. Error is *NOT* uncertainty. You are *still* stuck in the “true value +/- error” meme that was abandoned 50 years ago!
“You think it means something, but you never explain what you think it means”
It’s been explained to you over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over … ad infinitum.
The use of “true value +/- error” implies you KNOW the true value. What was recognized in metrology is that you CAN’T know the true value so you can’t know the error. What you *can* know is what the measurement indicated by your measurement instrument is and what the uncertainty of that measurement is. That uncertainty can be a calculated Type A or a Type B estimate based on the knowledge of various factors affecting the measurement.
“And I say it is.”
Because you’ve never actually studied anything for meaning. All you do is cherry pick things you think confirm your mistaken concepts.
It can be explained very simply: The SEM does *NOT* tell you the accuracy of the population mean, only how precisely you have calculated the mean from uncertain data.
The uncertainty of the actual average value *IS* driven by the variance of the data. The higher the variance the less certain the actual average value becomes. Measurement uncertainty is no different. You want to concentrate on the stated values of the data while ignoring the variance and/or measurement uncertainty of the data.
You simply can’t grasp that your sample data is of the form “stated value +/- measurement uncertainty”. I.e.
m1 +/- u1, …, mn +/- un
*YOU* want to calculate the SEM using only m1, …, mn while ignoring the u1, …, un uncertainties. Those uncertainties *must* be propagated onto the mean calculated from the m1, …, mn values.
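To make the disputed distinction concrete, a minimal sketch (in R, with hypothetical stated values and uncertainties, not real data) contrasting the SEM computed from the stated values alone with the individual standard uncertainties propagated onto the mean:

m <- c(12.1, 11.8, 12.4, 12.0, 11.9)    # stated values m1..mn, hypothetical
u <- rep(0.5, 5)                        # standard uncertainty u1..un of each value, hypothetical
n <- length(m)
sem          <- sd(m) / sqrt(n)         # computed from the spread of the stated values only
u_random     <- sqrt(sum(u^2)) / n      # propagated uncertainty of the mean if the u are independent
u_systematic <- mean(u)                 # if the u were fully correlated (a shared bias), nothing averages down
c(SEM = sem, propagated_random = u_random, propagated_systematic = u_systematic)

Which of these numbers belongs on the mean is, of course, exactly what the two sides of this thread disagree about.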
“Then you need to explain why the GUM and Taylor et al, say it can be.”
There is ONE, and ONLY ONE, situation where this applies. It is when you are measuring the same thing using the same instrument under the same conditions and there is no systematic uncertainty. This is specifically laid out in *all* the resources. If you would actually study them instead of just cherry picking from them you would understand this.
These restrictions simply don’t apply to temperature measurements!
” that’s why there is uncertainty? “
But you ALWAYS just throw it away! You never actually apply it to anything!
“And there’s the result of never listening. Yet more lies. I mean you even quote me as saying ”I keep telling you there is not one known true trend”, yet then claim I’m saying the exact opposite.”
Because you always just throw the uncertainty away. You *never* show all the possible trend lines in *any* of your graphs. You never mention that you can’t know the actual slope of a linear regression line when the differences are less than the uncertainty interval of the measurements.
“There isn’t one for the simple reason that it’s not compatible with the data.”
Of course it is compatible with the data WHEN YOU LOOK AT THE MEASUREMENT UNCERTAINTY RATHER THAN JUST THROW IT AWAY!
“Then it should be easy for you to demonstrate. I assume you are pinning your hopes on some systematic error in the data that conveniently has just reversed the trend. “
Nope. Just the specified Type B measurement uncertainties of the measuring devices. Such as +/- 1.0C for ASOS stations. Those Type B measurement uncertainties are an estimate of the dispersion of the reasonable values of the measurand. As usual you just want to ignore the measurement uncertainty.
“Aside from the sear improbability of that, you would also have to demonstrate why the same upward trend is seen in independent data sets, such as UAH,”
I have explained that based on real world experience with measurement devices. Metals expand when heated. Expansion changes values, including values of the components in a temperature measurement device. That expansion is cumulative over time. Thin-film components drift further and further over time in the same direction – GO LOOK IT UP!
You, and most of climate science, seem to have NO REAL WORLD EXPERIENCE AT ALL. You assume that MetalRod-1 will expand when heated and MetalRod-2 will contract and so the uncertainties will cancel. It just doesn't work that way in the real world. The sun degrades plastic in temp measurement devices in the same manner. It doesn't make some more reflective and some less reflective. And its reflectivity *will* have some impact on the measurement uncertainty of the temperature readings.
“what an uncertainty interval is”
Well, going by the GUM there is no such thing. What I expect you mean is the interval defined by a specific expanded uncertainty U.
The GUM is a little uncertain about what to call this interval. They say it isn't strictly a confidence interval, but they do describe the expanded uncertainty as defining an interval about the result that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to the measurand, with a stated level of confidence.
So not a confidence interval, but does define a level of confidence.
Whatever you call it, it's clear that it defines the part of a probability distribution that covers a specific probability, usually 95% or 99%.
Like a lot of the GUM I think they struggle with definitions as they want to use Bayesian terms without actually using that word.
What any of this has to do with the model being physically impossible is the question. But I predict karlo’s response to this comment will not be any more enlightening.
Why are you quoting to GUM?
You don’t believe measurement uncertainty even exists.
Prediction confirmed.
All attempts to educate you fall flat, why should I give in to your arrogant demands?
Of course you won’t answer. Far better to keep quite and have people think you are a fool, than speak and confirm it.
“keep quite”
Heh.
Fool.
Throw away everything you know about statistics. Measurements don’t deal in sampling groups, sampling a population, confidence intervals, or any other statistical analysis.
Measurements result from observations. There is no sampling size calculation to obtain a low SEM. That is why the GUM says:
The primary purpose of statistical parameters in measurement uncertainty is to DEFINE an internationally accepted definition of the intervals to use in describing the dispersion of observation values that can be attributed to the measurand.
The GUM also says:
That is the internationally accepted definition of the interval to be used when declaring standard uncertainty.
Expanded uncertainties are normally used in order to achieve a wider interval that includes more of the observations.
The standard deviation of the mean can only be useful under strict conditions, primarily observing the EXACT SAME THING. Atmospheric temperatures fail this right off the bat. It is designed to be used when you measure the same mass for use in calibrating a balance beam. Or a gauge block for calibrating a micrometer before use.
“Throw away everything you know about statistics”
That seems to be the Gorman way – yes.
“Measurements don’t deal in sampling groups, sampling a population, confidence intervals, or any other statistical analysis.”
TG: “None so blind as they who will not see.”
“Measurements result from observations.”
What do you think statistics results from?
“There is no sampling size calculation to obtain a low SEM. That is why the GUM says:”
Firstly – nobody said anything about the SEM. We are talking about Expanded uncertainty and what karl called uncertainty intervals.
Secondly – nobody cares that the GUM gets triggered by the phrase standard error of the mean. It makes no difference what you call it – the statistical basis is the same.
“The primary purpose of statistical parameters in measurement uncertainty is to DEFINE an internationally accepted definition of the intervals to use in describing the dispersion of observations values that can be attributed to the measurand.”
Why do you never actually quote the definition in the GUM? It is not the dispersion of "observation values" that can be attributed to the measurand. You are just describing what Tim insists was abandoned 50 years ago: true value + error. What uncertainty actually describes is the level of uncertainty of your observation, and in GUM terms this is the dispersion of values that could reasonably be attributed to the measurand.
“Whereas a Type A standard uncertainty is obtained by taking the square root of the statistically evaluated variance“
Gosh, you mean the standard deviation is the square root of the variance. If only you hadn't thrown away everything you knew about statistics, maybe that wouldn't have been such a revelation.
“That is the internationally accepted definition of the interval to be used when declaring standard uncertainty.”
It is not an interval. You never get that. The standard uncertainty is equivalent to the standard deviation. You can use it to define an interval, based on the expanded uncertainty. But the standard uncertainty is just a positive number.
“The standard deviation of the mean can only be useful under strict conditions, primarily observing the EXACT SAME THING.”
And you are back to making things up. You can take a mean of different things; that mean will come from a distribution that can be estimated, and can be characterized by the standard error of the mean, or if you prefer the standard deviation of the mean.
If you use your own definition of uncertainty, then that is what you need if you want to describe the distribution of expected observations of the mean. If your standard deviation of the mean is 1, you expect around 95% of all future sample means to be within ±2 of the population mean.
But as always, if you don't want to treat the mean as a measurand – stop using the concepts of measurement uncertainty on it.
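A minimal simulation sketch (in R, with an invented mixed population, so nothing here is real temperature data) of the coverage claim above:

set.seed(1)
population <- c(rnorm(5000, mean = 10, sd = 2), rnorm(5000, mean = 15, sd = 3))  # a mix of "different things"
n     <- 50
means <- replicate(10000, mean(sample(population, n)))   # repeated sample means
sem   <- sd(population) / sqrt(n)                         # standard deviation of the mean
mean(abs(means - mean(population)) <= 2 * sem)            # fraction within +/- 2 SEM, close to 0.95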
“Why do you never actually quote the definition in the GUM. It is not the dispersion of “observations values” that can be attributed to the measurand. You are just describing what Tim insists was abandoned 50 years ago. True value + error. “
The dispersion is *NOT* “true value + error”. Have you forgotten that some physical scientists are recommending the use of just the uncertainty interval with no mention of an estimated value? As usual you are trying to conflate “stated value” with “true value”. They are *NOT* the same thing.
“You can take a mean of different things, that mean will come from a distribution that can be estimated,”
Of course you can. That doesn’t imply the mean has any physical meaning. A conglomeration of the heights of Shetland ponies and quarter horses is a data set of different things. You *can* calculate the average height and can estimate what the distribution is BUT WILL THEY ACTUALLY MEAN ANYTHING PHYSICALLY?
The same logic applies to temperature measurements.
They will never acknowledge this because it is a threat to their irrational worldview.
At this point, their relentless pushing of this anti-JCGM noise is nothing but Fake News propaganda.
“Except there was warming in maximum temperatures according to CRN.”
Since 2017 USCRN has been on a cooling trend.
So the minimum temperature must be decreasing faster than the maximums are increasing.
So actually… It is getting COLDER.
Here is a smattering of monthly averages from across the U.S. They surely don’t show what your graph shows.
Perspective is in the eye of the beholder. Your graph has a scale of 0.2 degrees. No one, and I emphasize no one, can feel that kind of change. These graphs show the values that humans feel and can sympathize with.
There is obviously little to no change in the months with high temperatures, i.e., summer. There is an apparent increase in the minimum temperatures in some locations. I don’t know of anyone who would complain about a 2 or 3 degree change in winter low temperatures. This would reduce heating costs and energy usage. Those aren’t bad things!
“They surely don’t show what your graph shows.”
because my graph is showing annual global anomalies, and not monthly seasonal changes in tiny parts of the world.
“Perspective is in the eye of the beholder. Your graph has a scale of 0.2 degrees.”
it shows around 1°C of warming. If you don’t think that’s noticeable as a global average, you need to explain why people are so worried about a little ice age. Half the time I’m being told how great it is that we are warmer than in the 19th century, and the rest of the time I’m told the change is not noticeable.
Your graph is showing a contrived value of an annual ΔT without a common baseline.
My graphs show the absolute temperatures in monthly averages. They don’t need a baseline or a ΔT to display changes or the lack thereof.
One can easily see that the low temperature months have been changing which will make your ΔT larger. FYI, averages hide what is actually changing. It is why variances for your anomalies are important.
Alternatively, ignoring averages makes it a lot easier to cherry pick the times of the year and locations which are showing the least warming.
Getting back to CRN, here’s the maximum temperatures for summer. Warming rate is still 0.38 ± 0.46°C / decade. Not significant, for the usual reasons, but still most likely warming.
The graph also makes clear the problem of blaming this all on El Niños. The biggest outlier was 2012, which was following a moderate La Niña, and 2023 was actually a bit cooler than the previous La Niña years.
Actually, picking just CRN is cherry picking to a certain extent. It does have a limited time span: 2005 to 2024 is 19 years.
If you believe NOAA data, then several of the graphs I posted started at 1900 and the others in 1950. That’s 124 years and 73 years, much longer time periods.
I was responding to the claim made about CRN.
It's the usual distraction here that whenever I demonstrate a claim is wrong, using exactly the same data, I will then be criticized for cherry-picking that data.
The claim being made here is that CRN data is the state of the art data that demonstrates all other data sets are wrong. When I used NOAA data, I'm told it's invalid as it's all caused by urban warming. When I use CRN data I'm told it's too short, only being 19 years long – which is the point I keep making.
“I don’t know of anyone who would complain about a 2 or 3 degree change in winter low temperatures.”
You’ve obviously never spent time on any UK weather forum.
So as this demonstration was so popular, I thought I'd make a few additions.
I've changed the ENSO averages. Before I was just using the average from the previous year; now it's the average of the last half of the previous year and the first half of the current year. This fits in better with the ENSO cycle.
I’ve added TSI as an independent variable, though I’m not sure if using individual annual values is the best option. This did mean dropping the first couple of years as the data I’m using only goes back to 1882.
And I’ve added measurement uncertainties – using the values given by GISS. Before anyone whines, this is assuming the errors are random and Gaussian, that’s the way the package handles them.
None of this has changed the effect of CO2 to any great extent. The model still gives the best coefficient as 2.42, with a 95% interval of [2.32, 2.52]. That's in °C per doubling of CO2.
Estimated R² is 0.949 ± 0.004.
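For anyone wanting to see what a coefficient in °C per doubling implies, a minimal sketch (in R); the coefficient and interval are taken from the summary quoted earlier, while the start and end concentrations are purely illustrative:

beta <- 2.42                       # best estimate, degrees C per doubling of CO2, from the summary above
ci   <- c(2.32, 2.52)              # 95% interval from the summary above
co2_start <- 300                   # ppm, illustrative
co2_end   <- 420                   # ppm, illustrative
beta * log2(co2_end / co2_start)   # about 1.17 C of modelled change between those levels
ci   * log2(co2_end / co2_start)   # the corresponding interval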
Nice work. You may already know about this, but there is an aerosol optical depth dataset that is current through 2023.
https://github.com/ClimateIndicator/forcing-timeseries/tree/main/output
Complete with ±200mK “error” bars.
Absurd.
I can make them bigger if you wish, but claiming they should really be ±15°C seems the real absurdity.
And once again the adage is confirmed, bellcurveman must have the last word.
What a strange person. Has to put in some cheap shot every comment, yet then whines if you respond by claiming you are just trying to have the last word. Let's see if he tries to have the last word this time.
Thanks.
I am using that data for these models. It's strange that so far nobody has even asked me to supply details of the data sources or the methods. Just lots of assertions about what it doesn't say.
“Before anyone whines, this is assuming the errors are random and Gaussian, that’s the way the package handles them.”
It’s what climate science always does. That way they can assume it all cancels.
He can’t see the forest for all the trees he is standing in.
You mention forests. My guess is that none of the CAGW advocates have ever used a compass to navigate from Point-A to Point-B in a forest over a distance of several miles.
They would soon figure out that measurement uncertainty in reading the compass would almost always cause them to miss Point-B where their life-saving infrastructure is. And that’s not just because of random error in reading the compass. Betcha none of them know what the systematic bias is that goes with doing such navigation.
If I read this correctly your title says -Giss Annual Anomaly.
Looking at the graph, this shows in 2000 there was an annual anomaly of 0.5°C. And, by ~2030, the annual anomaly will reach 1°C per year.
Do you really believe that the global ΔT is growing at 0.5°C per year? If so, you have reinvented the hockey stick in spades.
“Do you really believe that the global ΔT is growing at 0.5°C per year?”
If by ΔT you mean the annual anomaly – absolutely not. That's absurd. That would mean 15 degrees of warming between 2000 and 2030. If you are eyeballing a change of 0.5°C between 2000 and 2030, then that would be a change of about 0.017°C per year, or 0.17°C / decade, which is in line with the current rate of warming. In fact the trend since 1979 is 0.19°C / decade.
But, and I can only keep repeating until it penetrates, the point of the model is not to predict the temperature in 5 years time. At best it will predict what the average temperature would be with an uncertainty for the individual year of around ±0.3°C. And to do that you would need to know the ENSO conditions for the year, along with all the other variables.
It’s not what I believe. It is what your graph shows. It is titled – GISS Annual Anomaly. That is year over year. 2001 is 0.6 higher than 2000. 2002 is 0.7 higher than 2001.
You need to label it as "GISS anomaly over a baseline temperature of XX°C".
Sorry, My fault. I keep forgetting how immensely ignorant you are. I should have remembered you were the one who keeps insisting an anomaly is a rate of change. It’s another one of those definitions that only appears in the Gorman dictionary.
I did try to get you to explain exactly what you meant by anomalies being a rate of change, but it never occurred to me that you literally thought it was the difference between one year and the next. I’m really sorry if I credited you with too much intelligence, assuming you along with the rest of the world would understand that Annual Anomaly meant the average of all the anomalies over a year.
Since 2017 USCRN has been on a cooling trend.
So the minimum temperature must be decreasing faster than the maximums are increasing.
So actually… It is getting COLDER.
Nope. USCRN since Jan 2017 shows +0.93F per decade warming.
You really do just make it up as you go along, don’t you.
2…
The Chief Trendologist has now weighed in, as required for any article with delta-T versus time graphs…the rest will follow suit no doubt.
Observe the subtle upvotes and downvotes, detached from context, happening throughout the day.
Oh noes, a down vote. Thoughts and prayers.
Bellman,
Why do you assume that all measurement uncertainties are random, Gaussian, and cancel out?
I don’t. Next question.
So you agree with Pat Frank when he said your chart lacked physical meaning? Additionally, do you concede that your earlier assertion about estimating measurement uncertainty using the variance of residuals from a best-fit line was incorrect?
How can I agree or disagree with Pat Frank when he hasn’t said what he thinks is physically impossible about the model? If it’s just down to those assumptions then I would have to disagree with him. I disagree with him on lots of other things so that wouldn’t be surprising, but I am hoping he has a better explanation this time.
Regarding your comments about Taylor’s use of linear regression to estimate measurement uncertainty, you are making all the same idiotic claims Tim Gorman keeps making. You just don’t understand the point of assumptions in a mathematical model.
Assuming something for the sake of simplicity does not mean you believe those assumptions to be correct under all circumstances. You can start with a reasonable but simplified assumption about Gaussian distributions, but then go on to make different assumptions. With Monte Carlo methods that has become much easier.
My model assumes Gaussian distributions, but I’ve also used Student’s t and skewed normal distributions. It makes very little difference to the result.
Bellman,
Monte Carlo simulations are meant to model systems where uncertainty is represented by random variables. However, in the case of air temperature measurements, uncertainty predominantly arises from systematic factors.
Then you need to identify what those factors are. Not just lump them into implausible monthly uncertainties.
Rejecting data just on the assumption that there might be an unknown systematic factor, just allows you to reject any data you don’t like.
I’ve said many times that it’s possible, in fact almost certain, that at least some data will have systematic effects that will affect the trend. That follows from how much the trends change between different data sets and versions. That’s one reason why I say the claim that I think all uncertainties are random is a lie.
But my approach is to take each data set as is, whilst acknowledging that different data sets will lead to slightly different conclusions.
One factor that has been explained to you before is calibration drift. It causes a consistent deviation in the measurements over time. When you calculate the mean of these biased measurements, the mean will also be biased and not accurately represent the true mean of the underlying variable being measured.
It also means any anomalies calculated from those results will be inaccurate as well.
I don’t know that is necessarily true. But remember, this discussion is not about measurements aggregated from only a single instrument. It is about measurements aggregated from multiple instruments. A systematic factor in the context of a single instrument can behave as a random factor in the context of multiple instruments. This happens because the systematic factors are different for different instruments and thus form a probability distribution. This is an example of a scenario that JCGM 100:2008 is referring to in section E.3.6.
So, if we have 50 thermometers all experiencing calibration drift, are you suggesting that 25 would drift positively and the other 25 negatively, effectively cancelling each other out? That seems improbable.
Maybe. Maybe not. I’m speaking more generally though. In reality there is going to be a component of the individual systematic biases that is common among all instruments and a component that is different. It’s only the component that is different that would behave randomly in the context of multiple instruments. [Hubbard & Lin 2002] provide a real world quantification of this. For example, each ASOS and CRS instrument has its own unique systematic bias which forms a reasonably symmetric distribution (less so for CRS), but there is a common component of each type that causes ASOS to read about 0.25 C lower than a CRS broadly speaking. Converting to anomalies would remove part of the +0.09 C and +0.34 C bias relative to the standard for ASOS and CRS respectively and thus resolves some of the lingering systematic bias that may contaminate a spatial average. But there is still an issue in the temporal domain in which a station may transition from CRS to ASOS, creating a downward breakpoint in the temperature. Do this over a long period of time for many stations and you introduce a low bias in the assessed trend of the dataset. Dealing with these systematic biases that propagate through to the final result is difficult at best.
Lots of hand-waving, to smokescreen over the usual nonsense.
I read bdgwx’s citation, and I don’t know about your stance.
I think Hubbard and Lin’s reliance on solar radiation and wind speed data from a single location during a single summer in a controlled environment restricts their model’s broader applicability.
Especially when you are considering diverse instruments measuring in uncontrolled environments across varied physical terrains, each subject to different climates like Arctic cold or Saharan heat.
This is his abject ignorance of uncertainty showing — uncertainty is what is unknown, and unknowable. He has no way of knowing if this is true or not because each measurement system has its own unique measurement uncertainty.
They need systematic uncertainties to cancel in order to justify the tiny uncertainties they claim for air temperature averages, so they are forced into attempting to explain how it happens. Thus the hand waving about how it might occur.
This is not sound metrology practice. Combining unknowns cannot increase knowledge where there is none.
I agree with you that Hubbard and Lin 2002 is not the be-all-end-all text on temperature measurement bias. Far from it. I cite it only as a demonstration of a fundamental concept. That concept being that instrument bias is a combination of a component that is different for different instruments and a component that is the same across some instruments.
Now you are backpedaling — clearly you push this systematic –> random cancellation nonsense just like Stokes, BOB, etc., as justification for ignoring all systematic uncertainty.
It goes right along with the Fake Data practices you endorse.
Yep. It appears that most of the CAGW fellas are blackboard geniuses with little real world experience in measuring things.
As in — none at all.
What do you mean by that?
Converting anomalies does not remove any bias. Uncertainty is additive.
I mean that it depends on the nature of the bias. If in the context of multiple instruments the bias is different for each instrument (forms a probability distribution) then you get some cancellation. If, however, it is the same you don’t get any cancellation. In practice the correlation r(a,b) between two instruments a and b is neither 0 nor 1 so it ends up being a blend. Note that my use of the correlation r is consistent with the definition and usage in JCGM 100:2008.
Converting to anomalies definitely removes some bias. Simple algebra is all you need to prove it. Let Mi be a measurement, Ei be the random component of error for Mi, and B be the systematic component of error that is invariant of i. The true value is related to the measurement via Mi = Ti + Ei + B. The anomaly baseline is Ab = sum[Mi] / N. Then the anomaly itself is Ax = Mx – Ab. Using substitution we have Ax = (Tx + Ex + B) – (sum[Ti + Ei + B] / N). Notice that we can factor out the B terms resulting in Ax = Tx + Ex – sum[Ti + Ei] / N + (B – B). And finally notice that the two B terms cancel leaving the anomaly with only the random component of error.
The cancellation of bias via anomalization is powerful because we don’t even need to know what B is. But keep in mind this only applies to a bias that is invariant and thus can be algebraically represented with the single term B. This is why I say only some of the bias cancels.
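To make the algebra concrete, here is a minimal Python sketch (all numbers are hypothetical, chosen only for illustration) showing that an offset B that is identical for every measurement drops out of the anomalies while the random Ei terms remain:

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical numbers purely for illustration.
T = 15 + 2 * np.sin(np.linspace(0, 2 * np.pi, 12))   # "true" values Ti
E = rng.normal(0, 0.2, 12)                            # random error Ei
B = 0.7                                               # invariant systematic bias B

M = T + E + B                 # measurements Mi = Ti + Ei + B
Ab = M.mean()                 # anomaly baseline Ab = sum(Mi)/N
Ax = M - Ab                   # anomalies Ax = Mx - Ab

Ax_no_bias = (T + E) - (T + E).mean()   # same anomalies with B removed entirely

print(np.allclose(Ax, Ax_no_bias))      # True: the invariant B drops out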
That is a courageous assumption, Minister.
And he has no way of quantifying the fraction.
It’s not an assumption. It’s the way the algebra works out. If the bias isn’t invariant then you have to use a term with a subscript like Bi meaning Bi for each i could be different and thus would not cancel at the end like how it would if it were invariant.
Yep. It only works if all the Bi are known and identical.
With temperature anomalies, you’re assuming that the systematic component across a succession of thermometers of different types is time-invariant at the scale of centuries.
Being known isn’t the issue. The algebra doesn’t care if it is known or not. The issue is in regards to whether they are identical. Also, there are different techniques other than anomalization, like pairwise homogenization, for handling biases when they might not be identical from measurement to measurement.
Yes and no. Yes, for trivial anomalization. No, for the more advanced anomalization where breakpoints are identified first and then anomalies are formed per breakpoint like what Berkeley Earth does. But even trivial anomalization has a benefit because any shared bias whether known or not cancels out. The rub is that shared bias is not always the dominant source of bias.
Fair enough. I shot from the hip, there.
“Fair enough. I shot from the hip, there.”
Now I remember what I had in mind.
How do you know it’s identical if you don’t know what it is?
That’s how it is defined in the model. Mi = Ti + Ei + B. Ei is the component of error that is different for each Mi and B is the component of error that is identical. Note that Ei + B is the total error in the measurement Mi.
Superb non-answer!
You have no way of knowing the fraction of B that cancels against that which does not.
And once again, error is not uncertainty…
Well, that isn’t circular, is it?
I don’t think so. What is it exactly you are challenging with the equation?
You don’t know how much cancels if you don’t know what B is.
All you know is that some undefined quantity (B) cancels, and that B is the undefined quantity which cancels.
Essentially, it is an unquantifiable constant which didn’t need to be in the equation in the first place.
Knowing B isn’t always the point. The point here is that converting to anomalies can remove bias whether it is known or not.
“With temperature anomalies, you’re assuming that the systematic component across a succession of thermometers of different types is time-invariant at the scale of centuries.”
What converting to anomalies is doing is reducing the apparent dispersion of average temperatures due to sampling differences.
Ideally, the same set of stations would be used for each time period, in which case absolute temperatures could be used.
p.s. Converting to anomalies is not removing bias, it is offsetting the base temperature for each station by a site-specific offset to bring them to a common zero.
Even if possible (it’s not) there is still an issue because that same set of stations still uses different instrument types. So comparing them in absolute terms is still contaminated by the bias arising as a result of the type of instrument.
That argument seems like semantic pedantry to me. After all, bringing them to a common zero removes the error (aka bias) between them.
Pure bullshit.
You only hope and wish this were so.
“Even if possible (it’s not) there is still an issue because that same set of stations still uses different instrument types. So comparing them in absolute terms is still contaminated by the bias arising as a result of the type of instrument.”
Your entire post is based on the assumption that all stations of the same type have the same bias. They don’t. You can’t eliminate the bias by using anomalies.
Again, ONE MORE TIME.
Calibration drift is a time function, an environmental function, and a manufacturing function. If all identical stations were installed at the same time in a calibrated manner then they will almost certainly drift in the same direction as all of the electronic components suffer heat effects from being used. The *amount* of drift will be different because of other factors such as different microclimates.
This means you CAN’T cancel out the bias and it doesn’t matter if you use anomalies or absolute temps.
The baseline will contain more accurate data from early in the timeline. Later data will have an increasing bias due to calibration drift.
How do you tell whether the change in the measurement is from calibration drift or from environmental change (e.g. global warming)?
Answer? YOU CAN’T.
Temp measurement:
Day 1: Temp = 20C
Day 366: Temp = 21C
Day 733: Temp = 22C
Total change = 2C
Anomaly compared to initial conditions:
Day 1: Anomaly = 0
Day 366: Anomaly = 1C
Day 733: Anomaly = 2C
You get a 2C difference whether you use the absolute temps or the anomaly.
How much of that 2C is from calibration drift and how much from global warming?
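A tiny Python sketch of the same point (the drift and warming rates are hypothetical): a pure-drift world and a pure-warming world produce identical readings, and therefore identical anomalies, so the data alone cannot tell them apart.

years = [0, 1, 2]
drift_per_year = 1.0      # calibration drift in C/year (hypothetical)
warming_per_year = 1.0    # real warming in C/year (hypothetical)

world_a = [20 + drift_per_year * y for y in years]    # drift only, no warming
world_b = [20 + warming_per_year * y for y in years]  # warming only, no drift

print(world_a)   # [20.0, 21.0, 22.0]
print(world_b)   # [20.0, 21.0, 22.0] -- identical readings, identical anomalies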
But the measurement systems will be operated in diverse temperature environments, so the Type B uncertainties due to operating temperatures not being equal to the calibration lab temperature will all be different.
If the operating temperatures are different then the amount of calibration drift over time will be different as well. Meaning for any specific instrument the reading could be anywhere in the uncertainty interval – or actually even outside it. The Type B uncertainty intervals are not physical boundaries.
Exactly, which is why they often are engineering judgements.
Good, grasshopper.
Oh, dear. We needn’t bother with Kelvin or Rankine, then.
Changing to anomalies simply can’t remove systematic bias. That would require that the systematic bias being propagated in the baseline is the same as the systematic bias in the current absolute value.
You can’t possibly know either systematic bias.
You are espousing a religious belief, an article of faith that is part of your dogma concerning climate science. “All measurement uncertainty is random, Gaussian, and cancels.”
Climate science has somehow figured out how to know the unknowable. Amazing, isn’t it?
Your model is a perfect example of one that is useless in the real world.
You know neither Ei nor Bi so you can’t determine how much cancellation happens.
You can’t just assume that B, i.e. the systematic uncertainty, is identical in different measurement devices.
Your total lack of experience in the real world is showing!
Glaring, it is.
At a bare minimum, that introduces threshold selection issues.
You also need to know if the breakpoint made the temperature readings more or less accurate. What happens if the breakpoint happens because an up-wind air conditioning unit is removed? Assuming the new temps are less accurate is unjustified.
I agree that it isn’t a perfect fix if that is what you mean here.
The breakpoint threshold can lead to false positives or false negatives.
The only way to really tell is by Bill’s approach of painstaking examination of the data and metadata. Even that can miss something due to incomplete records.
Incomplete records is the biggest issue. That’s why most datasets have moved to pairwise homogenization or similar methods.
It seems to mostly be a matter of convenience. Just look at the amount of time Bill has spent working back through a subset of Australian station records.
Breakpoints and pairwise homogenisation (yuk) can give an indication of what to investigate in greater detail.
A few missing records will *NOT* appreciably affect the average. It might make a change in the SEM over a small sample like a monthly average. But the average should remain pretty much the same value. This is just an excuse to justify “adjusting” the records in a subjective manner.
Missing records for site moves or instrument changes would tend to muddy the waters.
“Missing records for site moves or instrument changes would tend to muddy the waters.”
Why?
Samples, by definition, are not populations. That means there will *always* be missing records in any sample. That’s the purpose of finding the standard deviation of the sample means, to estimate how precisely you have located the population mean.
What actually muddies the waters is the measurement uncertainty. Climate science always wants to throw that away by assuming it is random, Gaussian, and cancels. *THAT* muddies the waters so badly that it all becomes a cloudy crystal ball that can’t really tell you anything.
Includes loading up the historic records with Fake Data, a practice that bgw promotes.
If you don’t know a site moved from the post office to the grass airstrip, or LiG thermometers were replaced by electronic, there is some quite important information missing.
There are also long-term UHI change effects which are likely to be extremely difficult to track down.
Effectively, you’re either measuring something different, or the same thing under different conditions.
“If you don’t know a site moved from the post office to the grass airstrip, or LiG thermometers were replaced by electronic, there is some quite important information missing.”
This isn’t quite the same thing as “missing records”. This is more to do with calibration and accuracy than with missing records.
Missing records really shouldn’t affect anything as far as a global average temperature is concerned.
If a few stations drop out, even without replacement by new or different stations, the sample is large enough that the average shouldn’t be significantly different. The sampling error might change a tiny bit but if the actual value changes very much then there is something wrong with the methodology that “infilling” or “homogenization” simply isn’t going to fix.
The specific reference was to records of site moves or instrument changes rather than missing data for any particular periods.
Major changes such as those probably should be handled by defining them as a new “site”, while recalibrating a thermometer or swapping it out for a freshly calibrated instrument may not need this.
With 20/20 hindsight, the annual calibration reports for the instruments from each site could have provided some insight into their drift characteristics.
“With 20/20 hindsight, the annual calibration reports for the instruments”
What annual calibration reports? The annual field maintenance checklist for CRN stations does *NOT* include calibration of the temperature sensors, only the rain gauge.
The CRN network depends on identifying “breakpoints” in the temperature record to trigger recalibration or replacement of the temperature sensors. A gradual calibration drift due to microclimate changes or station component drift won’t trigger a “breakpoint”. Thus it is impossible in older stations to separate out temperature reading variations due to “global warming” from growing measurement bias.
That’s why they could have been handy 🙂
This is pure malarky. In calculating a “GLOBAL” average temperature thousands of measurements are involved. Whether the data set is considered to be a sample or a population, a few missing records will *NOT* appreciably affect the average value. If they *do* so for some reason then the entire data set is screwy!
Even a few missing records at a single location shouldn’t affect a long term average. Over 30 years, a missing year should *not* appreciably affect the average. The number of data elements is just one less.
This is just one more excuse from climate science useful to justify “adjusting” the record IN A SUBJECTIVE MANNER.
Pair wise homogenization is a joke. All it does is increase the uncertainty in the value you calculate.
You refuse to do the math outlined in the GUM. Every measurement has uncertainty even if it must be estimated as a Type B uncertainty. The uncertainty is based upon a probability distribution, and the statistical intervals of that distribution determine the uncertainty whether you are using a Type A or Type B evaluation. They are all considered to be standard deviations.
What you end up with is a random variable:
T_homogenized = (T_station1, T_station2, …, T_stationN)
The mean of the random variable is:
μ_T_homogenized = (T_station1 + T_station2 + … + T_stationN) / N
and its standard deviation is:
σ_T_homogenized = √[ Σ(Tᵢ − μ_T_homogenized)² / (N − 1) ]
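As a minimal numeric sketch of those two quantities (the station values below are purely hypothetical):

import statistics

# Hypothetical station values feeding the random variable above
t_stations = [14.8, 15.1, 15.4, 14.9, 15.2]

mu = statistics.mean(t_stations)       # mean of the random variable
sigma = statistics.stdev(t_stations)   # sample standard deviation, 1/(N-1) form

print(round(mu, 2), round(sigma, 3))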
Get through your head that you need to define WHAT THE MEASURAND IS at each and every stage. Simply calling something an “average” is high school type science.
Each measurand is a random variable containing the temperatures used in the definition of the measurand.
Some of the measurands (random variables) are:
T_max
T_min
T_daily
T_monthly
T_baseline
Yes, even a single T_max or T_min is a measurand treated as a random variable when doing a Type A uncertainty evaluation. If you don’t have repeatable measurements of the same thing, i.e., T_max, then you must use a Type B uncertainty.
If you can’t recognize that even a measurement of a single temperature is a random variable with an uncertainty interval around it that is defined by a probability distribution, then simply tell folks that you don’t believe the GUM, NIST, and ISO have anything to offer in terms of climate science’s treatment of temperature data.
Bellman has come out of the closet and said essentially this, he doesn’t believe the GUM.
That would not surprise me. The rest of the world is out of step with the real scientists I guess.
What is it that crazy people say about the world?
He is unwilling/unable to comprehend the basics of the subject, especially the difference between error and uncertainty, yet is somehow qualified to tell the JCGM they are full of it.
“he doesn’t believe the GUM.”
Such a weird attitude. The GUM isn’t a religious text. It’s possible to read it critically.
I’m not sure what heretical views you think I’ve confessed to. As always an actual quote would help. I have no real objections to the GUM, I just think it’s ambiguous at times, probably coming from the desire not to upset anyone.
The person who really seems to disagree with the GUM approach is Pat Frank. He wants all the probabilistic definitions replaced with “zones of ignorance”. Remember, he accused me of worshiping at the altar of the GUM.
“The person who really seems to disagree with the GUM approach is Pat Frank. He wants all the probabilistic definitions replaced with “zones of ignorance”.”
Your lack of reading comprehension skills is showing again.
Pat F. is *NOT* saying the uncertainty interval is a “zone of ignorance” but a “zone of the Great Unknown”.
————————————————-
ignorance /ĭg′nər-əns/
noun
1. The condition of being uneducated, unaware, or uninformed.
2. The condition of being ignorant; the lack of knowledge in general, or in relation to a particular subject; the state of being uneducated or uninformed.
3. A willful neglect or refusal to acquire knowledge which one may acquire and it is his duty to have.
———————————————————
The fact that one *can* state an uncertainty interval means there is *NOT* a lack of knowledge in general or in relation to a particular subject. The fact that you can state an uncertainty interval means you *are* educated, aware, and informed about the techniques for making measurements.
The only one present in this thread that demonstrates willful neglect or refusal to acquire knowledge seems to be you. You absolutely refuse to study accepted texts on how to handle metrology and depend on just cherry picking things that you think appear to confirm your mistaken beliefs. You simply wouldn’t survive as an engineer, surveyor, mechanic, machinist, etc whose livelihood, and even civil and criminal liabilities, depend on proper application of metrology.
Yep!
Yep!
And tries to paper over his ignorance with long-winded rants and arguments.
At this point I cannot help but suspect he is arguing for the sake of arguing alone. Or on the other hand he could be a True Believer in CAGW. The description used by paul c. to describe Richard G. (Janus) might well be applicable.
“Pat F. is *NOT* saying the uncertainty interval is a “zone of ignorance” but a “zone of the Great Unknown”.”
“Ignorance” is his word. For someone who likes to throw around “lack of reading comprehension” as an insult, you like to demonstrate your own quite a lot.
Start with his pamphlet that started all this: LiG Metrology, Correlated Error, and the Integrity of the Global Surface Air-Temperature Record
Then look at some his comments to me in the WUWT post.
And here’s a comment from a Tim Gorman in the same thread.
But they aren’t the same. The current month can be years removed from the base line period. Microclimate changes can change drastically during that time. Even over the 30 year baseline period microclimate changes can occur.
If we were talking about 0.01 uncertainty with respect to absolute temperatures, one could consider it negligible. With anomalies, that is significant.
You really need to make an uncertainty budget and show us what you use for various categories.
Here is a link to a sample budget. Which categories would you include and at what value?
https://ibb.co/fC7L6VK
“ The issue is in regards to whether they are identical.”
Bi is a function of time for each individual measurement device.
“Also, there are different techniques other than anomalization, like pairwise homogenization, for handling biases when they might not be identical from measurement to measurement.”
These other “techniques” do nothing but spread the measurement uncertainty around. How do you KNOW that Component-A and Component-B in a pair-wise homogenization aren’t both inaccurate? You can’t eliminate inaccuracy by homogenizing two inaccurate measurements! How do you know that Component-A isn’t dead-on accurate while Component-B is the inaccurate one? You just spread the inaccuracy of Component-B onto Component-A.
“breakpoints are identified first and then anomalies are formed per breakpoint like what Berkeley Earth does.”
This only works if you KNOW what caused the breakpoint! BE never does KNOW. That breakpoint might actually be the elimination of environmental contamination such as an up-wind air conditioning unit! Making the actual temperatures MORE accurate, not less.
“But even trivial anomalization has a benefit because any shared bias whether known or not cancels out.”
It can only cancel out if one is negative and the other is positive. That’s not likely in most physical objects.
How come you won’t answer the question of whether metal rods expand or contract when heated?
And increase it!
It would expose how he denies reality.
Yet you still can’t break away from incorrect thinking about the subject:
Error is not uncertainty, true values are unknowable and of no utility.
And you have absolutely no way of knowing how much, so what good is this?
Even doing a root-sum-square addition is making an assumption about the random error in a measurement, typically that it has a zero based distribution with some negative and some positive error. A non-symmetric distribution of error will give less cancellation than a symmetric one. How many temp measuring devices have “random” error anyway, at least in unit or tenths digit?
Not any modern ones…
“I mean that it depends on the nature of the bias. If in the context of multiple instruments the bias is different for each instrument (forms a probability distribution) then you get some cancellation”
This is only true if you have a bias distribution around a zero center.
Again, do similar metals do different things when heated? Do some metal samples expand and some contract when heated?
If they all expand or all contract then why would one assume that you would get a zero based bias result from different stations? The bias might be different but it can easily be in the same direction – meaning no cancellation at all.
Random does *NOT* mean zero based. It is only when a random distribution has both positive and negative components that you can get any cancellation.
How many current electronic temp measuring devices have significant *random* uncertainty, positive for one reading and negative for another? At least random in the tenths digit?
He won’t understand either question.
This whole procedure hinges on knowing the true value of Ti. You don’t know that. That is why the GUM says:
Read Notes 1 and 2 carefully. See “perfect measurement” and “indeterminate”. That is why the term “true value” is not used. You don’t know its actual value.
Here the GUM gives another description.
You are using old terms that aren’t even covered in metrology after the issuance of the GUM.
Give up trying to show that bias cancels out when using anomalies between stations. It doesn’t. Uncertainties are standard deviations, including things like drift, calibration, environment, reproducibility over time, etc. When calculating anomalies these are additive by RSS.
Random Variable 1 -> RV1
Random Variable 2 -> RV2
μ_anom = μ_RV1 – μ_RV2
σ_anom = √(σ_RV1² + σ_RV2²)
That is the long and short of anomaly uncertainty.
When you average anomalies, you need to define the measurand as a random variable containing the values you are using. The entries in the random variable will each have their own uncertainty which should be added to the standard deviation (standard uncertainty) of the random variable.
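A minimal sketch of that RSS combination, with purely illustrative values and uncertainties for the two random variables:

import math

# Purely illustrative standard uncertainties for the two random variables
mu_rv1, sigma_rv1 = 15.3, 0.5   # e.g. current monthly average and its uncertainty
mu_rv2, sigma_rv2 = 14.9, 0.3   # e.g. baseline average and its uncertainty

mu_anom = mu_rv1 - mu_rv2                            # μ_anom = μ_RV1 - μ_RV2
sigma_anom = math.sqrt(sigma_rv1**2 + sigma_rv2**2)  # RSS: √(σ_RV1² + σ_RV2²)

print(round(mu_anom, 2), round(sigma_anom, 2))       # 0.4 0.58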
See Jim Gorman’s comment.
JG’s post doesn’t invalidate anything I’ve said and it contains incorrect statements. For example, you don’t have to know the true value Ti to know that the B terms cancel. That’s just the way algebra works. It may be beneficial to know that the Gormans have a history of making algebra mistakes, some so trivial that elementary age students could spot them, so it would be prudent to not blindly accept the content contained in their posts. [1] [2] [3] [4] [5] [6] [7] [9] [10] [11] [12] [13] [14] [15]
Lookie, bx-whatever posts his inane database of “errors” again.
How lame.
Only if you don’t understand what he wrote—you still can’t make it past error not being uncertainty.
Your use of “B” is as a constant. Ti IS and must be a true value. In the model you are using, Ti is the true value whereby an error term and a systematic value are added to it to achieve a distribution with values that cancel.
If Ti is not a constant, then each Ti can vary and there will be numerous distributions which will not allow cancellation.
NIST TN 1900 explains this as an observation equation.
A “parameter” references a “statistical parameter”, i.e., a mean. A single mean of a single distribution.
You will note that this example does not strictly follow the GUM in that it uses a true value and errors.
The real value is in learning that the variance of the values in the random variable that defines the measurand is used to find an uncertainty value.
You should also note that the example uses the assumption that the observations are made under repeatable conditions. This doesn’t follow NIST recommendations in its own Engineering Statistics Handbook. The appropriate condition is reproducibility conditions.
The formula should be Mi = Ti + Ei + Bi if you are measuring different things using different devices. Each Bi can be different and doesn’t have to be Gaussian (symmetric). Therefore cancellation is not a valid assumption to make.
If you are measuring different things then how do you know what the B values are let alone know they cancel?
Your formula should be
Mi = Ti + Ei + Bi
Ei and Bi are independent and neither has to be Gaussian (symmetric).
Word salad that is meaningless in the real world.
“I’m speaking more generally though. In reality there is going to be a component of the individual systematic biases that is common among all instruments and a component that is different. It’s only the component that is different that would behave randomly in the context of multiple instruments.”
Utter and total malarky. This only shows that you have *NO* experience in the real world.
What happens to metal when it is heated? Do some pieces of similar metal expand while others contract? Are the components in temp measuring stations made of similar metals, e.g. SMD components on circuit boards?
If you believe that similar metals, or any substance actually, behaves differently under the same environments then you truly need some real world experience.
“For example, each ASOS and CRS instrument has its own unique systematic bias which forms a reasonably symmetric distribution”
While the individual bias may be different it is quite likely that they are all in the same direction!
“there is a common component of each type that causes ASOS to read about 0.25 C lower than a CRS broadly speaking.”
Which you seem to realize but are unable to visualize in the real world. If *all* ASOS stations tend to read lower then there is either a calibration problem or the ASOS stations tend to develop systematic biases in the negative direction – meaning they can’t CANCEL! You can’t average readings that are all too low and get an accurate average!
“Converting to anomalies”
Malarky! Anomalies are based on subtracting two values that differ in time impacts. For instance, the value drift of thin-film components is positive over time from heat generated during operation. The drift at 800 hours is less than the total drift at 10,000 hours.
Assume all stations are installed at the same time and are perfectly calibrated. The data taken shortly after installation will be more accurate than later data. When the baseline is calculated using accurate older data the average will be closer to accurate than later, individual data. Thus the current anomaly will appear larger because of calibration drift.
How do you distinguish a growing anomaly over time caused by calibration drift from a growing anomaly due to *real* temperature change?
What is truly ironic is that by admitting there is at least some systematic bias that does not cancel, he is in effect admitting that their #1 basic assumption is invalid.
If he were honest, he might see that something else is needed beyond error and true values.
Every single uncertainty factor I’ve seen in climate science is based on the SEM, i.e. how precisely they can locate a population average. That tells you *NOTHING* about the accuracy of the population average. To know the accuracy of the average you *must* propagate the measurement uncertainties and you can’t do that if you assume that all measurement uncertainty is random, Gaussian, and cancels as climate science does.
You can distinguish if you’re measuring the same variable repeatedly with the same instrument, but atmospheric air temperature is constantly changing. Do I have this right?
Yes, the sample size is always exactly one, multiple measurements are impossible.
Yes. Temperature measurements are not done under repeatable conditions.
From the GUM:
closeness of the agreement between the results of successive measurements of the same measurand carried out under the same conditions of measurement
NOTE 2 Repeatability conditions include:
— the same measurement procedure
— the same observer
— the same measuring instrument, used under the same conditions
— the same location
— repetition over a short period of time.
NOTE 3 Repeatability may be expressed quantitatively in terms of the dispersion characteristics of the results. [VIM:1993, definition 3.6]
This is the old multiple measurements of the same thing with the same device by the same person, in a short time.
This just doesn’t exist, so an estimated Type B uncertainty must be used for repeatability uncertainty. NOAA has this in their manuals. WMO also has an uncertainty for different classes of stations.
You are correct in that you are continually measuring a different thing. But if the calibration drift is always in one direction then how do you separate that from what the change in the measurand actually is?
Suppose that after initial calibration the calibration drift is 1C per year. Assume the temp you measure on Day 1 is 20C. 366 days later you measure 22C. Is the anomaly actually 1C? Actually 2C? Something totally different because of microclimate changes over the year?
It doesn’t matter if the drift is over a day, a year, a decade, or longer. How do you separate out the time varying systematic bias from the measurand change?
Bdgwx, bellman, stokes, etc all want you to believe that all the uncertainty is random, Gaussian (i.e symmetric), and that it cancels. It doesn’t cancel and what it actually is can’t be known, just as the true value can’t be known. That is the purpose of an uncertainty interval. The uncertainty interval are those values that can be reasonably assigned to the measurand.
This is exactly what he is claiming, whether he realizes it or not.
AFAIK, Stokes is the originator of this systematic –> random transmogrification nonsense, and the rest of them all picked it up and ran with it.
Is this based on your vast experience with real-world metrology?
Not this nonsense, again.
Another mushroom cloud of greasy green smoke.
The biggest problem is you trying to use statistics to explain measurement uncertainty. The reason some statistical analysis is used at all is to provide a STANDARD method of describing uncertainty intervals that is internationally accepted, based upon well-known probability distributions. It allows people making and USING measurement data to understand exactly what other people have done when observing and analyzing measurements.
These statistics-based intervals have nothing to do with traditional sampling of populations. They are not probability, grouped, or any other type of sampling. The GUM makes this clear.
The GUM also defines what should be used for standard uncertainty.
Uncertainty is NEVER reduced by averaging. Measurands are defined as random variables. The data points in a random variable have two uncertainty components: the measurement uncertainty of each individual data point, and the dispersion (variance) of the random variable itself.
These uncertainties add via RSS. In other words, the variances add, always. RSS provides some cancellation and that is all that can be expected.
For those who want to divide the combined uncertainty by N, this false narrative needs to be disabused. The partial derivative (∂f/∂xᵢ) does not evaluate to (1/N). This partial derivative is used to find a sensitivity value for how much each xᵢ affects the total combined uncertainty. This can be seen in the formula for using relative uncertainties with a linear measurement model of similar dimensional measurements. The formula is:
Δz/z = √[(Δx₁/x₁)² + (Δx₂/x₂)² + … + (Δxₙ/xₙ)²]
which becomes
Δz = z·√[(Δx₁/x₁)² + (Δx₂/x₂)² + … + (Δxₙ/xₙ)²]
where
Δz is the combined uncertainty.
z = μ of the random variable of the measurand
x₁, x₂, …, xₙ = values of the measurements
For a more comprehensive example see:
(PDF) Measurement Uncertainty: A Reintroduction (researchgate.net)
Example of “Volume of Storage Tank”
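As a rough, purely illustrative sketch of evaluating the relative-uncertainty formula quoted above (the values and uncertainties are hypothetical, and whether this particular form is the right one for a given measurement model is exactly what is being argued here):

import math

# Hypothetical values and uncertainties, only to show evaluating the formula above
x = [20.1, 19.8, 20.4]    # measurement values x1..xn
dx = [0.5, 0.5, 0.5]      # their uncertainties Δx1..Δxn

z = sum(x) / len(x)                                        # taken here as the mean
rel = math.sqrt(sum((d / v) ** 2 for d, v in zip(dx, x)))  # Δz/z
dz = z * rel                                               # Δz, combined uncertainty

print(round(z, 2), round(dz, 2))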
Not totally true. A single measurement at a single station is probably dominated by systematic effects. Things like shelter deterioration, underlying grass changes, fan speed for aspiration, adjacent structures and land use, drift in calibration, etc.
When one begins to define measurands that consist of random variables populated by multiple measurements such as Tmax_monthly_average there are also day-to-day variations in the measurand. It is the big reason that NOAA has ±1°C for ASOS and ±0.3°C for uncertainty intervals of single measurements.
NIST TN 1900, NIST Engineer’s Statistical Handbook, and GUM H.5.2 show how these day-to-day variations contribute to the combined uncertainty.
If I have a measurand of Tmax_monthly_avg_of_multiple_stations, these same criteria apply. Each monthly average from each station contributes to the uncertainty of the total average, as does the uncertainty calculated by variance of the entire random variable.
Anomalies are no different. They inherit the combined uncertainties of the random variables used to calculate them. When averaging anomalies those uncertainties add along with the uncertainty of the entire random variable.
That is one reason why anomalies like 0.015 ±4.5 are meaningless. That is even disregarding the fact that significant figures, which preserve the integrity of the precision of the actual measurements, are totally ignored in climate science.
No one has ever had the temerity to explain how one can measure the same mass to the nearest gram 100 times and end up with an anomaly value to the milligram. It is only because an average may come out to be 2.0123456789, which allows an arbitrary choice of precision. So an anomaly may be (2 – 2.0123) = -0.0123. Average all the anomalies and you can claim to know the mass as 3 ±0.01. So wrong!
Your trends are really meaningless when attempting to determine what will occur outside the range of time you are using.
The real use of linear regression in the physical sciences is to determine if there is a linear functional relationship between an independent variable and the dependent variable. In other words, if I choose a value of the independent variable, will the functional relationship provide an accurate dependent variable value?
Unless you think a date interval will provide an accurate value of temperature, a trend based on time is pretty much useless in determining the cause of the dependent variable.
That is why pauses or even decreases in temperature over periods of time, along with increasing values of CO2 concentration, show that CO2 is not the determining factor of temperature.
Here is a test for you to do. Look at your trend made in December for the ΔT’s we have experienced in 2024. What is the percentage difference between what the trend showed versus the actual in each month in 2024?
“Your trends are really meaningless when attempting to determine what will occur outside the range of time you are using.”
Just as well I’m not doing that then. But be sure to point this out to Monckton should he return to his reality meter.
“The real use of linear regression in the physical sciences is to determine if there is a linear functional relationship between an independent variable and the dependent variable.”
You still don’t understand the concept of a stochastic model. In most real world uses the point is to determine an estimate of the mean value for a given set of independent variables. The model always includes a random error term.
“Unless you think a date interval will provide an accurate value of temperature, a trend based on time is pretty much useless in determining the cause of the dependent variable.”
You keep ignoring the obvious alternative use of a linear regression on a time series, which is simply to determine a rate of change. You know, like claiming there’s been no warming over a certain period.
“That is why pauses or even decreases in temperature over peiods of time along with increasing values of CO2 concentration shows that CO2 is not the determining factor of temperature.”
Which is the point where you demonstrate you just don’t understand how statistics work. You cannot demonstrate that something is not correlated, just that there is insufficient evidence to demonstrate the correlation. In the case of the pause there is insufficient information because you are using a very short time period, which can easily be rectified by using a longer period. You can also factor in additional factors such as ENSO, and avoid cherry-picking periods.
The trendologists don’t like me, they downvote on sight, heh.
You’re the only Tender Tissues doll who cries real tears about thumbs. You and the poorly nom-de-WUWT’d bnice should pay attention to those canonically aligned with you when they tell you that your whining detracts from their posts.
Another blob(tm) word salad, utterly devoid of meaning.
Are you going to whine some more about how I don’t post to other web pages to suit your delicate plumage?
It appears that the earlier half of the record is characterized by colder anomalies, while the latter half is characterized by more average anomalies.
Applying a best-fit line isn’t appropriate for a short time series like this one.
“It appears that the earlier half of the record is characterized by colder anomalies, while the latter half is characterized by more average anomalies.”
You mean it’s got warmer?
“Applying a best-fit line isn’t appropriate for a short time series like this one.”
Fair enough – so how do you prove the claim that there is no established warming trend?
The problem with CRN as it’s used here is the desperate need to imagine that it somehow proves there has been no warming. You can either claim it shows as much or possibly more warming than other data sets, or you can say there is insufficient evidence, owing to the fact it’s still less than 20 years of data. But claiming that it shows there has been no warming on the basis that it’s too short a time scale is just wishful thinking.
That is because THERE IS NO WARMING except from the El Nino bulge.
As you have shown many, many times, there is absolutely no evidence of any human causation.
You are flogging a horse that is now nothing but skeleton. !
And using El Nino events to do it.
Just keep using those El Niños and La Niñas to claim no warming.
Poor little child.
You KNOW you have no evidence of human warming of the atmosphere.
Just keep producing no scientific evidence.
Everybody laughs at your juvenile attempts.
And please, give up beating that horse skeleton. !!
I gave you evidence. You whined about it being the wrong sort. That’s all you have. Denying any evidence that doesn’t support your assumption that all warming can be explained by El Niños.
I’m the one trying to avoid El Ninos, to see what is actually happening when there isn’t one.
When you remove the strong El Ninos, and their associated step change…
…. There is basically NO WARMING AT ALL.
You are the one using them all the time to try and show human warming (aka “climate change”)…
… you are just too dumb and brain-dead to realise you are doing it.
By claiming that a very short trend line ending with a succession of La Niñas, somehow shows what happens when you remove ENSO.
You have zero evidence that El Niños cause “step changes”. If you had any understanding you would realise why that makes no sense physically.
The fact that you always resort to these childish rants rather than engage with the argument suggests you know you are wrong, but simply cannot admit it to yourself. You are just not worth arguing with.
So, still absolutely NOTHING but yapping. !!
No evidence, no anything.
There is approximate 36 years out of 45 that are near ZERO trend.
The mindless AGW-cult that you represent like to call them a pause.. and they HATE them.
Anyone who isn’t mentally blind can see from the data that there is a step change at every major El Nino.
If you had any basic knowledge, you would see that the step change makes total sense….. But you haven’t.
You have ZERO scientific argument !
The fact that you resort to your empty zero-evidence rants, shows that.
You are EMPTY… a ZERO… a NOTHING.
That poor dead horse has had enough .. give it up !!
“There is approximate 36 years out of 45 that are near ZERO trend.”
How to demonstrate you know nothing about statistics.
“There is approximate 36 years out of 45 that are near ZERO trend.”
I’ve spent most of my time here arguing that the pauses are figments of a desperate imagination.
“Anyone who isn’t mentally blind can see from the data that there is a step change at every major El Nino.”
If there’s one thing I’ve learnt about statistics, it is that just assuming something is true because it looks like it is going to lead you astray. Especially if the thing you see is what you want to see.
You can see step changes in UAH data, or you can see a continuous trend with fluctuations. If you want to claim the step-change model is the correct one you need to demonstrate that statistically, not just by shouting.
You also need to explain how your model works and why you think it’s plausible that the earth can just permanently warm in a year rather than just spike.
Now are you willing to debate this, or are you just going to continue ranting at me?
I have posted the graph MANY times that shows the near zero trends for MOST of the UAH data to be an absolute FACT.
You have shown, yet again, that you are mentally blind.
You have presented ABSOLUTELY NOTHING but bluster and conjecture.
Certainly nothing that could be even remotely construed as scientific evidence.
“…an absolute FACT.”
A statement that makes it clear you don’t understand statistics. There are no absolutes. The best you can do is show when something’s unlikely. And you do not do that by ignoring the uncertainty in your self selected pauses. I told you what the confidence interval was with the CRN data over your 7 year period. There is no way of telling whether the result could have come from an underlying cooling trend or a much larger warming trend.
” the thing you see is what you want to see.”
And you are totally blind to what you don’t want to see.
Inventing things that just aren’t there.
That is what zealots do.
The dead horse still isn’t moving, little child. !
“I’ve spent most of my time here arguing that the pauses are figments of a desperate imagination.”
And FAILED monumentally.. time and time and time again.
Always using El Nino events to create a trend… thus showing EXACTLY THE OPPOSITE of what you say you are trying to show.
It really is totally hilarious to watch.. !
And bellcurveman absolutely … must .. have … the … last … word.
I’ll repeat part of what I posted earlier.
In order to properly analyze what an underlying trend may be, one must carefully remove outliers. We ran into this all the time in the telephone company when provisioning adequate equipment in switching offices.
El Nino’s fall into the same category. The temperatures during an El Nino DO NOT inform one about normal changes on a long-term basis. Discarding the data from El Nino’s will provide a better view of underlying trends.
El Nino’s are outlier events from the long-term baseline changes.
“In order to properly analyze what an underlying trend may be, one must carefully remove outliers.”
I remember all the times you explained this to Monckton when he was using the outlier of 2016 to create a spurious pause.
Seriously though I don’t think there’s any “must”. Removing data just because it looks out of place can be dangerous, and you need a way of deciding exactly which data are the outliers.
Better options are to leave the outliers in and quote the uncertainty. The outliers increase the uncertainty. Or, do what I did above and factor the causes of outliers in as separate independent variables. Finally, make sure you look at a long enough period that outliers are mostly irrelevant.
You are on the correct track.
In order to properly analyze what an underlying trend may be, one must carefully remove outliers. We ran into this all the time in the telephone company when provisioning adequate equipment in switching offices.
Do you provision for a once-in-a-lifetime occurrence or even for a decadal occurrence? It was less expensive to develop strategies for curtailing non-essential calling during these events. We engineered using a Poisson distribution, which went out the window during these occurrences. The data from them was discarded in determining equipment quantities for normal high-day calling.
El Nino’s fall into the same category. The temperatures during an El Nino DO NOT inform one about normal changes on a long-term basis. Discarding the data from El Nino’s will provide a better view of underlying trends.
There are more outlier events than just El Niño.
Was the heat wave in ‘X month Y year’ caused by ridging due to meridional flow, or by zonal flow reducing cold air intrusions? These patterns of atmospheric circulation vary depending on the season.
Are these phenomena becoming more or less common? What if their frequencies are changing at different rates or even in opposite directions?
Simply fitting a best-fit line won’t help you understand the important phenomena that contribute to the climate of a particular region.
I used a method of eliminating outliers in a data collection experiment I was conducting (Tukey’s method?). Basically if the deviation of the value from the mean was more than a certain multiple of the standard error then it was eliminated (the multiple depended on the number of data points collected). As I recall, for the experiment I was conducting, points exceeding 3.5*SE were eliminated.
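Something like this minimal Python sketch illustrates that kind of screening (the cutoff of 3.5 standard errors and the data values are hypothetical; Tukey’s fences based on quartiles are a common alternative formulation):

import statistics

def drop_outliers(values, k=3.5):
    # Drop points whose deviation from the mean exceeds k standard errors,
    # a rough sketch of the screening described above.
    mean = statistics.mean(values)
    se = statistics.stdev(values) / len(values) ** 0.5
    return [v for v in values if abs(v - mean) <= k * se]

print(drop_outliers([20.1, 19.9, 20.3, 20.0, 26.5]))   # the 26.5 point is dropped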
The problem with the El Niño situation is that they constitute a large fraction of the pool.
Since 2000 there have been 8 El Niño years and 11 La Niña years, you’re basically talking about an oscillating system.
The whole universe is a collection of oscillations. People think the earth travels in a flat orbit around the sun, but it doesn’t. The sun pulls the planets through space and those orbits are actually spirals. The whole solar system experiences a new part of the universe every second of every day.
Hey, why do El Niños all have a step warming associated with them? What’s the physical explanation for this?
4…
What does this mean?
Quick maths lesson ?? In simple terms…
When you do a linear regression on a set of data, you can extract a term called a “P-value”…
This is essentially the probability of getting that set of data by pure random number generation.
A P value of less than 0.05 is usually considered a small enough probability to say that the data trend is not likely to be from random generation, hence is termed “statistically significant”
Shorter periods and large data ranges usually lead to higher P values.
This may not be a perfect definition, but as simple as I can manage it.
I could also go into upper and lower 95% marks.. which give an error range for the trend line… but I won’t.
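For anyone who wants to see it in practice, here is a minimal Python sketch of pulling a trend and its P-value out of a linear regression (the anomaly values below are made up purely for illustration):

from scipy.stats import linregress

# Made-up anomaly values, only to show extracting a trend and P-value
years = list(range(2005, 2015))
anoms = [0.02, -0.10, 0.15, 0.05, 0.20, 0.12, -0.05, 0.18, 0.25, 0.10]

res = linregress(years, anoms)
print(f"trend = {res.slope:.3f} C/yr, p-value = {res.pvalue:.3f}")
# p < 0.05 is the usual (arbitrary) cutoff for "statistically significant"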
+1. This is actually a reasonable answer. Don’t let it be said that I won’t acknowledge a good post even from a poster with a history of grossly incorrect content.
Honest question? Why are you obsessed with imposing a straight line on what is clearly random noise?
I’m not. But in this case it seems the best way of testing the claim that there is no sign of warming in CRN.
You are OBSESSED with denial of El Nino events and their effect…
….. even though you use then in your trend calcs all the time.
Where is the warming in USCRN apart from the El Ninos. ??
You STILL haven’t shown any.
Remember, all three data series show no trend from 2005-2016 and cooling from 2017 – 2023.
Everybody can see that.
Where is the evidence of any non-El Nino warming… let alone any human caused warming.
A TOTAL FAILURE on your behalf..
Something you should be totally used to by now..
I’m denying El Niños now? Pathetic. I’m always pointing out the dangers of relying on a short term trend whilst ignoring El Niños and La Niñas. You still ignore the fact that your evidence of no warming is based on ignoring the effects of those.
“I’m denying El Niños now? Pathetic.”
Yes it is pathetic of you.
So you now ADMIT that El Ninos cause warming
Yet that is what you have been DENYING all this time. !
MAKE UP YOUR MIND. !!
There is NO warming except from El Ninos in the atmospheric data…
… and you can not, and have not presented one bit of evidence of CO2 or human caused warming of the atmosphere.
Your whip is broken, so you can stop flogging yourself.
Should “weaking” be “weakening”?
Or is it an American spelling 😉
“It is a damn poor mind that can think of only one way to spell a word.”
― Andrew Jackson
chuckle. as I said.. if that is the way Americans spell it.. no probs.
They spell lots of things “in a strange way” 😉
Was just wondering if it was a typo.
I’m an American and I would spell it “weakening”.
Long Winded Larry takes thousands of words to say:
The temporary heat spike from the 2023 El Nino is fading away
and
Antarctica is not getting warmer
Took me two short sentences
Then Hamlin repeats his classic lie about USCRN, that he will never correct:
“NOAA’s latest USCRN Contiguous U.S. anomaly data (also updated through May 2024) do not support and in fact contradict climate alarmists flawed claims that the Earth is experiencing a climate emergency”
Larry “liar” Hamlin
USCRN has been rising at a +0.34 degree C. per decade rate from 2005 through first quarter 2024. That is over twice as fast as the UAH global average temperature has been rising since 1979 (+0.15 degrees C. per decade). +0.34 degrees C. per decade definitely meets the definition of a rise rate that Climate Howlers consider to be a climate emergency.
Rather than lying about the trend, as Hamlin repeatedly does, I take an honest approach:
The so-called catastrophic warming rate since 2005 was very pleasant and I’d like it to continue. For most people it only means warmer winters. That’s a catastrophe?
If global warming is now considered to be bad news, then global cooling must be good news?
But many centuries of anecdotes clearly reflect that people strongly prefer warm, and hate cold.
Forget the wild guess predictions of the climate in 100 years. We have been living with global warming for over 48 years. And enjoying it.
It might be +1 or +2 degrees C. warmer in 100 years. So what? Warmer than today used to be called a climate optimum. Now it’s claimed to be a climate emergency. That makes no sense.
USCRN’s rate is almost exactly the same as UAH Land’s over the same period, even though USCRN has a much larger range.
Comparing different periods is mathematical nonsense.
Both trend calculations are totally dependent on non-human caused El Nino effects.
Good to see you now accept what I said above. USCRN does show a warming trend. Why do you keep ranting at me for saying it, rather than at the author who claimed it didn’t exist?
You can debate what caused the warming, but the first step is to not pretend it doesn’t exist.
Warming trend ONLY from the El Nino. As I have shown MANY times before.
Try not to remain totally ignorant.
You still haven’t produced any evidence of CO2 warming or any other human caused warming
You are an EMPTY vessel!
So you accept there is a warming trend in CRN, as I said.
So you accept that it comes ONLY from the 2016 El Nino bulge.
And has no human causation.
You finally got there.. 🙂
Now you can stop flogging your horse skeleton, and go flog yourself.
Stop lying. If the only way you think you can win an argument is by making up false statements about what I’m saying then it’s clear you do not have a valid point.
You are the one making false statements.
I have shown that there is a calculated trend many times, and then shown it is purely because of the 2016 El Nino.
I even stated as such above.
So you are a bald-faced LIAR.
You have still been totally unable to show any human causation whatsoever.
Stop flogging yourself in front of your mirror.
He also whines, a lot.
Says someone who’s just been whining about a single down vote.
Poor bellboy.. still TOTALLY DEVOID. !!
You missed the “heh”, heh. I find the humorless antics of you and your fellow trendologists quite amusing.
Heh.
Try to understand the basics.
ZERO trend from 2005-2016.
COOLING from 2017-2023.
Do you still DENY this data??
It can only be deliberate ignorance.
Why did you not read the phrase….
“Both trend calculations are totally dependent on non-human caused El Nino effects.”
Is that because you know it is true, and have zero counter to the facts?
Because it’s irrelevant to the point I’m making.
This article claims there is no warming. You admit there is warming. That’s the single point I’m making.
You can pretend it happens for any reason you want, and some day you might even be able to come up with some credible evidence that El Niños can cause a 20 year long warming trend in the US, but until that day we can both agree that it is warming.
Still NOTHING but mindless blether…
““Both trend calculations are totally dependent on non-human caused El Nino effects.””
You have been totally incapable of countering that fact.
You have been totally incapable of showing any human causation…
Still deliberately and totally blind to the El Nino transients and step changes, even though they stick out like dog’s *****
It would be funny if it wasn’t so pathetic.
You can also see quite clearly, that the COOLING from 2017 until the start of the 2023 El Nino cancelled out the slight warming bulge from the 2016 El Nino.
As your other graph showed, the 2016 and 2023 El Ninos started at approximately the same temperature (i.e. no warming), and we know there was no warming up until 2016.
The anomaly in 2022 was pretty much the same as in 2005.
ie 2022 was no warmer than 2005.
It is only the position of the 2016 El Nino bulge, to the right of the centre of the time series, that gives a calculated positive linear trend.
But of course, a mathematically illiterate trend monkey would not realise that!
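For what it’s worth, the geometric claim here can be checked with a synthetic series (made-up numbers, not actual USCRN data): a flat anomaly record with a temporary bulge placed to the right of the midpoint does return a positive least-squares slope, even though the series starts and ends at the same level.

```python
# Synthetic illustration (not USCRN data): a flat series with a temporary warm "bulge"
# placed right of the series midpoint produces a positive least-squares trend.
import numpy as np

n = 228                                   # roughly 19 years of monthly values
series = np.zeros(n)                      # flat baseline, starts and ends at zero
series[130:150] = 0.8                     # a 20-month warm bulge, right of centre

slope_per_month = np.polyfit(np.arange(n), series, 1)[0]
print(f"trend = {slope_per_month * 120:+.3f} C/decade")   # positive despite flat endpoints
```

Whether that is the right way to read the USCRN record is, of course, the point being argued in this thread.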
“You can debate what caused the warming”
You can’t…!
You know it is only from the 2016 El Nino.
Why do you continue to pretend it doesn’t exist ??
You have zero evidence otherwise, certainly no evidence of any human causation.
Why do you continue to pretend to yourself that evidence of human causation exists… when you know it doesn’t !!
What you’re not told: NOAA data has the tropics’ low (July) at 24.7°C, 3.4°C warmer than the northern hemisphere’s 21.3°C, yet 15.3°C warmer than the Arctic, making the northern hemisphere subtropical.
The southern hemisphere temperature is 14°C warmer than the tropics and 43.5°C warmer than the Antarctic. The midpoint does not apply to NOAA data. This is due to a high percentage of stations being around the low 40s latitude in the northern hemisphere and the low 40s latitude in the southern hemisphere. If the midpoint were used, the southern hemisphere would be below zero and the northern hemisphere around 17°C, with the tropics being a bit warmer. These are the issues with the data.
Anomalies are manufactured. I go through the Earth’s temperature, use the midpoint as the average, and find the global temperature below normal. As the sun is at its furthest distance and the Earth’s tilt has 68% of the Earth’s land facing the sun at 23.5°N, absorption of the sun’s heat would be 1.06% in summer (362.5 W/m² Earth, 317.5 W/m² sun) and 88% in winter (362.5 W/m² sun, 317.5 W/m² Earth).
One thing I noticed about the maximum anomalies for Northern Hemisphere Land (where most of the world’s people live) is that they seem to mostly occur in winter (January / February / March).
Milder winters are probably beneficial for crop production, or possibly a longer growing season if the mild weather occurs in March. However, we don’t see high anomalies in summer, which would indicate heat waves of increasing frequency, which could lead to drying of crop land.
Exactly what an agricultural study of the U.S. found.
https://www.nature.com/articles/s41598-018-25212-2
A bit late, but here’s my unofficial map of UAH May anomalies.
The southern part of South America and parts of Russia are the big cold spots, but all of the tropics remain above average.
Here’s the average over the last 12 months. Note, the color scale is different, in order to show more detail.
And here are the regional trends since 1979.