By Roger Caiazza,
In a special to the Washington Post, Oliver Uberti opines that “Trust in meteorology has saved lives. The same is possible for climate science”. The former senior design editor for National Geographic and co-author of three critically acclaimed books of maps and graphics does an excellent job tracing the history of weather forecasting and mapping. Unfortunately, he leaps to the conclusion that because meteorological forecasting has worked well, we now “have access to ample climate data and data visualization that gives us the knowledge to take bold actions”.
Uberti writes:
“The long history of weather forecasting and weather mapping shows that having access to good data can help us make better choices in our own lives. Trust in meteorology has made our communities, commutes and commerce safer — and the same is possible for climate science.”
I recommend reading most of the article. He traces the history of weather observations and mapping from 1856, when the first director of the Smithsonian Institution, Joseph Henry, started posting the nation’s weather on a map at its headquarters. Henry eventually persuaded telegraph companies to transmit weather reports each day and built the network up to 500 observers. However, the Civil War crippled the network. Increase A. Lapham, a self-taught naturalist and scientist, proposed a storm-warning service that was established under the U.S. Army Signal Office in 1870. Even though the impetus was a warning system, it was many years before the service actually made storm-warning forecasts. Uberti explains that eventually the importance of storm forecasting was realized, warnings made meaningful safety contributions, and combining science with good communications and visuals “helped the public better understand the weather shaping their lives and this enabled them to take action”.
Then Uberti goes off the rails:
“The 10 hottest years on record have occurred since Katrina inundated New Orleans in 2005. And as sea surface temperatures have risen, so have the number of tropical cyclones, as well as their size, force and saturation. In fact, many of the world’s costliest storms in terms of property damage have occurred since Katrina.”
“Two hundred years ago, a 10-day forecast would have seemed preposterous. Now we can predict if we’ll need an umbrella tomorrow or a snowplow next week. Imagine if we planned careers, bought homes, built infrastructure and passed policy based on 50-year forecasts as routinely as we plan our weeks by five-day ones.”
“Unlike our predecessors of the 19th or even 20th centuries, we have access to ample climate data and data visualization that give us the knowledge to take bold actions. What we do with that knowledge is a matter of political will. It may be too late to stop the coming storm, but we still have time to board our windows.”
It is amazing to me that authors like Uberti don’t see the obvious difference between the trust the public has in weather forecasts and its misgivings about climate forecasts. Weather forecasts have verified their skill over years of observations and can demonstrate improvements over time. Andy May’s recent article documenting that the Old Farmer’s Almanac has a better forecast record over 230 years than the Intergovernmental Panel on Climate Change (IPCC) has over 30 suggests that there is little reason for the general public to trust climate forecasts. The post includes a couple of figures plotting IPCC climate model projections against observations that clearly disprove any notion of model skill.
Sorry, the suggestion that passing policy based on 50-year climate science forecasts is somehow supported by the success of weather forecast models is misguided at best.
—————————————————————————————————————————————
Roger Caiazza blogs on New York energy and environmental issues at Pragmatic Environmentalist of New York. This represents his opinion and not the opinion of any of his previous employers or any other company with which he has been associated.
Anyone who claims that there are no differences between weather forecasting and climate forecasting clearly knows nothing about either.
I agree. One of the big differences is that weather forecasting is focused on exact properties at specific locations at exact times whereas climate forecasting is focused on average properties over broader regional areas spanning long periods of time.
Climate forecasting, going back over a hundred years, has NEVER had an accurate prediction. None of the climate models can make any forecast anywhere near correct. The models try to forecast via large-area predicted conditions and try to simulate those areas with mathematical programs.
There is no chance it can be done because the climate generates from the molecular level and builds up to wind, waves, rain, snow, hail, big storms, little storms, etc. That model cannot be evaluated with any computer now, and given the logistics of computing, it would take a computer somewhere near the mass of the solar system, or possibly the known universe, to have enough capacity to calculate the Earth’s climate model reasonably correctly.
The climate models barely take into consideration the Sun’s effects. Right now the sun has passed the first solar minimum after a Grand Maximum: it’s been virtually without sunspots, and the insolation has reduced several percent over about the last 10 years. In ordinary times the sun would have recovered over the last five years and started a new solar cycle. But this minimum, which was accurately predicted by solar scientists, is forecast to bottom out for another 10-20 years and then recover to more average sunspot numbers and insolation by 2050, ending the Grand Minimum.
None of this has ever been forecast in any of the climate models or by the IPCC.
The really grand question is: will the cooling ever end? The ice core research and other stratification studies point to a pretty irregular glaciation history. The warm intervals appear to be fairly regular, but the last ice age ended after some 120,000 years and temperatures have been fairly livable for some 10-15,000 years. The interglacial between “our” ice age and the previous one appears to have been only about 7-9 degC warmer, compared to 10-12 degC recently.
In other words we can’t accurately predict how long our warm weather will last and when and how fast it will fail. But I’ll forecast that we will have at least 50 years of reasonably warm weather.
I think they need to do accurate seasonal and annual forecasts first.
Now where is my Old Farmers Almanac for 2050?
Will there be the 80% historical accuracy forecasts claimed by the Old Farmer’s Almanac publisher?
Regarding this morning’s WUWT Post, here are a few mostly appropriate items from my file of quotes & smart remarks:
______________________________________________________________
Mark Steyn famously said: How are we supposed to have confidence in what the temperature will be in 2100 when we don’t know what it WILL be in 1950!!
How can so many people be so easily convinced that events which have always occurred and are extensively documented are wholly new and unprecedented?
Oceanographers don’t know what ocean temperature is today within half a degree,
and don’t know what it was 100 years ago, but they know it’s a tenth of a degree
warmer now than it was then.
The weather is not becoming more extreme, the rhetoric is.
“Today’s scientists have substituted mathematics for experiments, and they wander off through equation after equation, and eventually build a structure which has no relation to reality.” Nikola Tesla
If climate science was settled, the IPCC wouldn’t have to put out
a new report every six or seven years where all the numbers are
changed to make it look like it’s worse than previously thought.
I love these and will file them away for use later
“The weather is not becoming more extreme, the rhetoric is.”
Yeah buddy!!
We meteorologists don’t save lives by exaggerating and misleading about the weather. Doing that would create “The boy who cried wolf” syndrome.
If you want people to trust you and for them to make SMART decisions that save lives… you have to give them realistic, authentic science/projections. Otherwise, you are causing more harm than good!
Agree, the UK Met Office regularly forecasts extremes of all kinds, usually these never happen. I think it’s part of the climate change hype. There’s a portion of the population that remember the forecast and not the actuality.
But possibly the lack of action in advance of the recent German floods was in part caused by the fact the wolf hadn’t been around for a while despite the forecasts
Mike, although I think the daily weather forecast is doing reasonably well in this era of satellite data, nothing beats opening the window in the morning and looking out.
Ron, for the last few years the 10 day forecast has improved considerably in the PA area. Granted, the morning pretty well forecasts the weather until noon. After that the real weather rolls in.
The 10 day forecast is usually very good for today, good for tomorrow, and then it starts to get thrown off. Big storms, heavy rains, and high winds are usually accurate over the full 10 days. But all the “little” things (exact timing, exact temperatures, wind velocity, cloudiness, etc.) get fuzzier as the days go on.
So I look at the 10-day every 2-3 days.
Depending on the location, studies have shown that many weather forecasts show some skill out to ~5 days, not perfect by any means but better than random. Forecasts decline to random chance in ~10 days.
Giving “realistic, authentic science/projections” just might have to include such things as studying the effects of sunspots and solar cycles, Milankovitch cycles, and other causes of past climate variations rather than ignoring them as irrelevant or claiming they never happened.
True Peter. Also telling them about the benefits of CO2 during this current climate optimum. Telling them about previous warmings similar to this. Not exaggerating extreme events to try to mislead people into thinking that they have never happened before… when they did.
Not pretending that global climate models have been accurate in predicting temperatures…..they’ve been too warm. Not pretending that dozens of scary predictions that busted were accurate.
Telling the truth about how small the sea level increase is. Telling the truth about the violent tornadoes decreasing and hurricanes have mainly been responding to a natural cycle.
It’s true that the atmosphere can hold around 7% more moisture because it’s 1 degree warmer, and that has increased amounts in heavy rain events… but not by the amount climate change from humans is blamed for.
Also, it’s the coldest places at the coldest time of year that have warmed the most….by most standards ….a good thing. Low temperatures have warmed by twice as much as high temperatures
CO2 is a beneficial gas that’s massively greening up the planet and by itself has increased global food production by 25%. Most creatures wouldn’t mind a bit more warmth.
Cold still kills 10 times more humans than excessive heat.
Cold kills 200 times more non human life than heat.
These truths based on authentic science are intentionally hidden and replaced with manufactured realities which serve to propagandize people so they will support the agenda in order to… save the planet… which doesn’t need saving from the climate optimum.
No.
Since the world’s governments and the UN have bet our lives and fortunes on the (never close yet) climate models, polar bear populations, coral reefs’ decline, and the disappearance of Antarctic ice, why don’t we just use the modeling to forecast weather?
“We Trust Meteorology To Save Lives. Is The Same Possible For Climate Science?”
Most of the time one hopes the meteorologists have got it right, but they can and they frequently do get it wrong; despite the most modern tools available, like weather radar, super, duper computers etc.
Climate science has made its bed with its ever greater dependence on modelling. Nothing but CMIP models backs up what they claim.
Politicians trust climate science – as a fig-leaf means to an end.
Right or wrong, who cares?
Climate science has made its bed with its ever greater dependence on modelling. Nothing but CMIP models backs up what they claim.
________________________________________________________
Stating the obvious is always appropriate (-:
Added to my files
Most of the time one hopes the meteorologists have got it right, but they can and they frequently do get it wrong
I think that has something to do with butterfly wings flapping somewhere;
That goes back to Lorenz, the guy who discovered that it was chaotic. The climastrologists haven’t caught up yet.
And let’s not forget the unvoiced assumptions regarding weather forecasts. If you listen to the forecast for your region, you know that in any given location it could be off by as much as 3-5F. If I listen to the forecast for NYC, I know to knock off 5F in the winter because I’m 25 miles inland and 300 feet higher. We know how sloppy the temperature forecast is for the weather; now we should assume that climate forecasts will get it to the nearest 0.2F???
“Trust in meteorology has made our communities, commutes and commerce safer — and the same is possible for climate science.”
No, it isn’t – climate is weather of the past and there is no hint about weather of the nearer future.
Meteorology hasn’t really made it safer. What it has done is allow current moment by moment descriptions of what is happening by using radar and some satellite pictures. Can they tell us exactly where hurricanes will make landfall (if at all) hours or days ahead of time? How about tornadoes on the ground?
I think he has mixed up forecasting with using technology to monitor weather in real time!
Meteorology can tell us when conditions that are favorable for tornado development will occur. That’s enough for people to be watchful and alert.
Meteorology can tell us the region where hurricane landfall is likely, and that is enough to start evacuations in time to get people out of the way before the hurricane does make landfall.
You don’t have to make precise predictions in order for those predictions to be useful.
Thunderstorms and their strength are well forecasted, not bad for mountaineers, farmers, emergency services etc.
The “German flood” was forecasted, including near-correct quantities of rain falling, up to five days before it happened; that nobody took heed is a different story.
Following Kachelmann weather forecasts on Twitter, for example, you are up to date and warned about strong weather events such as storms, possible wind speed, quantity of rain or snow etc.
Not every forecaster is a good forecaster, nor is every weather service.
For those interested:
https://twitter.com/WeatherdotUS/with_replies
https://weather.us/
https://blog.weather.us/
https://blog.weather.us/it-was-a-swiss-boys-dream-later-a-swiss-meteorologists-dream/
Many flood events are still missed, and many likelihoods of storm formation are still missed. Personally I truly believe that I had a better idea of what was going to happen 50 years ago than today. That comes from being caught in too many storms recently that weren’t supposed to be there.
Too many days when thunderstorms “may” occur and don’t, and 20% chance and they do occur. Hurricanes aren’t forecasted, they are “followed”. That’s why there are cones! How many times have forecasters foretold exact landfall 4 or 5 days ahead?
Rain, snow, etc. are followed, not forecasted. When was the last time a frontal system was truly forecasted rather than being identified and followed on satellite? I’ve seen too many systems hit Oregon while the forecast shows them going somewhere between the Canadian border and Texas. Sure they can say where it may be tomorrow, but three or four days out, pretty shaky.
When was the last time you saw rain forecast with a 90% chance a week or two ahead? Look, I’m not trying to disparage meteorologists’ ability. It’s a tough job. But let’s not oversell the ability to use current models to make long-range forecasts, detailed to the day and time of occurrence, without using technology to see the movement of already occurring fronts.
Not PC, but hilarious :
Woman who won €1m literary prize turns out to be three men :
https://www.ft.com/content/bc0058d9-940f-41bf-b282-bb62189a274e
Reason?
“Agustín Martínez, Jorge Díaz and Antonio Mercero said they chose to write under one name, Carmen Mola, because collective work is not as valued in literature”
I think this applies to numerous Climate articles where an Author apparently goes “off the rails”…
How can we rely on anything that doesn’t look like a hockey stick…
As long as we are using the data from the 1970s indicating that “A New Ice Age Cometh!” we should be just fine!
Any of this blather about apocalyptic warming is just panic porn designed to instill fear and loosen purse strings so that pols can get what they have always craved: unlimited money and power! Why Gretatards don’t see this has much more to do with their deep, nihilistic religious beliefs than any facts or data!
Refusing to prepare for the next glacial onset shows that their aim is command and control, not concern for the environment!
Nuke the bird choppers! CO2 to 800ppm!!
From the Guido Fawkes website:
TREASURY WARNS TAX RISES REQUIRED TO REACH NET ZERO
The Treasury has finally published its net zero review, and it confirms the inevitable: taxes will have to go up across the board. Who’d have guessed that “you can’t put a single figure on it” meant “it’ll cost a fortune”?
The Treasury warns that “beyond taxation and public spending that directly apply to households, [net zero] will affect households directly through the goods and services they buy and indirectly through the costs on businesses”. Raising the cash will also mean rethinking the tax code, because revenues from fossil fuel related taxes will inevitably drop to zero by 2050. Fuel Duty, Vehicle Excise Duty, Landfill Tax, the Emissions Trading Scheme, and the Carbon Price Floor will all have to be scrapped at some point.
Macro-economic analysis released by Bank of America says that to achieve net zero globally will cost $150 trillion in capital investment by 2050, an amount so colossal that the investment bank’s economists say it is beyond the capability of the private sector and taxpayers combined. It will, the economists argue, require central banks globally to undertake massive quantitative easing. Despite that analysis none of the increased public spending in Britain will be funded by additional borrowing, according to the Treasury:
Instead, HMRC is “exploring options to further strengthen the analytical approach to monitoring, evaluating and quantifying the environmental impacts of tax measures”, like introducing a plastic packaging tax. “Overall, a combination of tax, regulation, spending and other facilitative levers will be required.” In other words: brace yourselves.
https://order-order.com/2021/10/19/treasury-warns-tax-rises-required-to-reach-net-zero/
I hear that U.S. Senator Joe Manchin says he is not going to support a tax on carbon dioxide.
That’s got to hurt (the alarmists)!
Now where are the alarmists going to get all that money they want to spend?
And as always, the tax rises will result in less government income as people do business in other ways.
Which means net zero is quite impossible.
I see how Numerical Weather Prediction models have improved short-term forecasting. I also see the value in using such models for reanalysis. I especially appreciate the ERA5 product from the ECMWF (the European Centre for Medium-Range Weather Forecasts). The availability of an impressive list of parameters includes bulk energy values on a 1/4-degree grid at hourly intervals.
For example, here is a plot of the Vertical Integral of Total Energy, which I have expressed in Watt-hours per square meter, giving the hourly values for all of 2019 for a gridpoint near where I live. The direct warming effect of a doubling of CO2 from preindustrial times is generally taken as 3.7 Watts per square meter, which can be expressed as 3.7 Watt-hours per hour per square meter. That is vanishingly thin on the vertical scale. (Total Energy includes kinetic, latent, internal, and potential energy due to altitude.)
The rapid changes, reversals, and transformations in total energy (and its components) experienced over a point on the planet are thousands of times greater than the incremental increase in the radiative coupling between the surface and the lower atmosphere.
Given such a needle-in-the-haystack problem, could numerical models ever be used to reliably diagnose or project the expected outcome of such a tiny change in the energy interaction of surface and atmosphere? I can’t see how. So I have no confidence at all in the use of similar models for long-range climate analysis concerning greenhouse gases. None.
I think there is some confusion on what radiative forcing means. That +3.7 W/m2 figure taken from the Myhre 1998 formula is the perturbation on the planetary energy imbalance. It is not the cumulative amount of energy the planet takes up from a doubling of CO2. It can be used to determine the cumulative amount, but it is not the cumulative amount itself. The imbalance only reduces to zero once the planet has taken up energy and warmed sufficiently. Over the last 60 years the planet had to take up 350e21 joules for the atmosphere to warm about 1C to equilibrate about +1.0 W/m2 of radiative force (+3.1 W/m2 of GHG plus -1.2 W/m2 aerosol and land use minus +0.9 W/m2 planetary imbalance). From this we can estimate the cumulative uptake of energy from a +3.7 W/m2 radiative force as about 1300e21 joules. Note that 720000 W-hours/m2 is 1300e21 joules. So if we were to quantify the effect 2xCO2 has using the context of the graph you posted it would be the equivalent of doubling total integrated energy in the atmosphere from 720 kW-hours/m2 to 1440 kW-hours/m2.
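A rough back-of-the-envelope check of the figures in the preceding comment, sketched in Python. The values come from the comment itself except for the Earth surface area, which is an assumed standard figure of about 5.1e14 m²:

```python
# Sanity check of the energy arithmetic quoted above (values from the comment,
# except EARTH_AREA_M2, which is an assumed standard figure).
EARTH_AREA_M2 = 5.1e14        # approximate surface area of Earth (assumption)
UPTAKE_60YR_J = 350e21        # heat taken up over the last 60 years (per the comment)
FORCE_EQUILIBRATED_WM2 = 1.0  # net radiative force equilibrated (per the comment)
FORCE_2XCO2_WM2 = 3.7         # canonical 2xCO2 radiative force

# Scale the historical uptake linearly to a 3.7 W/m^2 force.
uptake_2xco2_j = UPTAKE_60YR_J * FORCE_2XCO2_WM2 / FORCE_EQUILIBRATED_WM2
print(f"{uptake_2xco2_j:.2e} J")        # ~1.3e24 J, i.e. ~1300e21 J

# Convert to watt-hours per square meter for comparison with the plotted quantity.
wh_per_m2 = uptake_2xco2_j / EARTH_AREA_M2 / 3600
print(f"{wh_per_m2:.0f} Wh/m^2")        # ~7e5 Wh/m^2, roughly the 720 kWh/m^2 cited
```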
It was intentional that I did not use the words “forcing” or “imbalance” or “perturbation” in reference to the 3.7 Watts per square meter effect of a doubling of CO2. Rather, I put the emphasis on the atmosphere (including clouds) as the huge ready reservoir of energy for variable emission of longwave radiation to space and for the formation and dissipation of clouds as a variable reflector of shortwave radiation. I don’t see why an “imbalance” must be the final result, when the output of the highly variable emitter/reflector can be directly measured from space, confirming that these energy outputs (to space) must be from a powered source, not simply a passive radiative insulating layer. You have seen these plots before. (CERES hourly longwave, shortwave, and combined emission/reflection for 2018 at a gridpoint near where I live.) If one only conceives of the atmosphere as a radiative inhibitor to surface cooling, there’s the problem. Emission to space FROM the atmosphere and from clouds is powered by the energy in the atmosphere itself.
That canonical +3.7 W/m2 figure for 2xCO2 is the radiative force. And my primary point is that its effect is not vanishingly small in the context of the vertically integrated total atmospheric energy. It is actually quite large. It would effectively double it if all that energy went into the atmosphere.
I disagree with your primary point, which is the widely held view that one should expect an accumulation of heat energy in the land-ocean-atmosphere system as a result of increased CO2. That would be valid if the atmosphere were static, and the emitter of concern is the land and ocean surface. I see that as a misconception. The variable-altitude emitter is above us, fed from below by a high-performance heat engine coupled with the surface. The 3.7 W/m^2 is indeed significant in the scale of final longwave emission, but vanishes as a distinct trackable energy value in between the surface and space as the atmosphere performs its function as the working fluid of its own heat-engine operation. CO2 and other GHG’s add no energy of their own to the land-ocean-atmosphere system. And assuming there must be a degradation of final average emission to space (until temperature catches up) seems unreasonable to me. The data I have posted here shows me it doesn’t work that way.
I disagree with your primary point, which is the widely held view that one should expect an accumulation of heat energy in the land-ocean-atmosphere system as a result of increased CO2.
It’s not just CO2. Any agent that positively perturbs the planetary energy imbalance will cause an accumulation of heat/energy in the climate system. That is the 1st law of thermodynamics. And you’re the one that originally cited the 3.7 W/m2 as the CO2 effect; not me.
I understand it’s not just CO2, and the 3.7 W/m^2 figure for a doubling of CO2 is widely used, so I used it for comparison. If an imbalance were imposed, sure, the conservation of energy implies a rising temperature as heat energy is accumulated. I don’t see why an imbalance from GHG’s should be assumed. I don’t misunderstand your claim. We disagree.
Uberti says “The 10 hottest years on record have occurred since Katrina inundated New Orleans in 2005. And as sea surface temperatures have risen, so have the number of tropical cyclones, as well as their size, force and saturation. In fact, many of the world’s costliest storms in terms of property damage have occurred since Katrina.”
All simply untrue. Weather forecasters often base their predictions on what happened in the past. Why can Uberti not observe the past before making such errant comments?
Part of the problem is that “years on record” is far short of earth’s climate history.
Not only is it far short of the earth’s climate history, for the vast majority of places in the world, accurate records don’t even go back 100 years.
Even that may be overstating the quality of the records we do have.
Between growing UHI, micro-site corruption, multiple (often undocumented) station and equipment changes. It’s hard to say if more than a dozen or so ground stations have data that would qualify as high quality.
Quality measurements for nearly the whole planet didn’t start until the satellite era and even these don’t go all the way to the poles.
The only way Uberti could make those claims about those 10 hottest years is if he is following along with NASA and NOAA, because that’s what they say.
Uberti couldn’t say that if he used the UAH satellite chart.
Here’s the chart:
No year on this chart is hotter than 1998 until you get to the year 2016. Those years after 2005 barely show up.
If they are going to do this they better get better with their communication. Appears to be a “machine learning model.” Needs translation. Give them credit, last line of the abstract, but appears to be the usual too many assumptions for necessary validation.
“These results suggest a delayed onset of a positive Earth energy imbalance relative to previous estimates, although large uncertainties remain.”
Bagnell, A., DeVries, T. 20th century cooling of the deep ocean contributed to delayed acceleration of Earth’s energy imbalance. Nature Communications. 12, 4604 (2021). Open Access
https://doi.org/10.1038/s41467-021-24472-3
https://www.nature.com/articles/s41467-021-24472-3
This guy Uberti must know that what he is saying isn’t true. He must know, if he studied weather at all, that the 1930s were much hotter than now; Tony Heller has pointed this out over and over.
On a global scale it is warmer today than during the 1930’s.
bdgwx,
Just curious as to how NASA GISS, originator of the graph that you presented, was able to establish the plots of global temperatures to an apparent temperature resolution of 0.05 °F during the period of 1880 through 1940. A period when:
a) Thermometers used to gather field data had accuracies/calibrations no better than about ±0.25 °F.
b) There were no global networks of temperature sensors covering most ocean areas on Earth, only sparse temperature records from cargo/passenger ships travelling common routes between international ports.
c) There were no well-distributed land networks for monitoring atmospheric temperatures, and the locations that did provide more-or-less continuous temperature records (large cities, airports, seaports, relatively few remote weather stations) did not have specific, let alone consistent, requirements for thermometer accuracy, time of day/night recordings, and siting requirements so as to minimize introducing temperature measurement errors, such as UHI effects.
d) There were no satellites to enable truly global measurements of temperatures, at any resolution.
The fact that you presented that NASA GISS graph with such dubious data speaks volumes about the credibility of your first-sentence assertion of “On a global scale, it is warmer today than during the 1930’s.”
The lack of true field work and actual measurements in climate science is astounding. They can’t even tell us the measurement differences from Stevenson screens to the modern electronic instruments. Those who did such work are being ignored.
Quayle 1991 and Hubbard 2006 quantified the differences. Not only are works like these not ignored, but they were used as the basis for applying adjustments to various datasets when it was once the practice to apply adjustments in a more targeted manner. Today the adjustments are applied more generally by pairwise homogenization.
We are talking about averages, and those have much better accuracy.
We have reconstructions beside direct measurements. Anyway, before a certain time (50s I think) we have higher uncertainty but still we can have excellent estimates.
Reading the above I’m just asking: how come you “skeptics” are always making ex cathedra assertions about how hot it was in the 30s and then giving an a-b-c-d rebuttal of how we can’t know it with a straight face? Please get your bs straight at last.
nyolci posted:
I am left speechless. There is no way to reply in a civil manner to such a statement.
nyolci is right on that point. The uncertainty of the sample mean is lower than the uncertainty of the individual values within the sample. For example, if you have a grid mesh with 2592 cells where each cell had an uncertainty of 1C (1σ) then the uncertainty of the mean of all cells is 1/sqrt(2592) = 0.02C (1σ).
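A minimal Monte Carlo sketch of that claim, assuming (as the claim does) that the cell errors are independent and zero-centered; that independence assumption is exactly what is disputed further down the thread:

```python
import numpy as np

# Mean of 2592 cells, each with an independent 1 C (1-sigma) error, repeated many times.
rng = np.random.default_rng(0)
n_cells, n_trials = 2592, 2_000

errors = rng.normal(0.0, 1.0, size=(n_trials, n_cells))  # per-cell errors
mean_errors = errors.mean(axis=1)                          # error of each trial's mean

print(mean_errors.std())       # ~0.0196 C
print(1 / np.sqrt(n_cells))    # analytic value: 0.0196... C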
No. Absolutely not and finally, no frickin way pal. You, once again, are conflating 2 completely different things – firstly, the data are the temperatures, not some specific mathematical number and have their own, inbuilt uncertainty values. Now, however you do it, a mathematical average may give you a very narrow mathematical uncertainty range but once you start averaging the data together, the actual instrument/reading uncertainty range multiplies exponentially.
You cannot keep treating instrument data as purely mathematical exercises and not expect us to treat you with complete derision.
This is beyond parody. To quote a classic (Gordon A. Dressler, see above), I’m speechless 🙂
To a statistician that can only think in narrow mathematical parameters it may be a parody. To everyone else living in the real world, it’s how things actually work. I realise it may be worrying for you – after all, having 2 thoughts in that liddle head might cause it to explode? But please pull that liddle head of yours out of your arse (or bdgwx’s) and look up how scientists, not statisticians view uncertainty ranges. Now do you see what I mean about treating Muppets like you with derision?
What formula do you think we are supposed to use to quantify the uncertainty of the mean?
The mean itself is meaningless, and so too is the uncertainty.
Define what your data is: a population or a sample. Then use the following formula.
SEM = SD / √N
SEM –> Standard Error of the sample Mean
SD –> Standard Deviation (of the population)
N –> sample size
Your use of “uncertainty of the mean” is a colloquial use of the statistical term Standard Error of the sample Mean, e.g. SEM. It kind of indicates that you are unfamiliar with basic statistics and sampling.
Nice ad hominem with nothing to back it up.
From britannica.com (bold by me):
As you can see the Central Limit Theorem (CLT) both assumes and requires “repeated measurements of the same quantity”. Does a single temperature reading recorded as an integer value have repeated measurements? Nope! Therefore, the CLT can not be used to reduce the errors or uncertainty in a temperature measurement.
Why is this important? It is important because a probability distribution of measurement data with errors only makes sense if it is made up of a large number of random errors from measuring the same thing multiple times. From these multiple measurements, one can determine a “best guess true value” by using the Central Limit Theory.
If you need additional references to prove you wrong, let me know.
No. You cite the history section. That was the motivation. The CLT is concerned with variables of the same distribution, that’s all. Your obsession with this comes from a real and profound misunderstanding. A toy example: Mean = Avg(Vi + Bi + Ei), where Vi is the “real value” of the ith measurement during the day or around a location, Bi is the bias, and Ei is the error. Bi is usually independent of time (no drift), independent of Vi, and kinda close to zero in modern instruments. Ei is zero centered, and usually independent of Vi and time (no drift etc). The B and the distribution of the Ei-s are quite well known. So Mean = Avg(Vi + B + Ei) = Avg(Vi) + B + Avg(Ei) = Truemean + B + Avg(Ei). The last term shows that we have very successfully reduced the error term in approximating the mean daily (or provincial or whatever) temperature. Of course there are numberless other factors, like quantization, and we have to account for the fact that the Ei-s have different distributions across different instruments. But this is all manageable, and since the time of Laplace mathematicians have mastered this field.
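A minimal numerical sketch of that decomposition; the true values, the fixed bias of 0.3 and the 0.5 error spread are illustrative assumptions, not values from the thread:

```python
import numpy as np

# Each reading = true value (Vi) + fixed bias (B) + zero-centered random error (Ei).
rng = np.random.default_rng(1)
true_values = rng.uniform(10.0, 20.0, size=1000)  # Vi (illustrative)
bias = 0.3                                         # B  (illustrative)
errors = rng.normal(0.0, 0.5, size=1000)           # Ei (illustrative)

readings = true_values + bias + errors

# Averaging shrinks Avg(Ei) toward zero but leaves the bias B untouched.
print(readings.mean() - true_values.mean())        # ~0.3, i.e. close to the bias
```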
You should set the math books aside and walk away, they are not helping you at all.
He’s been told this many, many times but continues to recite the same lies regardless.
And bwx is also a liar.
Hey, admin, this is ad hom, not an assertion made in good faith! Please moderate!
No it is present and past experience of you and bdgwx. The statements you have posted are mostly untrue and/or biased so heavily as to be just as bad. Untrue statements are called lies, you have made such statements, therefore you are a liar: it’s all very logical, you see?
As CMoB might say, stop whining.
Once again, the alarmists go out of their way to demonstrate that they know nothing about science or statistics.
There are 3 requirements for your claim to be true.
1) Need to be using the same instrument.
2) You must be measuring the same thing.
3) Your errors must fall into a standard distribution.
1) Each measurement is taken with a different instrument. Indeed, many of the instruments aren’t even the same type. So requirement one is failed.
2) Each measurement is taken in a different place, which by definition means they aren’t measuring the same thing. Requirement 2 failed.
3) No evidence presented that the distribution is standard. Requirement 3 failed.
The uncertainty of the sample mean is only an interval where the sample mean lies. It does not identify either the accuracy or the precision of the measurements. Significant digit rules should be used to define the absolute precision of any final average. If you use integer measurements, your final answer should have similar precision.
You first need to identify whether your 2592 cells of data are a population or a sample. The way you are using it defines it as the population. The appropriate formula is:
SEM = SD / √N
SEM –> Standard Error of the sample Mean
SD –> Standard Deviation (of the population)
N –> sample size
You simply can not divide the SD of the population by the total number of cells in the population and have a meaningful description of anything. SEM applies to sampling where you take a large number of samples of size N FROM THE POPULATION.
A review of sampling theory and what SEM means can be found by using the simulator at the following website; https://onlinestatbook.com/stat_sim/sampling_dist/index.html
Here is a statement from another website; https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2959222/#
The SEM is another name for the standard deviation of the mean of the sample means. The problem you will have if you try to declare the 2592 cells as a single sample of 2592 is that then the standard deviation of that single sample also becomes the SEM and its average will be the average of the population. Consequently you end up multiplying the SEM by √N in order to determine the Standard Deviation of the entire population. Another warning is that if your distribution is not normal, then you can’t use it as a real sample. The Central Limit Theory only applies when the sample means have a normal distribution. Since this would be a single sample, it would then also need to have a normal distribution shape.
The GUM allows you to state the uncertainty of a measurement as the Standard Deviation. To achieve a confidence level of 95% you should use two SD as the uncertainty. So plan on multiplying your SEM by about 51 to get the population’s Standard Deviation.
Please note, I am not saying that is the entire uncertainty in measurement. It is only the statistical parameter that describes the uncertainty in the distribution of the data you have.
I know you are. See? Science is hard, that’s why they teach it in universities. You should try. But maybe there’s a field where you can speak a bit. It’s about our ability to know the 30s were warmer than today while, simultaneously, we are unable to know anything with certainty ‘cos of a-b-c-d.
Explain this if you can. It is a screen shot from: https://onlinestatbook.com/stat_sim/sampling_dist/index.html
Do me a favor and multiply each sample distribution’s standard deviation by the square root of the sample size and see if you get the population’s standard deviation. Does that mean anything to you?
Why should I multiply it? We are averaging here. Furthermore, we don’t directly manipulate the sd, that’s a resultant quantity.
WHOOOOOOOSH
nyolci posted, sophomorically:
Funny . . . on the planet that I live on, and the country that I live in, the teaching of science begins in elementary school, and continues through high school, thus leading into college-level courses.
Your post goes a long way to explaining the apparent difficulty you are having in understanding scientific—as well as mathematical—concepts related to practical use of the terms accuracy, precision (resolution) and uncertainty when discussing measurements.
Oops, I didn’t know that… 🙂 Sorry, Gordy, this was an extremely superficial attack on me.
If I understand correctly, you suggest that because I say “science is hard, that’s why they teach it in universities“, I have problems understanding science. Is this what you claim? ‘Cos this is very silly, your assertion doesn’t follow from my assertion. In short, this is some kinda fallacy, perhaps we should ask the Gormans or Rory for a name, they are always eager to come up with one.
Not worth trying to argue with The Adjusters, I just call them liars and move on.
Liar.
For some reason, nyolci actually believes he’s smarter than your average bear.
Averages can only be better if you are measuring the same thing with the same instrument. Neither precondition exists here.
So proxies are good to less than a tenth of a degree? Are you really clueless enough to believe that?
Once again, the climate alarmist declares that all skeptics must believe exactly the same thing. After all, all alarmists do.
Again, assertions out of your axx. FYI These are not preconditions. Statistical analysis is not straightforward, you have to be careful, that’s true. But you can do these things. Mathematicians devised methods to treat averaging random variables with different distributions long ago. This is covered in the first 2-3 semesters in any STEM course.
It’s not me. Scientists say that, and I don’t have any reason to think they are clueless. I don’t question scientists just as I don’t question the engineers who designed and build (say) the elevator in my house. As for specifically “cluelessness”, please add together your regular assertions and declarations in climate science and your lack of any tertiary education.
“Again, assertions out of your axx. FYI These are not preconditions.”
Bingo. The 0.5 *** mythology perpetuated by these relatively few hardy instatisticate holdouts is beyond belief.
In these fora we have a few self described oilfield pros. If they are truthin’, then they stochastically evaluate literally dozens of geological, geophysical and rheological parameters routinely. Each of those parameters are usually found from a mix of several different instruments/methods. Some better than others. But all have known error bands, and correlations. The resulting reservoir and production models have revolutionized oil and gas field development just within my lifetime….
Happy to hear from any of them telling us about how you must have the “same instrument” to do sophisticated evaluations.
From: http://www.chemistry.wustl.edu/~coursedev/Online%20tutorials/SigFigs.htm
Refute this. How do you average temps from 1930 in integer format and obtain 1/10th or 1/100ths precision.
Averaging doesn’t even address the issue. Averaging only addresses finding a true value when the assumptions of the Central Limit Theory are met. Averaging won’t do it because you don’t have an error distribution with the same basis, i.e., the same thing measured multiple times.
Show a reference discussing significant figures that addresses your assertions.
You have no idea of which you speak. Let’s see some references that explains what you are talking about. The abound all over the internet. Google or DDG the term “standard error of the sample mean” and see what you find. Go to YouTube and search there also. Lots of videos about sampling.
Go here for grins: Standard Error of The Mean (Formula & Example) (byjus.com)
I can’t understand why you are insisting on referencing things that directly contradict your point. The byjus.com is unreachable from the EU but this is from the third or fourth reference: SE = σ/√n.
I’ll bet you can recite Pythagoras too!
You got the formula, now show us whether the temp data is for a population or a sample!
You will find what you call SE is really SEM, e.g., Standard Error of the Sample Mean. And I’ll bet you can’t explain what it denotes either.
The uncertainty is not ±0.05F between 1880-1940. It is much higher than that. See the published uncertainty and Lenssen et al. 2019 for details on what it actually is. On an annual basis it is ±0.15 and ±0.10 C for periods around 1880 and 1940 respectively. And I stand by my statement. Every dataset I’m aware of unequivocally confirms that it is warmer today as compared to the 1930s. If you know of a dataset that publishes a global mean temperature and which addresses known errors caused by station moves, instrument changes, time of observation changes, etc. that shows a significantly different result please post it.
“Every dataset I’m aware of unequivocally confirms that it is warmer today as compared to the 1930s.”
That’s ridiculous.
I’ve shown you numerous regional charts from all over the world showing the 1930’s were just as warm as today, and they are the best database you can ask for: actual written records of the time.
Instead of saying “I’m aware of” you should say “I pay attention to”. You are ignoring inconvenient facts.
And what do your global charts say?
The charts I see as representing reality say it was just as warm in the recent past as it is today, and that means the Earth is not experiencing unprecedented heat caused by CO2, or any other thing.
Actually it’s more like 3 to 5C. Only a total fool believes that a few hundred thermometers is sufficient to measure the temperature of the earth. Especially when almost all of those thermometers are located in western Europe and the eastern US and Canada.
Can you post a link to a global mean temperature dataset with a rigorous uncertainty analysis showing the ±3 or ±5 C uncertainty. I’d like to review it if you don’t mind.
Simple Nyquist sampling.
I don’t want to make a big fuss about it but this is ridiculously wrong.
What does that have to with the claim that the global mean temperature uncertainty is ±3 to ±5 C?
And do you really think the global mean temperature could be as low as 10C or as high as 20C?
Bzzzzzt—again demonstrating your confusion about uncertainty and error.
Here is an example of uncertainty from the National Weather Service about Liquid In Glass thermometers used at temperature measuring stations. For normal temperatures, they quote a ± 1.0 degree interval @ 95% confidence. (See https://www.nws.noaa.gov/directives/sym/pd01013002curr.pdf). That is saying that there is a 95% chance that the true measurement is somewhere within a 2 degree span of the recorded temperature. Accurate eh?
This is from a 2018 document so pretty new. Check out what LIG thermometers in Centigrade were marked with. How about every two degrees.
Watch this.
https://youtu.be/ul3e-HXAeZA
🙂 And how do you know that? This is your scholarly opinion again, right? Or just an ex cathedra assertion you read in a blog somewhere? This is supposed to be science, but for that, you have to support your assertion unless this is something widely known like the Newton-laws. The above is definitely not, and the actual experts of this field know otherwise.
I mention “resolution” and you reply with “uncertainty”.
Never the twain shall meet . . . and you are obviously unaware of that simple fact.
That’s right I did. I felt that was the most applicable and relevant metric. I’ll be happy to discuss “resolution” with you as well though. Can you tell us your definition of “resolution”?
Sure . . . it is exactly the same as applied mathematicians and practical scientists use.
BTW, as any true scientist will admit, the single most applicable and relevant metric, in reporting numerical results is accuracy, not resolution and not uncertainty. They are all different, or didn’t you know?
No, he doesn’t. As has been proven by his comments on several occasions and will no doubt continue to be proven again and again, ad nauseum. I get the impression that he and nyolci have calculators, printouts or links to various impressive-looking formulae but completely lack any understanding or even the most rudimentary scientific skills.
Wrong, as always 🙂 you’re talented in wrongness.
They know a little math (very little) or didn’t absorb what they learned in classes. Understanding the depths of metrology and statistics is hard. You have to admit there are assumptions you must meet in statistics and what uncertainties are in measurement. I’ll guarantee you neither has taken a surveying class using 1950’s era instruments and tried to decide how far off their measurements could be over a mile distance.
I would recommend they search the internet for surveying uncertainty. Lots of folks get upset when you tell them fences are wrong and you find out it was your mistake!
So are you talking about significant figures then?
That is certainly an issue when you are combining measurements with varying significant digits, which is what many of your graphs of temperature do. Showing an anomaly in 1910 to 1/100th of a degree derived from integer values is a perfect example of ignoring significant digits.
You can’t take a temperature of 75 and subtract a baseline of 72.52 and get a difference (anomaly) of 2.48. That must be rounded to 2.
Find Eschenbach’s depiction of the 4 targets showing us “accuracy” and “precision”. The scales might fall from your eyes.
You can’t have meaningful exchange if you don’t use a common language. In this case, the standard terms in the language of risk. bdgwx and nyolci are using previously defined terms, relevantly. You are spouting touchy feely, irrelevant nonsense.
BoB posted:
Sorry, Bob, I can’t see how “the language of risk” (your words) has relevance to a discussion about measurement resolution, measurement uncertainty and measurement accuracy.
Now, you were saying something about irrelevant nonsense . . .
You mean something like this? I can get you some more precise definitions if you don’t understand these.
Jim,
Thanks for an excellent diagram to help those whose comments in this thread clearly indicate a lack of understanding of the terms accuracy and precision (aka, resolution).
Nonetheless, I fear the following will prevail for a few:
“. . .you foolish and senseless people, with eyes that do not see and ears that do not hear.”
https://youtu.be/ul3e-HXAeZA
That’s a bullshite assertion. From 1880-1940 you only have integer values. Just how do you get around not using integer anomalies also. It violates every rule in the book. The only way to justify it is to claim they are only numbers on a number scale with no special significance.
??? Your grasp on these things is extremely shaky. Anomalies are just the quantity minus the local average (with a fixed base period). It is completely irrelevant whether we only have integers in the data. You can average integers, right? FYI anomalies are the least of this subject, a basic transformation. If you fail to understand these, you are doomed in any serious debate.
Yeah, I’m not understanding the concern with non-integer anomaly baselines.
This is not a surprise that you can’t understand here.
Noci the Nasty shows his swelled hat size again.
a1) The data from those thermometers were only recorded to the nearest degree.
Yet The Adjusters can create averages in the micro-Kelvins.
That’s not correct. Berkeley Earth reports that even for the annual mean global temperature anomalies the lowest uncertainty is ±0.025 K. Note that 0.025 K is significantly higher than 0.000001 K.
Might as well be nano-Kelvins, still just as meaningless.
Once you start making up the data, it’s no big deal to make up ridiculous uncertainty numbers as well.
Poor sod, he isn’t even smart enough to realize that he failed to address my point.
I point out that individual records were recorded to the nearest degree.
Then you pipe in with the absurd claim that the manufactured data has an unsupportable uncertainty.
Only a total idiot would believe that you could average data that has a resolution of 1C, to get 0.025C.
This is covered under the strange name of “quantization noise” and it’s a really complicated topic. But the short version is that you can. You can average these and get a much narrower resolution for the average.
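A toy illustration of that quantization argument, under the assumption that the underlying values spread across more than one whole degree; it says nothing about the resolution of any single reading:

```python
import numpy as np

# Round many readings to whole degrees, then compare the average of the rounded
# readings with the average of the underlying values.
rng = np.random.default_rng(2)
true_temps = rng.normal(15.0, 5.0, size=100_000)   # hypothetical underlying temperatures
rounded = np.round(true_temps)                      # recorded to the nearest degree

print(true_temps.mean())                            # e.g. ~15.0
print(rounded.mean())                               # agrees closely with the line above
print(abs(rounded.mean() - true_temps.mean()))      # rounding error largely cancels
```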
You are full of crap. Quantization noise has nothing to do with what we are discussing. Quit googling stuff that you think might look sophisticated. I suppose you’re going to try and relate that to physical measurements made by visual inspection. Don’t try it!
Sorry, this is university material, at least in my specific case. And “quantization” is the proper term for this kind of problem with resolution (where you only have full C readings). Actually, you’ve given away how ignorant you are in these topics with your rant.
Sorry dude, I don’t know what university you went to, but temperature readings on a thermometer do not suffer from quantization NOISE. A thermometer, especially an LIG, is an analog device that is not restricted to discrete levels. The rounding that takes place is due to uncertainty in the resolution. If you want to characterize the rounding as introducing quantization noise, go right ahead. You will end up in the same place, dealing with the uncertainty of what the true measure was. What you won’t be able to do is make the individual measurements any more certain than the uncertainty interval. You can’t sample a continuous signal twice a day and expect to recover a signal at all.
Stop trying to patronize folks that have more experience in the real world than you do. I have put my time in with AT&T and Western Electric learning the first Time Division Multiplexing PBX inside and out. I learned all about quantizing error and Nyquist sampling requirements along with digitizing the analog signals and how it affects the ability to reproduce correct analog signal reproduction. I went to all the maintenance and engineering classes for Nortel DMS TDM switches. I worked with some of the first T1 carrier systems up thru T4. Clock syncs were critical with these.
It’s hard to debate people who have problems with the basics…
The “readings” are what we use, and those are restricted to discrete levels, you genius. Sorry, this is so basic I really don’t understand what your purpose is here. FYI the quantization noise model is the “traditional” treatment of quantization error, i.e. they just treat it as an additive noise to the signal (usually a time series). Under certain (and very broad) circumstances, this additive “noise” cancels itself out in the averages, and averages are what we use.
Sorry, this sentence was either messed up or it reflects a fundamental problem with your understanding. “uncertainty in the resolution” in this context doesn’t make sense.
For the hundredth time, we don’t give a shxt for individual measurements. We are concerned with the cumulative stuff. Yes, we can’t make them more certain. But the averages of measurements are extremely good approximations of the respective “true” averages. We are concerned with these. Please understand this at last.
I don’t know whether you have more experience, and frankly, I don’t care. Your knowledge is not adequate in this debate, that’s sure.
Exactly. Somehow when we say uncertainty of the mean they hear uncertainty of the individual measurements. Nobody has ever claimed on here that taking more measurements lowers the uncertainty on the individual measurements themselves. Literally nobody. What is a fact though is that taking more measurements lowers the uncertainty of the average. I’ve had to clarify this dozens of times already with the word average boldened in many cases and yet somehow this strawman will never die.
On October 22, 2021, 7:38 am, bdgwx posted:
The first sentence is what is known as a tautology. Quite simply, one cannot have an average of a single, individual measurement . . . it takes two or more separate measurements to establish an average.
And the statement “taking more measurements lowers the uncertainty of the average” is quite easy to falsify.
Example: I have two independent measurements of the same parameter of 1.5 ± 0.1 and 1.6 ± 0.1 which yields an average of 1.55 ± 0.14 (1 sigma), reportable as 1.6 ± 0.1 (1 sigma) per limitations of retaining significant figures (the uncertainty in the average of the two measurements is an RSS value, based on the assumption of a normal distribution of uncertainty values since the measurements are independent of each other). Next, I obtain a third measurement of the same parameter with a different uncertainty value, 1.7 ± 0.3. Using RSS-ing of the given independent uncertainties, (0.1^2 + 0.1^2 + 0.3^2)^0.5 = 0.332, and averaging the three individual measurements now yields a new reportable average—again appropriately retaining significant figures—for that given parameter of 1.6 ± 0.3 (1 sigma). It is straightforward to see that in this example the uncertainty has increased three-fold due to the inclusion of just one additional measurement.
He won’t understand…
I have to admit you’re a tough partner for a debate. Not the smart kind but the kind who mixes up Austria with Australia. Okay, clarification: bdgwx was referring to Gorman’s persistent assertion that averaging would reduce the uncertainty of individual measurements not (or not just) that of the average’s. That’s what his sentence is about.
We are talking about the so called Central Limit Theorem, which is a valid mathematical theorem, so you can’t “falsify” it. The nonsense in the rest of your rant is irrelevant.
The central limit theory ONLY applies to multiple measurements of the same thing – which results in random values distributed around a true value.
It does *NOT* apply to multiple individual, random measurements of different things. These kinds of measurements do *NOT* result in random values distributed around a true value. And temperature measurements taken from different thermometers are individual, random measurements of different things.
It seems like all climate scientists, including you, can’t tell the difference between the two different kinds of data sets. All you have is one hammer and try to use it on everything you see, both nails *and* screws.
Once again, if I give you ten 2″x4″x2′ boards and twenty 2″x4″x10′ boards what is the mean of that population? Is that mean a “true value” of something? Do those values represent a random distribution of values around a true value? Does the distribution even begin to approach a Gaussian?
If this kind of exercise doesn’t lead you to understanding the difference between multiple measurements of the same thing and multiple measurements of different things then you are being willfully ignorant. Which *is* what I have come to expect from you and your compatriots.
The CLT is a mathematical theorem, it’s not concerned with measurements in itself. It only requires independence and the same distribution. Of course both can be relaxed and statisticians master these things.
You didn’t even try to read the meaning in this did you? Do you really think that someone thought the CLT was only applicable to measurements? What a joke!
As to relaxing the assumptions underlying the applicability to measurements, neither can be relaxed. If you do so, you will end up with errors that don’t cancel under averaging and the “true value” will contain a systematic error in one direction or the other.
You reveal yourself as a mathematician and not an engineer. Relaxing CLT assumptions may be ok for polling or other inaccurate phenomena where relative measurement errors have little effect. Do that with physical measurements and you’ll end up with things that won’t fit together, don’t reach a common point, or won’t perform as they should. If you miss a poll by 1%, so what, you’ll probably be congratulated. Two real physical examples that illustrate the need for attention to uncertainty are the Challenger explosion and lithium battery fires caused by separators not working correctly. People die when engineers make sloppy assumptions about physical quantities. Political polls, not so much.
Hm, you looked like that… 🙂
Huh, again… No, the relaxation is about “independence” and “same distribution”, not “measurements”. And “relaxation” doesn’t mean you can simply disregard these preconditions. In mathematics, for relaxation, you have to meet certain other preconditions, you have to make certain transformations etc, but if these have been checked and accomplished, you can apply (a variant of) the theorem. You may view CLT not as a single theorem but as a family of theorems with different preconditions. This is a classic field of maths, it has a vast literature, please check the wiki page for a glimpse.
Exactly. Relaxation of CLT if done properly is not sloppiness. Furthermore, these things are done today as a matter of routine. The mathematics involved is a tool of engineers, this is not wizardry whether you understand it or not.
You can’t relax those requirements when dealing with measurements and then assume that you can either cancel random errors or reduce uncertainty. Independence and multiple measurements of the same thing are required to cancel random errors.
How do you think you can make a distribution with one single measurement of one thing? Show your math. I can show you textbook after textbook on measurements that say you can’t do what you are saying. Show a probability distribution in an image of a single measurement and show how the error cancels.
“Of course both can be relaxed and statisticians master these things.”
No, they can’t. And no, they can’t.
We have 3 measurements. They are 1.5±0.1, 1.6±0.1, and 1.7±0.3. I generated 3 corresponding true values consistent with the specified uncertainties. I compared the mean of the measured values (1.5+1.6+1.7)/3 = 1.6 with the mean of the true values. I repeated this 10000 times. The errors fell into a normal distribution with σ = 0.110 which is about 1/3 of 0.332.
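For what it’s worth, the simulation described there is easy to reproduce. A minimal sketch, assuming (as stated) errors drawn from normal distributions with the quoted sigmas:

```python
import numpy as np

rng = np.random.default_rng(0)

measured = np.array([1.5, 1.6, 1.7])   # stated values from the thread
sigma = np.array([0.1, 0.1, 0.3])      # stated 1-sigma uncertainties

n_trials = 10_000
# Draw "true" values consistent with each measurement and its uncertainty,
# then compare the mean of the measured values with the mean of the true values.
true_vals = measured + rng.normal(0.0, sigma, size=(n_trials, 3))
errors = measured.mean() - true_vals.mean(axis=1)

print(errors.std())   # ≈ 0.11, i.e. sqrt(0.1**2 + 0.1**2 + 0.3**2) / 3
```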
BFD.
How can you make this claim when you don’t even understand what uncertainty really is?
Exactly. This Gorman guy especially is very persistent in his misconceptions.
“Somehow when we say uncertainty of the mean they hear uncertainty of the individual measurements”
No, we hear uncertainty of the mean which is DIFFERENT than the standard error of the mean determined from sampling a population.
Have you done the stud wall experiment yet? No matter how precisely you calculate the mean of the random boards you collected it will *not* tell you the uncertainty of that mean. That uncertainty will be reflected when you nail plywood cut to the mean height to the stud wall. You will have gaps where the stud wall is shorter than the plywood and overhangs where it is taller. THAT is the uncertainty of the mean. If you take more samples from your random pile and build more stud walls and get more sample means in order to get a smaller standard error of the mean, it won’t help you one single iota when you go to nail the plywood to those sample stud walls.
And *that* is what professional engineers, who have PERSONAL LIABILITY associated with their projects, understand about uncertainty that you don’t.
Measuring the same thing multiple times to get a random distribution of measurements around a true value is *NOT* the same thing as measuring multiple things single times which does *NOT* provide a random distribution of measurements around a true value. In the second scenario you *have* to propagate uncertainty properly, especially if you have PERSONAL LIABILITY (meaning your financial situation as well as your reputation) involved. And you can’t artificially decrease the uncertainty by just dividing by sqrt(N). That simply does not apply in this scenario.
Temperature is MULTIPLE MEASUREMENTS OF DIFFERENT THINGS. Until you get that ingrained in your understanding then Pete forbid I should ever use a product designed by you!
“FYI the quantization noise model is the “traditional” treatment of quantization error, ie. they just treat it as an additive noise to the signal (usually a time series). Under certain (and very broad) circumstances, this additive “noise” cancels itself out in the averages, and averages are what we use.”
How does this “additive” noise cancel out? Are you assuming that the “additive noise” is represented by a random, Gaussian distribution around a “true value”?
How does that apply to measurements taken from multiple temperature stations? Do you think the difference in temperature from a thermometer in Kansas City, KS from one in Lawrence, KS represents a Gaussian distribution around a “true value” so that the difference “cancels out”?
If I am building a stud wall from a pile of 2″x4″ boards delivered by the lumber company does the difference in lengths from board to board represent “noise” that somehow cancels out so that when I build that stud wall and attach drywall to it everything will work out just fine?
“The “readings” are what we use, and those are restricted to discreet levels, you genius.”
Resolution is *not* quantization you genius.
” they just treat it as an additive noise to the signal (usually a time series). Under certain (and very broad) circumstances, this additive “noise” cancels itself out in the averages, and averages are what we use.”
You know enough about this to be dangerous. Read further. Look up heavy-tailed noise — i.e. non-Gaussian!
“For the hundredth time, we don’t give a shxt for individual measurements. We are concerned with the cumulative stuff. “
“But the averages of measurements are extremely good approximations of the respective “true” averages.”
And all you have is the hammer of multiple measurements of the same thing generating a random distribution (hopefully Gaussian) around a true value and everything you see you think is subject to being driven by that hammer.
Standard error of the mean IS NOT the same thing as the uncertainty (read accuracy) of the mean. Yet you want to ignore that so you can continue to use your hammer to pound in screws!
A journeyman carpenter or machinist knows more about uncertainty than *you* do. But you won’t accept that because you are so enamored of your hammer! Thank Pete that engine builders in NASCAR or NHRA know more about uncertainty than you do. You simply don’t take eight measurements of eight crankshaft journals, jam them together in a data set, calculate their average, and divide it by sqrt(N) to get some kind of uncertainty figure that you apply to all of the crankshaft journals. That’s a good way to wind up with a piston coming out the side of the engine block!
When you are measuring different things single times there IS NO TRUE VALUE. If I give you ten 2″x4″x2′ boards and twenty 2″x4″x10′ boards do you *really* think the mean value represents a “true value” of ANYTHING? And that is even assuming the uncertainty interval in each board is zero!
There are far more things in heaven and earth than are dreamt of in your philosophy, nyolci.
You want to use your single hammer for everything while not even comprehending that there are ballpeen hammers, roofers hammers, carpenters hammers, and that doesn’t even begin to get into blacksmith hammers (rounding, cross-peen, straight-peen, Swedish cross-peen, etc).
Individual measurements of different things are not a random distribution around a true value. What do you think that means for your assumption that all distributions are Gaussian?
No. It’s been shown (long ago) that in certain (very many) cases quantization noise more or less cancels out. This is something well known in EE, it has abundant literature from the 50s.
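As an aside for readers, here is a small numerical illustration of that claim, under the usual assumption that the quantity being measured ranges over many quantization steps; whether field temperature readings satisfy that assumption is precisely what is disputed below.

```python
import numpy as np

rng = np.random.default_rng(1)
q = 0.5   # quantization step, e.g. readings reported to the nearest half degree

# Hypothetical true values spread across many quantization levels.
true_vals = rng.uniform(10.0, 30.0, size=100_000)
readings = np.round(true_vals / q) * q   # quantize each reading to the nearest step

print(true_vals.mean(), readings.mean())   # the two means agree to ~0.001
print(np.abs(readings - true_vals).max())  # each reading is still off by up to q/2
```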
Again, a stupid example… FYI we are rarely concerned with the difference here. Gee…
You have a talent for stupid examples 🙂 How about making a long trail from the boards? It would turn out that the average length of the board times the number of boards will very closely match the total length of the trail.
Yeah, sure, that’s why they employ journeyman carpenters and machinists in engineering roles, right? Are you one of them?
“No. It’s been shown (long ago) that in certain (very many) cases quantization noise more or less cancels out. This is something well known in EE, it has abundant literature from the 50s.”
Quantization only applies when trying to represent an analog signal using a digital representation. Temperature measurements are *NOT* an analog signal that gets digitized. They are individual, random measurements of different things. There is no quantization at work. Stop using big words you don’t understand.
“Again, a stupid example… FYI we are rarely concerned with the difference here. Gee…”
It’s *NOT* a stupid example. That is an argumentative fallacy known as Argument by Dismissal. You seem to be very fond of argumentative fallacies. Answer the question. It is a simple question.
The two temperatures are what contributes to the average temperature calculation. Therefore they are *exactly* what we are concerned about!
“You have a talent for stupid examples 🙂 How about making a long trail from the boards? It would turn out that the average length of the board times the number of boards will very closely match the total length of the trail.”
And, once again, you use the Argument by Dismissal argumentative fallacy. If you think the example is stupid then *SHOW* why you think so. Show the math!
And, once again, you show you don’t understand uncertainty. If the measurements of those boards have uncertainty then the uncertainty of the total length will be related to the sum of the uncertainties of the boards. Each board represents an independent, random variable of population size 1. The variance of independent, random variables adds when you combine them. Thus your standard deviation is going to grow when you combine them. This is the exact same treatment that uncertainty gets.
The combined boards will have a wider uncertainty than each by itself. That will be reflected in the stated value +/- uncertainty for the overall length.
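A minimal sketch of that variance-addition point, with hypothetical numbers (20 boards, each with an assumed independent 1-sigma length uncertainty of 0.05 ft):

```python
import math

n_boards = 20
u_each = 0.05   # assumed 1-sigma uncertainty per board, in feet

# Variances of independent quantities add, so the uncertainty of the summed
# length grows as sqrt(N) times the per-board uncertainty.
u_total = math.sqrt(n_boards) * u_each
print(u_total)  # ≈ 0.22 ft, wider than any single board's uncertainty
```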
Think about it. I am a professional engineer designing a bridge to span a certain gap. I have a worker measure all the girders I have delivered for the project and calculate an average from them. I start the construction team joining girders to build a span length to bridge the gap.
According to *your* logic I can use the average value to determine exactly how many girders and fish plates will be needed for the job.
But lo and behold! The span winds up 1′ short of reaching the piling on the far end! Who is going to be responsible for not taking into account the total uncertainty accrued from the uncertainty contribution of each girder?
If you don’t like that example then try using an example of building a support girder for a house to span from one foundation wall to another in your brand new ranch house with a basement! It won’t matter how precisely you calculate the mean value when the uncertainty propagation doesn’t go your way and you wind up 4″ short of spanning the entire length!
“Yeah, sure, that’s why they employ journeyman carpenters and machinists in engineering roles, right? Are you one of them?”
Do you think the racing team for an NHRA dragster only employs engineers to build their engines? My guess is that you’ve *NEVER* rebuilt an engine. My guess is that your actual experience in real world applications is essentially nil. Engineers don’t measure crankshaft journals to determine if they have to be ground down and the bearings resized. Engineers don’t measure cylinders to see if they have to be ground out and larger pistons installed. Mechanics and machinists do that! Do you think architectural engineers come out to every building site to build girders? The AE’s only specify what the girder has to be made of. It’s the journeyman carpenters that actually take the supplies and build it to meet specifications (e.g. 3 2″x6″ glued together). Do you even have a clue as to how that construction must be done when none of the 2″x6″ boards will span the entire length? Journeyman carpenters do!
Again, you live in a fantasy world where you think the one type of hammer your instructors gave you will apply in all situations.
How the heck did you arrive to this conclusion? No, and I never claimed anything like this. You were fighting your own straw-man all along.
tim: “you are conflating quantization noise with uncertainty.”
nyolci: “How the heck did you arrive to this conclusion? No, and I never claimed anything like this. You were fighting your own straw-man all along.”
Is someone else using your handle?
nyolci: “The “readings” are what we use, and those are restricted to discreet levels, you genius. Sorry, this is so basic I really don’t understand what your purpose is here. FYI the quantization noise model is the “traditional” treatment of quantization error, ie. they just treat it as an additive noise to the signal (usually a time series). Under certain (and very broad) circumstances, this additive “noise” cancels itself out in the averages, and averages are what we use.” (bolding mine, tpg)
Either someone hijacked your name or you have a memory problem! Which is it?
Yep, that was the answer to this gem of yours:
In other words, I pointed out that thermometer readings are quantized. Because “discrete levels” in output is the definition of quantization. And any quantized quantity does suffer from quantization noise, just to mention the self-contradiction in your paragraph. See? I didn’t claim (and FYI never thought) they were the same as uncertainty. I didn’t talk about uncertainty.
Okay, I understand how you have misunderstood this. I mentioned the EE technical term “quantization noise”. This must’ve confused you ‘cos it contains the word “noise”. You thought (again: wrongly) that I equated this “noise” with uncertainty. No. Quantization is a factor further distorting readings. I only said that under certain circumstances, it simply “cancels out” in the averages (gives you very little distortion). You can think as if you’re using the actual real values. Ie. this is a further contributing factor to the various errors that eventually gives you uncertainty. Quantization is a complicated topic though, and one should be very careful.
You still don’t have a clue. A thermometer reading is not quantized in terms of making it fit into predetermined values. Readings may be rounded to the nearest increment marking on a device that has been calibrated.
The nearest example of quantizing temperatures is the choice of °F or °C. In other words, to divide the interval between freezing and boiling into a given number of increments.
Show a reference that declares resolution and reading an analog device should be treated as quantization errors and treated mathematically the same. I’ll wait patiently.
+10,000
The fundamental problem there is in believing that Berkeley Earth communicates truth.
You just will not listen to folks who have dealt with measurements in the real world and have had to make things actually work will you? The numbers you are throwing out are not only wrongly calculated as I showed in a comment above, they do not address measurement uncertainty in the values used. They simply treat measurements as numbers on a number line that can be divided into ever and ever smaller pieces with no concern as to their meaning.
A machinist can not take a caliper with a resolution of 1/10th of an inch, measure something (or in the case of temps, many different things) multiple times, find an average and an SD and divide by the number of measurements to get a really accurate number like 0.1 / 1000 = 0.0001. He will end up making parts that don’t fit when you need precision in an actual product of 1/1000th of an inch.
Where do you think all these rules for significant digits and uncertainty came from? They came from the real world and builders, scientists, machinists trying to develop systems where everyone could rely on what was being portrayed when things were manufactured.
Yep, this is the case when quantization doesn’t behave as noise and it won’t cancel out. So this is a bad example.
Have you ever seen a caliper that uses quantization to determine a measurement?
Have you *ever* even used a caliper?
The reading of the caliper depends on the force with which the measurement heads are applied to the measurand. Can *YOU* get that force the same every time you use the caliper to make a measurement? Be it of the same thing or different things? How do you make sure you are applying the same force each time?
The reading of the caliper depends on the measurement heads being 180deg apart when measuring something like a crankshaft journal. Can you make sure you are at the 180deg points on a crankshaft journal for each measurement? How do you do that?
Neither of these is quantization. They are random variables in the measurement protocol. They introduce uncertainty.
What if the measurement heads wear just a little each time you make a measurement? Do you think you will get a random Gaussian distribution around a true value with multiple measurements over time?
How many practical situations can *you* think of where the measurement distribution may not represent “noise”? How about temperature measurements from thousands of temperature stations? Do you think the variations in those measurements represent “noise” that somehow “cancels out” when you jam them together into a data set?
Have you ever seen a caliper reading that is not a quantity limited to x digits, you genius but a Real value (Real as in mathematics)? ‘Cos this is quantization. You don’t report a Real (in the mathematical sense), you report a fixed point number (well, there are tricky quantizers that may have better resolutions, so more digits, in certain intervals but this is beside the point here. Hey, you’re supposed to have been in telephony! A-law and Mu-law are such encodings that have better resolution in desirable ranges! You must know this!).
Or perhaps you don’t know what quantization means? What the hell is your persistent problem with these extremely simple examples?
Good god, you really don’t know what quantization means… I don’t know what your misconception is but perhaps you mix up quantization with Quantum Mechanics. No, this is something very different.
“Have you ever seen a caliper reading that is not a quantity limited to x digits, you genius but a Real value (Real as in mathematics)? ‘Cos this is quantization. “
Uncertainty is *NOT* quantization. Resolution is not quantization.
Quantization is taking an analog measurement and trying to represent it in digital format. When I read an ac voltage on an analog oscilloscope the value I read is based on the resolution of the grid on the screen. The bigger the vacuum tube the larger the grid can be and the higher the resolution can be. Quantization is when I use a digital oscilloscope to sample that analog signal and try to represent it with 0’s and 1’s. If I don’t sample above the Nyquist limit then I get all kinds of artifacts which don’t exist on the analog signal. If I do sample above the Nyquist limit then I *still* only get an approximation of the actual analog signal. That analog signal may have higher order harmonics that will require *extremely* high sampling rates to resolve using 0’s and 1’s. And I will *still* have only an approximation of the analog signal!
If that ac signal is a square wave it will be made up of an infinite number of odd harmonics. How do you digitize such a signal and encompass all the odd harmonics? If the signal is strong enough I can apply it directly to the plates of the vacuum tube display (so I bypass the bandwidth limit of the amplifiers in the oscilloscope) and get a direct representation of the analog signal in all its glory. The only limitation is the response time of the electron beam in the vacuum tube. I can’t do that with a digitizing oscilloscope. This used to be done all the time with higher powered RF amplifiers.
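For reference, the textbook expansion behind the “infinite number of odd harmonics” remark, for an ideal square wave of amplitude A and angular frequency ω:

$$x(t) = \frac{4A}{\pi}\sum_{k=1,3,5,\dots}\frac{\sin(k\omega t)}{k}$$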
My guess is that you don’t even know why quantization noise sometimes cancels. You’ve just read that it does.
“You don’t report a Real (in the mathematical sense), you report a fixed point number “
I report a Stated Value with an uncertainty interval. Have you *ever* read the manual for a digital voltmeter? Do you know that (in the good ones at least) they include an uncertainty specification as well? Does a digital voltmeter report a Real (in the mathematical sense) number? How can it if it has a built in uncertainty because of the digitization process?
“A-law and Mu-law are such encodings that have better resolution in desirable ranges! You must know this!).”
ROFL!! These are WEIGHTINGS and COMPANDING applied to analog signals to better represent what the human ear hears or to increase signal-to-noise ratio! They are filters, if you will. And they existed LONG before digital techniques were developed (A-law weighting was developed in the early 30’s). They were implemented using standard passive components. The A-law cuts off lower and upper frequencies that the human ear typically cannot hear. Mu-law is a companding algorithm to raise the average power of an analog signal. This was implemented using non-linear analog amplifiers long before digital techniques were used!
My guess is that you don’t even know that for mu-law companding in the networks of today the analog signal is *still* run through a non-linear analog compressor before being “quantized”.
The truly *sad* thing about this is that almost all sites on the internet discussing these techniques only mention digital techniques.
You are trying to teach grandpa how to suck eggs. Stop it.
“Good god, you really don’t know what quantization means… I don’t know what your misconception is but perhaps you mix up quantization with Quantum Mechanics. No, this is something very different.”
Sorry, I *KNOW* what quantization is. Do *you* have a clue as to what Nyquist Law is?
And I’ve never claimed it is. I can’t understand how you could misunderstand this whole debate in this stupid way.
Yep, and I’ve never claimed that. Quantization is the mapping of a real value to discrete levels (essentially numbers). Resolution is concerned with where these levels are.
And you guessed wrong (as always).
“And I’ve never claimed it is. I can’t understand how you could misunderstand this whole debate in this stupid way.”
Did you forget that *everything* you say is saved in the thread?
““The “readings” are what we use, and those are restricted to discreet levels, you genius.”
*YOU* made the assertion. Live with it. Don’t lie about it.
“Yep, and I’ve never claimed that. Quantization is the mapping of a real value to discrete levels (essentially numbers). Resolution is concerned with where these levels are.”
You just turned around and claimed the same thing over again!
Measurement resolution has nothing to do with quantization!
For my Tektronix CDM-250 digital multimeter:
2v scale
Resolution 1mv
Accuracy: +/- 5% of reading + 1digit
The resolution is based on what it displays, not on the quantization. It’s a 4 digit multimeter and can’t display finer than 1mv in the 2v scale (e.g. it can display 1.252v but not 1.2525v). There aren’t enough digits on the display. The quantization, however, is the same for all meter scales. If it wasn’t it couldn’t have a resolution of 100uv on the 200mv scale.
The uncertainty (i.e. accuracy) is the same for all scales, as shown above.
Resolution is what you can READ, not what the quantization is. They *are* two different things!
On an LIG thermometer, resolution is what you can READ, it is not quantization. Surely you have seen different length LIG thermometers. Some are only 4″ long and some are 12″ long. You can have more “tics” on the longer thermometer and thus it has higher resolution than the 4″ thermometer. But that is not quantization.
You are apparently a child of the digital age and, like your statistical hammer, you see everything as a digital hammer. Analog meters seem to be outside the box of thinking that you are trapped in. And you don’t seem to realize that even inside the digital box you are trapped in resolution *still* has nothing to do with quantization – it has to do with what you can READ from the meter! I.e. how many digits the display can give you!
P.S. if you hook the voltmeter’s positive lead to the negative voltage and the negative lead to the positive voltage you don’t even get 1mv resolution – one digit is used up in showing the – (minus) sign! But again, that has nothing to do with quantization, it has to do with RESOLUTION!
“And you guessed wrong (as always).”
If you *did* know that you wouldn’t have brought up u-law companding as a quantization example. It wasn’t one as recently as 2000, although things may have changed over the past two decades.
See above why this is a misunderstanding on your side.
Again, show us an accepted reference that treats measurement error and uncertainty in a similar fashion as quantization errors.
A-law and mu-law encodings that have better resolution is what you said. I suspect you never even realized that there were analog uses of these laws back in the day of N and ON analog carrier systems or on L-band multiplexed SSB radio for long distance telephony circuits. I told you before that you are dealing with folks here who have done lots of things involving measurements in the real world for a long time. Don’t assume you know more than anyone else. You would do better to ask questions first.
No. I said they had better resolution in certain ranges. An overall better resolution would need more digits (more levels).
Sorry, but I don’t think I have problems with “realization”.
You are still talking in terms of digital encoding of a continuous waveform. Companding an analog signal both compresses the amplitude and limits the bandwidth. This was done long before digital encoding was even thought about. That is where the A-law and mu-law algorithms originated.
You probably have no idea what the capacity of twisted pair wires is per mile for wired telephones or how you load it with inductors (load coils @44mh or 88mh) to help resolve bandwidth problems.
Companding came about when analog (vs digital) carrier systems came into being. It was necessary to have distinct bands so when you modulate a carrier with up to 12 individual voice signals they didn’t overlap and cause crosstalk between the channels. There are also other engineering reasons that you probably have no idea about.
In case you need some history see this:
https://en.m.wikipedia.org/wiki/Carrier_system
Read this and note the date and the use of compandors:
https://onlinelibrary.wiley.com/doi/abs/10.1002/j.1538-7305.1951.tb01364.x
“Yep, this is the case when quantization doesn’t behave as noise and it won’t cancel out. So this is a bad example.”
How is that caliper any different than thermometers?
Watch this video on uncertainty and what engineers must know in the UK.
https://youtu.be/ul3e-HXAeZA
I have never, ever seen quantization “noise” used as a way to reduce either errors or uncertainty in measurements. Not even when using averaging.
Please provide some kind of a reference to what you are trying to do using quantization error treatment as a way to cancel measurement errors or uncertainty.
Anyone who states that we knew the temperature of the entire planet to within a tenth of a degree 70 years ago, much less 140 years ago, is either a liar or a deluded fool. Which one do you qualify as, bdgwx?
Between the many problems with the ground based system and the fact that we had between 3 and 5 orders of magnitude too few sensors, such accurate claims are physically impossible.
Berkeley Earth shows about ±0.2 and ±0.1 C for 1880 and 1950 for monthly anomalies. For annual anomalies it is about ±0.12 and ±0.08 C respectively. The other datasets like GISTEMP are consistent with these figures. See Lenssen et al. 2016 and Rhode et al. 2013 for details.
Liar.
Just because BEST violated all the laws of statistics to generate their data, is not evidence that such nonsense is the proper way to handle data.
Do you know what method BEST used to estimate the uncertainties? How does their method compare to the GISTEMP method?
I’m shocked 🙂 How do you know this?
As I posted above, the fundamental problem is in believing that Berkeley Earth communicates truth.
There is no scientific or mathematical support that geographically disperse temperatures measured in 1880, individually or as an ensemble average, had a precision—let alone accuracy—of ±0.2 °C.
But thanks for the good laugh.
There is literally scientific and mathematical support for exactly that. See Lenssen 2019 and Rhode 2013 as examples. I think what you actually meant to say is that you don’t accept it. And that would be perfectly fine. And if you want to convince others of your position you would present evidence with an uncertainty estimate that you feel is more appropriate. We’ll double check it for egregious mistakes and if there are none then we’ll equally weight that estimate with all the others and see how things play out.
bdx — the world’s foremost expert on absolutely every subject.
Hardly. You’ve probably noticed that I post references to publications not authored by me, but from those smarter than I. I’m certainly no expert here.
“Hide The Decline”, as regurgitated by bwx.
In the above article there is this quote from former senior design editor for the National Geographic and co-author of three critically acclaimed books of maps and graphics Oliver Uberti:
In response, there are these facts:
1) From NOAA: “A seven-day forecast can accurately predict the weather about 80 percent of the time and a five-day forecast can accurately predict the weather approximately 90 percent of the time. However, a 10-day—or longer—forecast is only right about half the time.”
—source: https://scijinks.gov/forecast-reliability/#:~:text=The%20Short%20Answer%3A,90%20percent%20of%20the%20time.&text=Since%20we%20can't%20collect,assumptions%20to%20predict%20future%20weather. (my bold emphasis added)
2) Both NOAA and NASA define “climate” to be weather over a specified geographic area of Earth averaged over an interval of 30 years or longer.
— NOAA source: https://www.ncdc.noaa.gov/news/defining-climate-normals-new-ways
— NASA source: https://climatekids.nasa.gov/menu/weather-and-climate/
Mr. Uberti wants people to believe the ability to forecast weather out to 10 days (with a 50% probability of being correct) is somehow the same as being able to forecast climate at least 30 years into the future (again, with a 50% chance of being correct). UNBELIEVABLY SOPHOMORIC reasoning!
Obviously, Mr. Uberti should have stayed focused on what he is knowledgeable about: being a design editor and expert on maps and graphic design.
I wonder if Mr. Uberti was involved in disappearing Tibet from the National Geographic maps? The National Geographic dishonestly shows Tibet as part of China.
Two entirely different animals.
Weather forecasts take today’s conditions and then forecast those conditions forward in time. The most important thing that is needed for an accurate weather forecast is lots of accurate current data.
Climate forecasts take a set of conditions, and then try to guess what average conditions would be like given those conditions. Current data is utterly irrelevant to climate models. They concentrate on trying to understand the physics of weather and how all the elements interact with each other whenever there is a change in one of those elements.
Atmosphere
Hydrosphere
Cryosphere
Lithosphere
Biosphere
All 5 of these “spheres” interact with each other, and unless you completely understand all of those interactions, your ability to forecast climate is non-existent. (Some want to include the sun in this list, and while the sun is definitely an input, as its output will impact the 5 spheres, none of the 5 spheres is capable of influencing the sun.)
Sorry MarkW, there are issues with your statements:
1) “Two entirely different animals” — actually no, both involve “forecasting” and the key is understanding what that word means. Science-based “climate forecasts” DO NOT “try to guess” what average conditions will be . . . instead they are based on mathematical computations of various complex interrelationships (aka forcings and feedbacks) of key parameters to predict how climate will evolve in the future based on conditions (the climate state) that currently exist . . . not that existed 1000 or even 200 years ago. This is pretty much the same process, albeit over much longer timespans, as is used for weekly and monthly weather forecasting using supercomputers. As always, there is the ever-present danger of “garbage in, garbage out” when using complex computer programming to perform forecasting . . . as the IPCC continuously demonstrates par excellence!
2) “The most important thing that is needed for an accurate weather forecast is lots of accurate current data.” Come on, do you really believe that, say, infinite knowledge of weather on the day an Atlantic hurricane matures to Category 4 would thus enable an accurate forecast for the location and strength of that same hurricane, say, five days later?
3) “. . . none of the 5 spheres is capable of influencing the sun.” This statement only serves to reveal your oversimplification in restricting the understanding of “the physics of weather” (your words) to the five “spheres” that you listed. For all intents and purposes solar radiation energy reaching Earth TOA is the only source of energy that drives both weather and climate. Variation in such energy is not, and never has been, due to anything on Earth driving processes occurring on the Sun . . . instead solar radiation variation at Earth TOA is a direct function of radiation energy having a (Sun-Earth distance)^-2 scaling law and Earth’s changing orbital ephemeris over time scales of years (i.e., the seasons) all the way to hundreds of thousands of years (ref: Milankovitch cycles).
I don’t agree with you totally.
“This is pretty much the same process, albeit over much longer timespans, as is used for weekly and monthly weather forecasting using supercomputers”
“Come on, do you really believe that, say, infinite knowledge of weather on the day an Atlantic hurricane matures to Level 4 would thus enable an accurate forecast for the location and strength of that same hurricane, say, five days later?”
If you can’t get accurate forecasts 5 days ahead from current initial conditions then the climate models working over a century have almost no chance of putting out a correct forecast. I suppose the probability is not exactly zero but as they say, it’s close enough to zero so as to be indistinguishable from zero.
Future temperature predictions using global climate models claim to know average temperatures to within a degree or even a fraction of a degree, but the models cannot even agree on the actual average real temperature today (not anomalies).
However, even short-term weather forecasting rarely gets time-stamped local temperatures correct within one degree, even just one day in advance. Ten days in advance, precise forecasts are likely little better than a coin toss. Most of the time, forecasts are “close enough” and we don’t really notice or care. I recall from my synoptic meteorology forecasting class taken in the summer term in Texas, the professor noted that one can forecast the same temperatures, wind and rain chances every day for a location in Texas in July and be right 95% of the time. That would make everyone happy, unless it missed forecasting that rare hailstorm or tornado. Such failures have consequences. Forecasting 100, 10 or even 1 year out is a novelty and a fool’s errand, and any “climatologist” or computer-gamer who claims to know is a charlatan and a fake, no better than a 19th century snake oil salesman. To take such claims seriously and make major policy decisions on that basis is fraudulent.
As some in the UN and the climate movement have openly said, this is NOT about climate.
My grandpa always said, look out the window and forecast tomorrow to be the same. You’ll be correct at least 50% of the time!
The king and queen wanted to go fishing, and the king asked the royal weather forecaster for the forecast for the next few hours. The palace meteorologist assured him that there was no chance of rain. So the King and the Queen went fishing. On the way, he met a man with a fishing pole riding on a donkey, and he asked the man if the fish were biting. The fisherman said, “Your Majesty, you should return to the palace! In just a short time I expect a huge rainstorm.” The King replied: “I hold the palace meteorologist in high regard. He is an educated and experienced professional. Besides, I pay him very high wages. He gave me a very different forecast. I trust him.” So the King continued on his way. However, in a short time, torrential rain fell from the sky. The King and Queen were totally soaked. Furious, the King returned to the palace and gave the order to fire the meteorologist. Then he summoned the fisherman and offered him the prestigious position of royal forecaster. The fisherman said, “Your Majesty, I do not know anything about forecasting. I obtain my information from my donkey. If I see my donkey’s ears drooping, it means with certainty that it will rain.” So the king hired the donkey. And so began the practice of hiring dumb asses to work in influential positions of government. And thus, the symbol of the Democrat party was born. The practice is unbroken to this day.
I could be wrong, but I recall “forecasting” is a profession that has scientific/mathematical standards and “climate science” provides a “projection” because it can’t meet the standards of forecasting.
But the same issues arise from the IPCC scientists saying what the science shows, and what it doesn’t, while the writers/PR folks write the articles the media uses.
I’d have to see the context under which “forecasting” and “projection” were used to know for sure, but I suspect it may have to do with the fact that climate predictions are scenario based whereas weather predictions are not. Climate predictions are scenario based because they depend on assumptions regarding future human, solar, volcanic, etc. behavior that are provided as inputs. Climate predictions are basically a if-this-then-that type of prediction. I don’t have an issue with using either the word forecast or projection to describe these scenario based predictions myself. I do accept that others might and that’s okay too.
There is no climate prediction.
Oh they definitely exist. IPCC used Representative Concentration Pathways (RCP) out through 2100 for several years and just recently switched to Shared Socioeconomic Pathways (SSP) for AR6 and again through 2100. The earliest climate prediction I’m aware of comes from Arrhenius in the 1890’s in which he predicted the Earth would warm/cool when atmospheric CO2 concentration increased/decreased. Climate predictions definitely exist.
Climate lies.
It is not possible to make deterministic, definitive predictions of how … use of the SRES scenarios (IPCC, 2000) developed using a sequential.
Source:
Chicken entrails.
Unfortunately all of the predictions are based on scenarios that are far warmer than reality. Given that their premise is false, then the scenarios will never be of any use to us at all, will they? Why aren’t there any scenarios that are based in reality and give a feasible projection of the future climate?
As he seems to confuse data with computer model outputs he is not qualified to comment. It is obvious what the difference is between validated short term weather forecasts where accuracy and error margins are known from experience, and 50 year climate forecasts where there has been almost no validation, but what exists has routinely and repeatedly invalidated the models.
CMIP5 reasonably reproduces the overall trend. The BEST trend is +0.087 C/decade vs the CMIP5 trend of +0.079 C/decade over the period 1880 to 2020. The RMSE on a monthly and 13-month centered mean basis is 0.165 C and 0.116 C respectively. So while model predictions are not perfect I don’t think it is fair to say our models are invalidated either. My data sources are the KNMI Explorer here and the Berkeley Earth dataset here. If someone knows of another model that they feel may be more skillful than CMIP I’d be happy to plug the data into my spreadsheet and compare it to observations like I’ve done with CMIP.
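For readers who want to check numbers like these themselves, a minimal sketch of how such a trend and RMSE comparison is typically computed; the series below are synthetic placeholders, not the actual BEST or CMIP5 data.

```python
import numpy as np

# Synthetic stand-ins for monthly anomaly series covering 1880-2020 (1,692 months);
# replace 'obs' and 'model' with the real observational and model-mean series.
months = np.arange(1692)
obs = 0.087 / 120 * months + np.random.default_rng(2).normal(0, 0.1, 1692)
model = 0.079 / 120 * months + np.random.default_rng(3).normal(0, 0.1, 1692)

obs_trend = np.polyfit(months, obs, 1)[0] * 120      # least-squares slope, C/decade
model_trend = np.polyfit(months, model, 1)[0] * 120
rmse = np.sqrt(np.mean((model - obs) ** 2))

print(obs_trend, model_trend, rmse)
```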

For the majority of your graph it was not forecasting at all but curve fitting to known data. A totally different argument. In addition you chose to ignore all the other data sets on global temperature that actually have broad coverage and minimal in-filling which show the CMIP5 models, with few exceptions, consistently over-predicting warming. They fail in predicting a tropical hotspot that doesn’t exist in observations. They fail in almost every regional prediction of weather phenomena. They can’t get precipitation, drought, atmospheric humidity, or the rate of warming correct over the forecast period. They can’t predict the major drivers of global temperature over medium to long term such as ocean cycles, cloud cover, rain bands, albedo. I don’t buy your weak arguments.
This isn’t a curve fit. It is the global mean temperature as computed from the global circulation model grids. I don’t expect you buy what I presented no questions asked. In fact I encourage everyone including you to download the data and see if you can replicate it yourself. I also invite anyone to present a model that is more skillful. Show me where the monthly data is and I’ll be more than happy to include it in my spreadsheet.
BTW…can you provide an objective definition for “failed” so that we can independently assess whether CMIP5 has “failed” or not? What I’m looking for is a statement like “if a model has a warming trend that deviates by more than 0.02 C/decade over the period 1880-2020 then that model is said to have failed” or something like that.
“global mean temperature” is a meaningless number that has nothing to do with climate.
“global circulation model grids” are made-up, fake data.
Berkeley Earth? 😀
No further comment 😀
If you have another global mean temperature timeseries you want me to use instead I’ll be happy to add it the graph.
JGU – Jan Esper

Source:
Just saw, it’s NH only
But you may take that
The Esper et al. 2012 publication is for northern Scandinavia only. It’s not even a NH reconstruction. However, this data and others provided by Esper et al. are included in the PAGES2K database which is consistent with BEST. And I’m aware of the Loehle publication. Everyone is. We’re also aware of the egregious mistakes that would have certainly been caught if proper peer review had occurred. These mistakes are so bad that Loehle thought “before present” in the context of proxy data meant 2000 when it actually means 1950. Many of his timeseries (there were only 18 of the several hundred that were actually available) are shifted forward by 50 years. You’ll have to manually add the 1C of warming that has occurred since 1950 as a result.
By using dozens of parameters, the output of climate models is made to somewhat look like the highly adjusted data.
I’m sure that impresses you.
Making models match reality is definitely the goal within all disciplines of science including climate science. I’d be disappointed if any scientist were not trying to do this.
Please see my post above for the reasons why they will never be successful.
Once again, bdgwx is not smart enough to realize that his response has nothing to do with my post.
Yes, model makers want to make their models as accurate as possible.
There are two ways to do this.
1)You can make your model accurately reflect the underlying physics.
2) You can throw in so many tunable parameters that you can make your model output anything you want to see.
Unfortunately the general circulation models all took the second route.
It is a combination of both. This is similar to the standard model of physics which has about 20 free parameters that must be tuned by experimentation. That doesn’t mean the standard model does not reflect the underlying physics. It is the same way with climate models. They are built to model the underlying physics.
More lies.
Any discussion of “saving lives” is propaganda. Lives are never saved, they are always temporary. Lives may be shortened or lengthened by our actions or by anything nature chooses to do at its own whim. “Saving lives” is a political crutch to stop anyone questioning the underlying facts and logic.
“Trust in meteorology has saved lives. The same is possible for climate science.”
Meteorologists are actually right once in a while. Climate science has yet to be right once. Why should I trust someone who has never once been right? And, if that wasn’t bad enough, they usually aren’t just wrong, they are wrong by a lot. While I don’t know how accurate the Farmers Almanac really is, I do know that it is far more accurate than the 30 year history of the IPCC. In reality, a random pick would be far more accurate because the climate change predictions are currently 0-for-everything.
There is a HUGE difference in the methods used to forecast weather over the next 5 to 10 days and the methods used to model climate change over the next 50 years or more.
The circulation of air and water (liquid or vapor) in the atmosphere is a turbulent and chaotic system, and weather forecasting computer models divide the atmosphere into a set of volume elements with set limits on their latitude, longitude, and altitude, then attempt to solve differential equations across these volume elements to predict the transfer of mass, momentum, and energy into and out of each cell, based on “initial conditions” observed shortly before the model is run.
Since the dimensions of each volume element are much smaller than the distance between weather stations, these “initial conditions” have to be interpolated, either linearly or using some other equation that roughly fits the intervening terrain. This introduces errors even at the starting time of the model. As the calculation progresses, the model only keeps track of “average” values of temperature, pressure, humidity, and wind speed in each volume element, and fluxes across the boundaries between volume elements. The actual profile within each volume cell is unknown.
A short-term weather-forecasting model can minimize the effects of accumulating errors by using relatively small volume elements, and relatively short time steps. If it is only desired to forecast the weather for a particular region, the domain of the model (volume modeled) can be restricted to an area of 1,000 to 2,000 km on a side, with the assumption that any air flow across the boundaries of this area would not affect the weather near the center of the region for several days.
When the actual weather is compared to that predicted by these short-term models, the models prove to be accurate to about 3 or 4 days, then the predictions start to diverge from reality about 5 to 6 days after the model is run, and predictions out to 10 days are hit-or-miss.
Long-term “climate models” use the same principles as the short-term weather models, but due to the necessity of modeling the entire globe over a period of years instead of days, the time steps need to be longer, and the volume elements need to be much larger, than in the short-term weather models, in order to achieve reasonable computation times.
These larger volume elements and longer time steps introduce much larger errors in the interpolation of initial conditions, and also more “smoothing” errors by trying to linearize a profile of temperature, pressure, humidity, and/or wind speed across a volume element 100 km or more wide. Also, air flowing from areas where there are few (if any) weather stations (such as oceans, the polar regions, or sparsely populated land areas such as deserts or Siberia) can introduce further errors into the predictions.
The other major problem with long-term climate models is assumptions made about radiation, either incoming radiation from the sun or infrared radiation that may be absorbed by water vapor or CO2. A radiation imbalance of 1 W/m2 can result in a temperature difference of about 0.2 degrees C in the equilibrium Stefan-Boltzmann temperature over the entire globe, but the overall effect of clouds on absorbed and reflected radiation is poorly understood, and radiation parameters (albedo, emissivity, etc.) input to climate models can have large errors.
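A quick back-of-the-envelope check on that "1 W/m2 ≈ 0.2 C" figure, using the derivative of the Stefan-Boltzmann law at an assumed representative temperature (a sketch, not a radiative-transfer calculation):

```python
# dT = dF / (4 * sigma * T**3), the linearized Stefan-Boltzmann sensitivity.
sigma = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T = 288.0          # assumed representative emission temperature, K
dF = 1.0           # radiation imbalance, W m^-2

dT = dF / (4 * sigma * T**3)
print(dT)          # ≈ 0.18 K, consistent with the "about 0.2 C" quoted above
```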
In a short-term weather-forecasting model over 5 days or so, most people will not be upset if the actual daily maximum or minimum temperature is 0.2 C higher or lower than what was predicted, and might not even notice, so that such models are considered reasonably accurate.
I take the temperature from Davis-Monthan AFB – I assume that the instrument is somewhere near to the control tower, which is about two miles from my house.
Their reading is normally around one or two degrees (Fahrenheit) off from the forecast – and three to four off from my back yard.
EDIT: Maybe more… They show 68 degrees right now, but my back yard has 73 – five degree difference. (Mine is on the east side, so is typically warmer in the morning, and cooler in the later afternoon. They usually do agree just before dawn.)
Yes, all true, but the essential thing is, it’s chaotic. It doesn’t matter how fine your grid is, or how short your time steps are, if a system is chaotic, it’s inherently unpredictable because the errors increase exponentially. OK, some systems are apparently predictable because of their dynamics, like planetary orbits, so they appear predictable for millions of years, but that is because human lifetimes are so short.
Meteorological models are updated with new data every 2 hours or so. Were they not, they’d quickly depart from meaningful predictions of weather.
Climate models are not and can not be updated with future climate data. Ever, obviously. There is no ground for comparison of climate model output with the output of a meteorological model. The extrapolations are entirely different animals.
Oliver Uberti is completely misguided in proposing their equivalence.
Climate models have no predictive value.
Pat, using your methodology what is the probability that CMIP5 would produce a trend that is within ±0.01 C/decade of the observed trend over a 140 year period?
Given the fact that the first 100 years of that 140 year period is little better than made up, not much.
Just a little bit ago, you were making the absurd claim that the data for 100 years ago was only accurate to 0.25 to 0.35C. Now you are claiming the models match that data to within 0.01C.
Just how badly do you want to discredit yourself?
“Just a little bit ago, you were making the absurd claim that the data for 100 years ago was only accurate to 0.25 to 0.35C”
I never said it was only accurate to 0.25 to 0.35C. I actually said this. On an annual basis it is ±0.15 and ±0.10 C for periods around 1880 and 1940 respectively. Berkeley Earth shows about ±0.2 and ±0.1 C for 1880 and 1950 for monthly anomalies. For annual anomalies it is about ±0.12 and ±0.08 C respectively. See Lenssen et al. 2016 and Rhode et al. 2013 for details. 100 years ago was 1920 where published uncertainties are about ±0.15 and ±0.09 C for monthly and annual anomalies respectively. So it’s actually better than the 0.25 to 0.35C range you gave above.
“Now you are claiming the models match that data to within 0.01C.”
I never said models match that data to within 0.01C. I actually said this. The BEST trend is +0.087 C/decade vs the CMIP5 trend of +0.079 C/decade over the period 1880 to 2020. The RMSE on a monthly and 13-month centered mean basis is 0.165 C and 0.116 C respectively.
“On an annual basis it is ±0.15 and +0.10 C for periods around 1880 and 1940 respectively.”
They all ignore systematic measurement error. (890 kb pdf)
Also here.
These people – UKMet/CRU, GISS, Berkeley Earth — all behave as though they have never made a measurement or struggled with an instrument.
Even the resolution of the historical LiG thermometers is no better than ±0.25 C.
Their work is hopelessly incompetent.
“They all ignore systematic measurement error.“
No they don’t. You can argue that they don’t handle systematic error adequately. But you can’t argue that they ignore the issue because they don’t.
“Even the resolution of the historical LiG thermometers is no better than ±0.25 C.”
Yeah, at best. Fortunately this kind of error is mostly random and where it isn’t, anomaly analysis will cancel it out. And while we’re on the topic did you know that LiG-based stations’ Tmax is 0.57 C lower and Tmin 0.35 C higher as compared to MMTS stations (Hubbard 2006)? That is an example of real systematic bias that is not ignored. Other biases that are not ignored are the TOB carryover and TOB drift bias (Vose 2003).
“These people – UKMet/CRU, GISS, Berkeley Earth — all behave as though they have never made a measurement or struggled with an instrument.”
“Their work is hopelessly incompetent.”
Your hubris here is astonishing.
BTW…I tested your hypothesis that the lower limit on global mean temperature uncertainty is ±0.46 C. I did this by comparing GISTEMP, BEST, HadCRUTv5, and ERA with each other from 1979 to present. If your hypothesis were correct then we should see a disagreement between them at σ = 0.65 C. Instead the actual disagreement is σ = 0.053 C implying the uncertainty of each is σ = 0.037 C which is consistent with published uncertainty estimates provided by these datasets and significantly less than σ = 0.46 C.
Hockey pucks—without this assumption, your entire house of cards falls.
Projection.
Wrong in every particular, bdgwx. None of them deal with systematic measurement error. They pass it off.
Instrumental resolution is not error. It is not random. It is the limit of detection sensitivity. Resolution is clearly a foreign concept to you.
Finally, your discussion about the ±0.46 uncertainty just shows you don’t know the difference between physical error and calibration uncertainty.
You display no knowledge at all of any of it.
“None of them deal with systematic measurement error.”
Lenssen 2019, Rhode 2013, Quayle 1991, Hubbard 2006, Vose 2003, Menne 2009, etc.
“Instrumental resolution is not error.”
Yes it is. If the instrument is only capable of reporting to 1 decimal place then the error is rectangular at ±0.5. That is in addition to any accuracy and precision error that it may have as well.
“It is not random”
Yes it is. Well, unless you want to argue that the instrument prefers rounding in one direction or another which would be very unlikely.
“Finally, your discussion about the ±0.46 uncertainty just shows you don’t know the difference between physical error and calibration uncertainty.”
I do know the difference between physical error and calibration uncertainty. And I know that if two measurements of the same thing each have ±0.46 C (1σ) of uncertainty then the differences between those two measurements will distribute normally with σ = 0.65 C. And I know that the real differences between GISTEMP, BEST, HadCRUTv5, and ERA were actually σ = 0.053 C implying the uncertainty of each is σ = 0.037 C. This is despite the fact that each group uses a different technique consuming different subsets of available data.
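For reference, the propagation step behind those numbers is the standard rule for the difference of two independent measurements:

$$\sigma_{\mathrm{diff}} = \sqrt{\sigma_1^2+\sigma_2^2} = \sqrt{2}\,(0.46) \approx 0.65\ \mathrm{C},\qquad \hat{\sigma}_{\mathrm{each}} = \frac{0.053}{\sqrt{2}} \approx 0.037\ \mathrm{C}.$$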
This…
“If the instrument is only capable of reporting to 1 decimal place then the error is rectangular at ±0.5.”
…should have been…
“If the instrument is only capable of reporting with 0 decimal places then the error is rectangular at ±0.5.”
AGAIN—you conflate uncertainty with error.
Good catch. I should have used the word uncertainty in my post at 7:55am. Note that error is the difference between a specific measurement and the true value, while uncertainty is the range into which the errors from repetitive measurements fall. Because I was referring to the range in which the error could be, I should have used the word uncertainty. Let’s try this one more time.
“If the instrument is only capable of reporting with 0 decimal places then the uncertainty is rectangular at ±0.5.”
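A short simulation illustrates what this rectangular (rounding) uncertainty of ±0.5 looks like; the temperature range and sample size below are arbitrary and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true temperatures, which the instrument reports to 0 decimal places.
true_temps = rng.uniform(10.0, 20.0, size=100_000)
reported = np.round(true_temps)
errors = reported - true_temps

print(errors.min(), errors.max())   # spans roughly -0.5 to +0.5 (rectangular support)
print(errors.std())                 # roughly 0.29, i.e. 0.5 / sqrt(3) for a uniform error
```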
“uncertainty is the range into which the errors from repetitive measurements fall”
You just won’t learn the difference, will you? Every time you make a repeated measurement of a single thing, uncertainty occurs. IT HAPPENS ON EACH MEASUREMENT. You can’t eliminate it because it is not random.
Uncertainty means you don’t know and can never know what the true measurement should be each and every time you make a measurement. It is always there and you simply can’t account for it with a statistical analysis. Think of it this way. Statistically, any value within the uncertainty interval is just as likely as any other value, but you can never ever determine what the probability of any individual point in that interval is since they all have a similar probability.
Wrong in every particular again, bdgwx.
Lenssen et al., 2019 describe systematic error as “due to nonclimatic sources. Thermometer exposure change bias … Urban biases … due [to] the local warming effect [and] incomplete spatial and temporal coverage.”
Not word one about systematic measurement error due to irradiance and wind-speed effects. These have by far the largest impact on station calibration uncertainty.
Next, resolution is instrumental sensitivity. Any displacement below the resolution limit is physically meaningless. Resolution is not error. A resolution limit is not random error, and it has nothing to do with round-off.
You wrote, “I know that if two measurements of the same thing each have ±0.46 C (1σ) of uncertainty then the differences between those two measurements will distribute normally with σ = 0.65 C.”
You don’t know that because it’s wrong.
Each measurement may take an identical value or some disparate value. However each value will have a calibration uncertainty of ±0.46 C. Their sum or difference will have an uncertainty of ±0.65 C.
Systematic measurement errors due to uncontrolled environmental variables have no known distribution.
Your “σ = 0.053 C … σ = 0.037 C” are again to 3 significant figures past the decimal; again claiming milliKelvin accuracy.
It’s quite clear you don’t know the subject, bdgwx.
“Not word one about systematic measurement error due to irradiance and wind-speed effects. These have by far the largest impact on station calibration uncertainty.”
Great. Provide a link to a publication discussing this: the magnitude of its effect, how much systematic error it causes in a global mean temperature measurement, and how much it biases the temperature trend. I would be more than happy to review it. And yes, I’ve already read the Hubbard 2003 and Hubbard 2006 publications. Note that the biases Hubbard discusses were addressed in the datasets in a targeted manner early on, and now more generally via pairwise homogenization by most datasets today.
“Next, resolution is instrumental sensitivity. Any displacement below the resolution limit is physically meaningless. Resolution is not error. A resolution limit is not random error, and it has nothing to do with round-off.”
Of course that is error. If a thermometer can only respond to a change in temperature of 0.5 C then it might read 15 C even though the true value was 15.4 C. That is an error of -0.4 C. And if you take repetitive measurements with it you’ll find that these errors distribute randomly in rectangular fashion.
“Systematic measurement errors due to uncontrolled environmental variables have no known distribution.”
Then what does your ±0.46 value embody exactly and why did you use 1σ to describe it?
“Your “σ = 0.053 C … σ = 0.037 C” are again to 3 significant figures past the decimal; again claiming milliKelvin accuracy.”
No, I’m not. I’m claiming that the differences of measured monthly global mean temperature anomalies fall into a normal distribution with σ = 0.053 C (Excel reports 0.052779447215166), which implies that the individual uncertainty is σ = 0.037 C (Excel reports 0.037320705033122). I took the liberty of reducing the IEEE 754 values to 3 sf as a compromise between readability and not being accused of improperly rounding or truncating to make the values appear higher or lower than they actually are. None of this means that I’m claiming millikelvin accuracy. In fact, the data is saying that the uncertainty cannot be as low as 0.001 because it is actually 0.037, which is consistent with the various uncertainty estimates provided by the community. I’ll repeat: 0.037 K does NOT mean millikelvin uncertainty.
And ultimately why do GISTEMP, BEST, HadCRUTv5, and ERA5 differ by only σ = 0.053 C if the claimed uncertainty is σ = 0.46 C?
“Great. Provide a link to a publication discussing this…”
Here, here, and here.
See also Negligence, Non-Science and Consensus Climatology.
“I would be more than happy to review it.” A useless enterprise, given the lack of understanding you display.
+++++++
“Note that the biases Hubbard discusses were addressed… etc.”
You’re just talking through your hat. Again.
Systematic measurement error due to uncontrolled environmental variables cannot be addressed except by including it as a calibration uncertainty that qualifies every reported temperature.
+++++++
If a thermometer has a resolution of ±0.5 C, a temperature of 15.4 C cannot be read off the instrument. Temperature differences smaller than the limit of resolution are invisible to the instrument. You show no understanding of the concept.
+++++++
“σ = 0.053 C” you’re claiming 3 significant figures again, equivalent to an accuracy of 0.001 C. And you neither see it nor, apparently, understand it.
+++++++
“if the claimed uncertainty is σ = 0.46 C?”
Uncertainty is not error. It does not describe the divergence of individual measurements, or of separate trends. It describes the reliability of the measurement.
You’ve not been correct on any point, bdgwx. Not one.
“Here, here, and here.”
The first two are your own publications, which have not been well received by your peers and which I’ve shown cannot be correct. I cannot see that the third is even relevant.
“If a thermometer has a resolution of ±0.5 C, a temperature of 15.4 C cannot be read off the instrument.”
That’s my point. If the temperature displays 15 C then the error is – 0.4 C.
“‘σ = 0.053 C’ you’re claiming 3 significant figures again, equivalent to an accuracy of 0.001 C.”
I don’t know how to make this any more clear. σ = 0.053 C is the result from Excel’s STDEV.P function. It is the standard deviation of the differences between monthly global mean temperature measurements. It is NOT a claim of an accuracy or uncertainty of 0.001 C. In fact, it is the opposite. It is saying the uncertainty cannot be as low as 0.001 C. And it doesn’t matter if I report the value as 0.05, 0.053, 0.0528, or 0.052779447215166. All of those numbers are significantly greater than 0.001.
You’re an idiot who cannot be taught.
If you’re trying to “teach” me that 0.052779447215166 is equivalent to or even on par with 0.001 then I’m going to have to pass.
“I’ve shown cannot be correct.” Big talk. Where?
And I claim in utter confidence that you’re blowing smoke.
“If the temperature displays 15 C then the error is – 0.4 C.”
How would you know in practice? None of the historical LiG thermometers or MMTS sensors included a high-accuracy reference thermometer.
All one has in any case is a previously determined calibration uncertainty, which adds as the RMS in any average or anomaly.
“I don’t know how to make this any more clear. σ = 0.053 C is the result from Excel’s STDEV.P function.”
An incredible display of utter ignorance. Any report of an uncertainty to ±0.001 C is a claim of 0.001 C accuracy.
First year college, any physical science. Any high school AP science class.
You’ve been wrong in every instance, bdgwx. You and bigoilbob are such a pair.
“Where?”
I downloaded the monthly data for HadCRUTv5, GISTEMPv4, BEST, and ERA. I compared the monthly data from each dataset with the other datasets from 1979/01 to 2021/08, all 3072 combinations. The differences between the values fell into a normal distribution with σ = 0.052779447215166 C. This implies the uncertainty on each is σ = 0.037320705033122 C. That is significantly lower than the claimed uncertainty of σ = 0.46 C.
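The comparison procedure described above can be sketched in a few lines of Python. The series below are synthetic stand-ins for the four datasets (the real check would load each group’s published monthly anomaly file), so the printed numbers only illustrate the method, not the actual result:

```python
import itertools
import numpy as np
import pandas as pd

# Synthetic stand-ins for four monthly global-mean anomaly series, 1979-01 to 2021-08.
months = pd.date_range("1979-01", "2021-08", freq="MS")   # 512 months
rng = np.random.default_rng(0)
signal = pd.Series(np.linspace(0.0, 0.9, len(months)), index=months)  # shared "climate" signal
datasets = {
    name: signal + rng.normal(0.0, 0.037, len(months))    # each with its own ~0.037 C noise
    for name in ["HadCRUTv5", "GISTEMPv4", "BEST", "ERA"]
}

# Month-by-month differences for all 6 dataset pairs (6 x 512 = 3072 values), pooled.
diffs = pd.concat([datasets[a] - datasets[b]
                   for a, b in itertools.combinations(datasets, 2)])

sigma_diff = diffs.std(ddof=0)          # population std, analogous to Excel's STDEV.P
sigma_each = sigma_diff / 2 ** 0.5      # implied per-dataset scatter
print(round(sigma_diff, 3), round(sigma_each, 3))   # roughly 0.052 and 0.037 for this synthetic noise
```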
“How would you know in practice? None of the historical LiG thermometers or MMTS sensors included a high-accuracy reference thermometer.”
I’m not certain what the error is for each measurement. That’s why we call it uncertainty. Remember, my response is to your statement “Resolution is not error.” My point is not that we know what the error is for every measurement, but that we know that resolution limitations cause error.
“An incredible display of utter ignorance. Any report of an uncertainty to ±0.001 C is a claim of 0.001 C accuracy.”
Nobody is reporting an uncertainty of ±0.001 C and certainly not me. In fact, my analysis suggests the uncertainty is much higher, at ±0.037 C. Then you, Carlo Monte, and Gorman criticized my choice of 3 sf for the display of the uncertainty, totally missing the essence of the meaning of the value. So now I’m reporting the full IEEE 754 form as σ = 0.037320705033122 C, allowing you guys to do whatever rounding you want and hopefully avoiding the whole significant figures discussion. Just know that no reasonable significant-digit or rounding rule will result in ±0.037320705033122 C being equivalent to or even on par with ±0.001 C.
“First year college, any physical science. Any high school AP science class.”
It’s more like middle school math. ±0.037320705033122 C is literally an order of magnitude greater than ±0.001 C.
“That is significantly lower than the claimed uncertainty of σ = 0.46 C.”
Uncertainty is not error. Nor is it the difference between alternatively processed sets of identical raw data. You’re making the same mistake over and over again.
“The differences between the values fell into a normal distribution with σ = 0.052779447215166 C.” A display of utter ignorance concerning measurements.
“My point is not that we know what the error is for every measurement, but that we know that resolution limitations cause error.”
Your point is doubly wrong. First, resolution limits mean physical differences below the resolution limit are invisible. That’s not error.
Second, systematic measurement error from uncontrolled environmental variables is not random and is unknown for each individual measurement. However, a calibration experiment can provide a standard uncertainty for each and every measurement. That uncertainty is estimated as ±0.46 C.
Every single LiG and MMTS surface temperature measurement has that uncertainty appended to it.
“Nobody is reporting an uncertainty of ±0.001 C and certainly not me.” You were. And in your post here you’re reporting an accuracy of one part in 10^15.
“It’s more like middle school math. ±0.037320705033122 C is literally an order of magnitude greater than ±0.001 C.”
Really, it’s too much, bdgwx. Your 16-digit number conveys knowledge out to 15 places past the decimal. That’s the issue. Not the magnitude of the number.
You’re arguing from ignorance and dismissing all the objections from people whose careers involve working with measurement data.
Pat, I want you to ponder this question as well. Given an annual global mean temperature uncertainty of σ = 0.46 C what would the uncertainty of each grid cell in the HadCRUTv5 grid have to be?
The representative uncertainty in the temperature average within an arbitrary grid-cell depends on the number of meteorological stations within it.
Any given station average has an associated representative ±0.46 C uncertainty. The grid-cell average uncertainty is the rms of the included station uncertainties.
As the number of station averages going into a grid-cell average or global average becomes large, the representative rms uncertainty tends to ±0.46 C.
Note this quote from the paper (pdf): “The ideal resolutions of Figure 1 and Table 2 thus provide realistic lower-limits for the air temperature uncertainty in each annual anomaly of each of the surface climate stations used in a global air temperature average.”
The uncertainty per grid-cell should be obvious.
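A minimal sketch of the RMS rule invoked in the comment above, under its stated assumption that every station carries the same representative ±0.46 C uncertainty:

```python
import math

def rms(uncertainties):
    """Root-mean-square of a list of per-station uncertainties."""
    return math.sqrt(sum(u * u for u in uncertainties) / len(uncertainties))

# Assumption taken from the comment: every station in a grid cell contributes
# the same ±0.46 C representative uncertainty, so the RMS is 0.46 C for any N.
for n_stations in (1, 10, 100, 1000):
    print(n_stations, round(rms([0.46] * n_stations), 2))   # 0.46 each time
```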
Some grid cells have zero meteorological stations, however.
HadCRUT uses a 5×5 lat/lon grid, which means there are 2592 cells. The global mean temperature value is sum(Tn/N), where Tn is the value of one grid cell and N is the number of grid cells. In other words, it is the trivial average of the grid. So if the uncertainty of sum(Tn/N) is ±0.46 C, then what is the uncertainty of Tn?
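To make that closing question concrete, the sketch below applies the textbook propagation rule for an average of N uncorrelated, equally uncertain values and runs it backwards from the ±0.46 C figure; it illustrates the question being asked, not a measured result:

```python
import math

n_cells = 2592        # HadCRUT's 5x5 degree grid: 36 x 72 cells
sigma_mean = 0.46     # hypothesized 1-sigma uncertainty of the global mean

# For an average of N uncorrelated, equally uncertain values,
# sigma_mean = sigma_cell / sqrt(N); invert to get the implied per-cell value.
sigma_cell = sigma_mean * math.sqrt(n_cells)
print(round(sigma_cell, 1))   # about 23.4 C per grid cell under this rule
```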