Essay by Eric Worrall
Taking “model output is data” to the next level…
AI reveals hidden climate extremes in Europe
By Andrei Ionescu
Earth.com staff writer…
Traditionally, climate scientists have relied on statistical methods to interpret these datasets, but a recent breakthrough demonstrates the power of artificial intelligence (AI) to revolutionize this process.
Previously unrecorded climate extremes
A team led by Étienne Plésiat of the German Climate Computing Center in Hamburg, alongside colleagues from the UK and Spain, applied AI to reconstruct European climate extremes.
The research not only confirmed known climate trends but also revealed previously unrecorded extreme events.
…
Using historical simulations from the CMIP6 archive (Coupled Model Intercomparison Project), the team trained CRAI to reconstruct past climate data.
The experts validated their results using standard metrics such as root mean square error and Spearman’s rank-order correlation coefficient, which measure accuracy and association between variables.
Read more: https://www.earth.com/news/ai-reveals-hidden-climate-extremes-in-europe/
…
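For readers who want to see what those two validation metrics actually measure, here is a minimal sketch (with made-up numbers, not the study's data) of how root mean square error and Spearman's rank-order correlation are typically computed:

```python
# Minimal sketch of the two validation metrics named in the excerpt:
# root mean square error and Spearman's rank-order correlation.
# The arrays below are hypothetical placeholders, not the study's data.
import numpy as np
from scipy.stats import spearmanr

reference = np.array([12.1, 14.3, 9.8, 17.5, 11.2])       # "true" reference values
reconstructed = np.array([11.8, 14.9, 10.4, 16.9, 11.0])   # model/AI reconstruction

# RMSE: typical magnitude of the reconstruction error, in the data's own units
rmse = np.sqrt(np.mean((reconstructed - reference) ** 2))

# Spearman's rho: rank-based measure of monotonic association (-1 to 1)
rho, p_value = spearmanr(reference, reconstructed)

print(f"RMSE = {rmse:.2f}, Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```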
The only thing which is real about using generative AI to try to fill in the gaps is the hallucinations.
What are AI hallucinations?
AI hallucination is a phenomenon wherein a large language model (LLM)—often a generative AI chatbot or computer vision tool—perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.
Read more: https://www.ibm.com/topics/ai-hallucinations
I am an AI enthusiast; I believe AI is contributing, and will continue to contribute, greatly to the advancement of mankind. But you have to rigorously test the output. Comparing the AI output to a flawed model to see if it fits in the band of plausibility is not what I call testing.
Climate scientists have been repeatedly criticised for treating their model output as data. Using a tool known for its tendency to produce false or misleading data to generate climate “records” which cannot be properly checked is, in my opinion, an exercise in scientific fantasy – a complete waste of time and money.
“The models are robust.” Because they give us the fake data we want.
And BTW I wish they would use A.I. instead of AI, you know, like the Man from U.N.C.L.E.
ha, ha… love that show- he had his phone in the heel of his shoe
You’re thinking of Get Smart.
Right! now it’s coming to me- both actors lived to a ripe old age. Good for them. I enjoyed that show too- as a child. My late wife said when she watched it she had a crush on the Russian guy, who of course wasn’t a Russian.
You mean Illya Kuryakin, played by David McCallum. (Scottish)
David McCallum had several movie roles, eg Colditz, Great Escape
Also played “Ducky” in NCIS until his death in late 2023 at the age of 90.
Missed it by “That Much”.
And loving it!
They should use “AI”, in quotes, because there’s no intelligence involved.
Artificial Ignorance?
I keep wondering who this guy called Al is? Does Paul Simon know?
Really? Why not just use the written record? Why make up fake climate “events”? Oh, yeah, because they are liars.
Fraud is their trade.
In UK Fraud Act 2006, section 2, Fraud by false representation:
(1) A person is in breach of this section if he—
(a) dishonestly makes a false representation, and
(b) intends, by making the representation—
(i) to make a gain for himself or another, or
(ii) to cause loss to another or to expose another to a risk of loss.
After looking at “climate attribution science”..
…it sounds like you are listing the “attributes™” of “climate scientists™” 😉
And most politicians.
Down with Mother Nature!
They are not just out to lunch, they are bathing in a sea of ignorance and/or deception. Artificial “intelligence” is not intelligent – it is simply high end processing of high volume data. Models are not reality – they are mathematical representations of hypotheses. Combining AI and unvalidated models is to science what abstract art is to reality. The fact a large part of the “climate crisis” horde is OK with this deception is proof they have no interest in science and truth and no respect for their audience. It is self-serving propaganda to feather their nests and add power to their positions. This type of “research” is right up there with reality TV shows on paranormal events and exorcisms.
“… it is simply high end processing of high volume data.” But in the development of the A.I., it was fed biased “data” from which to learn. Not particularly useful. And even Dr Gavin Schmidt admitted that CMIP6 ran hot.
I love those shows. Especially about ancient astronauts. Or Jesus was a spaceman.
Models are simply computer opinions and worth as much.
“Artificial “intelligence” is not intelligent – it is simply high end processing of high volume data.”
Exactly.
According to ChatGPT:
AI is advanced software implementing weighted decision trees with some adaptability and all hidden behind an exceptionally good language interface.
AI is not intelligent. It lacks consciousness.
I asked. Those were the answers.
ChatGPT falsely accused law professor Jonathan Turley of sexual assault.
https://nypost.com/2023/04/07/chatgpt-falsely-accuses-law-professor-of-sexual-assault/
How much confidence can we have in systems that would do something like this?
Artificial Intelligence needs to be fact-checked. It is not the last word on anything.
How much confidence can we have
After my recent experience, none.
It is not the last word on anything.
Unfortunately, most will take it as such.
The worst possible application of so-called AI. Weather events either occurred and were recorded, or they did not occur. There is no possibility of AI “reveal[ing] previously unrecorded extreme events.”
The idea that there exist magical statistical tests that can “validate….results” is one of the major problems with so many of our present scientific fields. Medical research is rife with nonsensical results “validated by statistical tests”.
“Weather events either occurred and were recorded, or they did not occur. “
Surely there is the possibility of events occurring and not being recorded. Or are you claiming that if a tree falls in a wood and nobody is there to hear it then it doesn’t make a sound?
And if it wasn’t recorded- how would you know it actually occurred? You’d trust this line of thinking? Comparing a severe weather event with the sound of a tree falling is absurd.
You are a little out of touch. The tree falling with no one around to hear it is a metaphysical expression/question that has been in use a long time. It, in fact, applies to any event. If no one heard it, or no one observed some event that left no evidence humans are aware of, did it really occur? How could you prove, after the event, to use the falling tree example, that there was a sound when nothing was observed by anyone. You may think there must have been a sound because there is a sound when someone is there to hear it, but believing without evidence is no better than Mann making predictions of doom or any of the other CAGW claptrap.
In fact “sound” only makes sense if something capable of “hearing” is involved. Otherwise the tree falling only produces a pressure wave. You are correct about this being metaphysical.
Sound is vibrations in air. It happens whether there is anyone to hear them.
Sound implies hearing. If nothing is there to hear it then there is no sound. There *is* a pressure wave but it is not sound.
Sound is a vibration in a medium, no subject capable of hearing is involved in the definition.
“Sound is a vibration that propagates as an acoustic wave through a transmission medium such as a gas, liquid or solid.”
If you are going to use Wikipedia definitions then quote everything Wikipedia says.
“Sound can also be viewed as an excitation of the hearing mechanism that results in the perception of sound. In this case, sound is a sensation.”
“A distinct use of the term sound from its use in physics is that in physiology and psychology, where the term refers to the subject of perception by the brain. The field of psychoacoustics is dedicated to such studies. Webster’s dictionary defined sound as: “1. The sensation of hearing, that which is heard; specif.: a. Psychophysics. Sensation due to stimulation of the auditory nerves and auditory centers of the brain, usually by vibrations transmitted in a material medium, commonly air, affecting the organ of hearing. b. Physics. Vibrational energy which occasions such a sensation. Sound is propagated by progressive longitudinal vibratory disturbances (sound waves).”[17] This means that the correct response to the question: “if a tree falls in a forest and no one is around to hear it, does it make a sound?” is “yes”, and “no”, dependent on whether being answered using the physical, or the psychophysical definition, respectively.”
From Merriam-Webster:
“sound (noun): 1 a: a particular auditory impression : tone; b: the sensation perceived by the sense of hearing; c: mechanical radiant energy that is transmitted by longitudinal pressure waves in a material medium (such as air) and is the objective cause of hearing”
“Sound can also be viewed as an excitation of the hearing mechanism….”
The key word here is “also”. So, according to Wikipedia – and in my opinion, common sense – a tree does make a sound if it falls unobserved.
Chris
I quoted the primary definition.
Sensation of a vibration is a possible interpretation of the word, but not required. Just because some people are deaf, it does not mean the world around them is not full of sounds. Similarly, absence of hearing subjects at a given location does not cancel sounds.
I am familiar with this expression and its meaning, but would only like to add that in the case of a falling tree, we can “predict” by extrapolation into the past that there was a sound, just as we can forward-predict that any future falling tree will produce a sound. In fact, observing that every falling tree so far has produced a sound does not guarantee that any future falling tree will produce a sound. It is just a model of cause and effect that has worked so far every time. If we are OK using it forward, it is just as likely that past falling trees were producing sounds.
The ‘evidence’ of the tree falling is the fallen tree. It is safe to say such an event creates sound.
Based on physics in our current frame of reference, yes, but it is more accurate to call it an acoustic wave.
If that same tree fell in a different frame of reference, say one with no air, there would be no acoustic wave.
The nuanced difference between sound and acoustic wave is sound is interpreted based on the acoustic wave. With no sensory apparatus available, there is no sound.
If a tree falls in a wood and your wife isn’t there to blame you, is it still your fault?
If I give you a daily mid-range temperature can you tell me what T-max was? If I tell you that the temp at 0000UTC was 1C can you tell me what the temp at 0001UTC was? If the temp at 0001UTC was not recorded then what crystal ball would you use to discern it? Does AI have a better crystal ball than us mere humans?
Data from non-existent stations are the best to use in models or from AI 😀
No need to hide the paper trail!
You could take your little pills and “imagine” it happening !
About your level of science.
Now I know you’re just a troll. Yes, events can happen that aren’t seen or experienced and recorded by humans. The pre-satellite cyclone record is a case in point. As is everything before temperature records. The problem is, you know nothing whatsoever about those events. Any details would be completely imaginary. Why on Earth would you want those in the record??
Jeff, Kip is claiming that events that were not recorded did not happen. How else do you interpret his sentence:
“Weather events either occurred and were recorded, or they did not occur. “
I completely agree with you that events can happen that aren’t seen. It is Kip who thinks that isn’t possible.
But that is no excuse to just MAKE STUFF UP !!
You are misinterpreting what Kip is saying. If you don’t have documentation of an event then saying it happened is just making data up out of thin air. What’s to prevent making up whatever data you need to support your claims? I could easily say that 3000 tornadoes happened in Kansas this past summer even though they weren’t all recorded. Would you believe that claim?
Exactly.
Making a snippet is a form of cherry picking.
Weather events either occurred and were recorded, or they did not occur. There is no possibility of AI “reveal[ing] previously unrecorded extreme events.”
Context is important. The implied context is the data records.
Because it ” adds value” to the record, according to Phil Jones, who did a LOT of adding value to the temperature records.
Nice strawman to knock down, but not related to the subject being discussed.
Is not the opposite also true, if you don’t hear it, it did not fall?
if you don’t hear it, it did not fall?
If there is nobody to observe it, it has neither fallen nor not fallen. It occupies an indeterminate state until it is observed. Like the cat.
Sounds a lot like quantum waves. Did it collapse and kill the cat or not?
Rereading your post. You were talking QM waves.
If a man, alone in a forest speaks with no woman around, is he still wrong?
/humor
Always.
If he’d asked for directions, he wouldn’t be lost in the forest. 😎
(I know. You didn’t say he was “lost”. But an AI told me he was.)
Also, if he’s alone in the forest and there’s no one to hear him, did he really speak?
If a tree falls in the woods, would the Enviros allow anyone to use it? 😎
How else do you think scientific experiments and medical trials should be validated, if they’re not going to use statistical tests?
I’m no mathematician or scientist- but I suspect the best statistics can do is offer probabilities regarding experiments and medical trials. Validation implies certainty not probabilities. A lot of meds turn out to be not so great- though they were validated. All it takes is a little bit of skepticism to not believe with certainty what somebody says regarding any validation of anything.
Yes, statistics is in effect applied probability theory.
It can only ever give you a probability, what it tells you is what the probability is, so with 99% confidence there’s a 1% chance of the result being a fluke, with a five-sigma probability there’s a 0.00003% chance of a fluke result.
Ultimately, it’s for the user to decide if the confidence level is sufficient.
Also, another key factor is repeatability, if multiple experiments keep giving the same result, that’s a sign that the original experiment was right.
Edit:
Finally, what do you think they should do instead?
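For anyone who wants to check the sigma-to-probability figures quoted above, here is a small sketch assuming a normal distribution (the usual convention behind statements like “five sigma”):

```python
# Sketch: converting sigma thresholds to one-tailed tail probabilities under a
# normal distribution (the usual convention behind "five sigma" claims).
from scipy.stats import norm

for sigma in (1, 2, 3, 5):
    one_tailed = norm.sf(sigma)          # P(Z > sigma)
    print(f"{sigma} sigma: tail probability = {one_tailed:.2e} "
          f"({one_tailed * 100:.5f}%)")
# Five sigma gives roughly 2.9e-7, i.e. about 0.00003%, matching the figure above.
```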
Not sure- I just think the word “validated” is overused. It implies certainty. I doubt any test on meds is at the 99% confidence level. Research in a lab- physics and chemistry- is more likely to have useful statistics, especially when repeated. But meds and climate “science” have too many unknowns. Like I said, I’m not a scientist- just offering my opinion and the fact that skepticism of many topics seems wise. I’m skeptical of all religion, almost all politics, and much of science.
We do software verification and validation.
The functions implemented either perform as specified or do not.
The input ranges are tightly specified. The expected outputs equally specified.
Validation is highly overused. Validation is not merely confidence, it is an exacting certainty.
Validation has to be done using comparisons with reality.
Statistical descriptors can only describe the data in your data set. They can NOT tell you if that data is accurate or if it matches reality. It’s the entire concept behind the adage that “correlation is not causation”.
See post above.
If the data is inaccurate, that’s not a problem with statistics, that’s a problem with the data collection itself.
Finally, how would you validate a medicine without statistical methods?
Maybe another word should be used other than “validate”- sounds too definitive. Some meds work fine for some people but can injure others. So you need to be careful how you define “validate”.
Yep.
A certain safe and effective jab comes to mind.
Fauci was forced to admit for a court case that none of the 70-odd childhood vaccines had any sort of safety validations.
See my response above.
Statistics can’t tell you how accurate a measured value is. It can only be used to propagate measurement uncertainty associated with the measurements in the data set.
Again, statistical methods can only tell you about the data. Statistics can’t generate data. Statistics can’t make data more accurate.
The usefulness of medicine, in the final analysis, can only be validated by accurate measurement of results in the real world. That means that the *measurand* itself needs to be fully specified as well as the measurement protocol. Give a group of patients an aspirin and how do you measure its effects in order to validate expectations for it? What do you measure and how do you measure it?
It’s why an experimental drug given to a strain of mice at Stanford and a strain of mice at Boston General may give vastly different results. Even minor differences in genetics can react in different ways. How do you use those different results to “validate” anything?
Why do some people respond to placebos and some not? Why do some people respond to a new drug and others don’t?
I’m not a biologist, virologist, immunologist, or anything like that. But I know that an electronic circuit made from parts manufactured in one place can give different results than a circuit made from parts manufactured elsewhere. I can “validate” the results from the first circuit for noise tolerance, drift, etc. to see if they match my theoretical design and be totally surprised by the results from the second circuit.
It’s what measurement uncertainty is all about. And in medicine the “uncertainty” typically given is actually sampling error and not measurement uncertainty. How many experimental medicines include a measurement uncertainty budget in the analysis done for the medicine? I’ve never seen one but then I don’t see a lot of those. Do you?
I do not validate my circuits. I verify. The results match simulations or other analysis or not.
I do validate the functions of the circuits. They either perform as specified or do not.
Notable that nobody seems to have an alternative to “statistics”.
The problem with statistics is not statistics, it’s users who don’t know what they are or how to use and interpret them.
“Statistics uber alles!”
There is an alternative to statistics. Extensive testing with instrumentation of known tolerances and errors.
Not going to happen in the climate arena!
Bring on the gladiators.
One never validates a medicine. One publishes the efficacy and the probabilities of various associated risks.
Efficacy and risks are determined by applying statistics to live test results.
You need to go study what statistics are used for in medicine. Statistical tests do NOT validate anything in a medical trial. Validation is done by clinical trials to verify the ability of a medicine, procedure, or device to do what it is designed to do. Statistics can be used to analyze the data from clinical trials to either reject a null hypothesis or to determine the efficacy.
In other words, if I design a procedure to accomplish a certain outcome, statistics can not tell me if the outcome occurred or not, that is, validate the outcome. Statistics can provide me with a probability distribution that can be used to determine if the procedure is worth while. But, statistics can not tell me if the procedure worked or not, i.e., validate the outcome.
What you are basically describing is p-hacking where data is manipulated to show outcomes are better than expected. Why do you think the medical community and much of science has a replication crisis on their hands? Statistics are being used for purposes that are not proper.
Here are two links that might explain better.
What is P Hacking: Methods & Best Practices – Statistics By Jim
What is the Relationship Between the Reproducibility of Experimental Results and P Values? – Statistics By Jim
Spot on.
Statistical tests are mathematical.
Scientific experiments and medical trials have actual measurements. Applying statistics to actual test results is not a statistical test.
The devil is in the nuances.
I agree Kip. However, if they want to convince anyone that they can do this reliably, then they can withhold known extreme events from the training data and see if they’re predicted against the testing data. However, the problem with this particular study is their training data isn’t real. They trained their model on data from a model. It’s generally much easier to train a neural network, or decision tree, to match data generated by a model than it is real data.
“Using historical simulations from the CMIP6 archive (Coupled Model Intercomparison Project), the team trained CRAI to reconstruct past climate data.”
What is wrong with training their neural net on data from a model? What is important is that they tested it on real data. In their paper they state that:
“To evaluate these models, we use data from three types of datasets that were not included in the model training: a simulation dataset (from CMIP6 models), a reanalysis dataset (ERA5), and an observational dataset (HadEX-CAM). ”
And if you look at table 1 in the supplementary information you can see why they needed to train their models on CMIP6. They used 61560 samples for training, but the observational dataset only had 1416 samples, which is smaller by more than a factor of 10. Since it takes a huge amount of data to train their neural net, they needed to use model data. They then tested it against real data and it performed much better than other ways to interpolate the temperature, such as “inverse distance weighting” or the Kriging approach.
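For reference, “inverse distance weighting” is about the simplest of those baseline interpolation methods; a minimal sketch with invented station data looks like this:

```python
# Minimal sketch of inverse distance weighting (IDW), one of the baseline
# interpolation methods mentioned above. Station data here are invented
# purely for illustration.
import numpy as np

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # known locations (x, y)
temps = np.array([10.0, 12.0, 11.0])                        # values at those locations

def idw(target, points, values, power=2.0):
    """Estimate the value at `target` as a distance-weighted average of known points."""
    d = np.linalg.norm(points - target, axis=1)
    if np.any(d == 0):                    # target coincides with a station
        return values[np.argmin(d)]
    w = 1.0 / d ** power                  # nearer stations get larger weights
    return np.sum(w * values) / np.sum(w)

print(idw(np.array([0.5, 0.5]), stations, temps))  # infilled estimate at (0.5, 0.5)
```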
Still just Fake Data, de rigueur for climate pseudoscience.
Using JUNK models and FAKE data to PRETEND that you can INVENT things that there is NO OTHER EVIDENCE for.
That is the “climate science” you love and believe.
But again you need to read the article. They are saying that their infilling method reveals temperature extremes that correlate with other events. For example:
“In fact, the effects of the heatwave on the population are even apparent in demographic data, especially in the death rate. For instance, a demographic study estimates this heatwave to have caused more than 40,000 deaths in France, mostly infants and seniors. While not entirely correlated, it is possible to use these data as a proxy to uncover indirect evidence of a large number of warm days and nights. The French map of increased senior deaths for 1911 obtained from and shown in Supplementary Figs. S14, S15 exhibits strong similarities with the high TX90p and TN90p values predicted by the CRAI models in this region.”
So there is evidence for the extreme events that they suggested happened.
I have read it.
It is GIBBERISH, and the creation of TOTALLY FAKE DATA. !
It is only evidence of the sheer fabrication of the whole junk non-science.
“While not entirely correlated, it is possible to use these data as a proxy to uncover indirect evidence of a large number of warm days and nights.”
Nice word salad.
Good enough to make K.H. blush?
She must be taking notes r/n.
I saw a funny video of Kamala yesterday on Facebook.
Here is a Youtube video of her speech upon returning from Hawaii after losing the election.
Many people thought she looked drunk while giving the speech.
https://www.youtube.com/watch?v=2ADzV0JVYF4
The video clip I saw had her holding a liquor bottle in her right hand while she was talking.
It looked just like a drunk talking. And she made about as much sense as a drunk would make.
“While not entirely correlated” means it was not correlated.
So they are comparing model hypotheticals to other model hypotheticals with no calibration involved.
In the model data. There is no escaping it, the “hidden” climate change impacts are derived from model data, not actual measured climate change and that makes them non-scientific.
I’ve been looking at climate change for a long time and can remember when the “definitive” claims that climate change was primarily caused by anthropogenic CO2 were based on model results because “what else could it be?” and at that time, few questioned their validity.
Since then we’ve “moved on” and, as far as the public is concerned, it’s a given that climate change is primarily caused by CO2, despite the fact the models are increasingly under fire.
Izaak, fundamentally the climate datasets are too small for machine learning algorithms. Validation datasets are typically around 1/3 the size of the training data, if not larger. Also, one has to wonder how much of the validation dataset was used in creating the CMIP6 models to begin with. I don’t know, but I do know that when you overfit the data you get really good performance.
Means the model data has a much larger influence than observation data. So anything the AI responds to is actually questioning model data.
Stop. Pause. Think.
The observational dataset had 1416 samples, but the models expanded that to 61560 samples.
How were those model hypotheticals verified as accurate?
If the data was wrong, a hypothetical case, then the AI was mis-trained and anything it spits out is irrelevant.
Models do not output data.
Models, at best, output hypotheticals.
“Weather events either occurred and were recorded, or they did not occur.”
I agree with Izaak that this either/or is not reality. But he ignores the real claim, that “AI” can discover these unrecorded events and that these would somehow be useful additions to the record.
Go back and read the second sentence.
I did. At best, Kip’s statement is poorly worded or incomplete.
Kip,
When photography went from film to digital, I had some minor correspondence with medical X-ray imaging people about the potential for digital image manipulation to generate false features and so lead to surgery that was not needed. We did not reach any conclusions about procedures to minimise this risk by (for example), approving some image enhancement methods (for clarity) and disapproving others (for risk of hallucinations).
This article seems to be about a similar argument in some ways. The danger, it seems to me, is (a) ignorance, allowing invalid procedures to be adopted instead of flagged as dangerous and (b) the further danger from use of beliefs instead of available data. Both of these reflect a lowering of educational standards in research generally, so that important decisions are being made by people unqualified to make them and accepted by an audience unqualified to judge. There is much work to be done to regain better scientific purity by, for example, minimising beliefs and other social inputs such as political preferences, bias in funding of research and lack of appreciation of the role of uncertainty in science. I have seen science standards fall in the last 20 years and I have seen some really bad outcomes.
Please don’t assume that science is self-healing and that these bad times will pass. They won’t, because there is a lot of money to be made from wilful interference. We need ongoing work from those high quality remaining scientists who have seen better ways.
Geoff S
Roger Pielke Jnr has posted a Substack article today about rampant bias by Professors in universities.
Covers similar observations to your “beliefs and other social inputs such as political preferences, bias in funding of research and lack of appreciation of the role of uncertainty in science”.
Spot on.
AI programs reflect their programmers and training data, as revealed by Google’s response to “show me 1944 German troops” that showed Asians and Blacks in Waffen SS uniforms.
Oh, heck- and here I thought those images were real. /s
Yeah, from a gay porno movie?
This would be expected in a typical Netflix serial.
Not AI, but I’ve run a number of computer simulations about WW2.
In one, Japan conquered the USA and kicked out the Germans.
In another, Germany conquered the USA and kicked out the Japanese.
In another the Allies conquered Japan and kicked out the Russians.
In another the Allies conquered Germany and kicked out the Russians.
I never tried a simulation where the Japanese fleet was met by 10 or so Montana Class battleships at Pearl Harbor. All it would take would be a bit of “infilling”.
I think the local weather guys are using predictions as the actual weather measurements. They are often shocked when it is cold outside or even simply nice. Their predictions a week out are certain doom but when a week goes by nothing happens. I’m waiting to hear “The predicted highs in 2020 were 10% lower than the temperature today.”
Modern weather forecasters (and weather TV channels!) compete with cell phone apps. Their only hope is casting and wardrobe.
If they did more of these, they’d have more viewers.
https://country1025.com/listicle/15-of-the-most-hilarious-tv-weather-bloopers-youve-ever-seen/
10%? This only makes sense on an absolute temp scale. If the predicted value was 300K, 10% lower is 270K. In C or F any talk of % is absolutely meaningless.
You definitely understand the absurdity.
With AI being a deferral to an authority, it is ripe for these problems as well as circular logic.
Even with a gate keeper of correctness, AI is quite likely to finish only slightly better than Wikipedia for answers.
But SCIENCE!!!
It is likely to finish worse. It will regurgitate Wikipedia garbage and add its own.
Wikipedia is cooked by what and who they will accept as sources. The New York Times, which was all in on Russiagate, is acceptable. The Federalist, which debunked Russiagate, among other “scandals”, is not.
Wiki has corrected the page on Tyndall.
It formerly claimed his experiments involved spectroscopy.
As such one must approach anything in Wiki as no better than a moving target.
“a complete waste of time and money.”
Unfortunately, they have a lot of our money to waste.
Unfortunately, it is not their money.
I just watched a video that supposedly had 2 opposing AI teams discussing climate change. It was really, really bad, with multiple caveats essential in the discussion. It focused primarily on CO2. The multitude of assumptions stated as facts was the main problem. It was like current first-class high school stuff. It ended with both teams agreeing carbon capture was a reasonable thing to do. I’m thinking how easy it would be to bias this sort of thing upfront, claim neutrality, and make it seem that independent, unbiased agents have come to a reasonable set of conclusions.
The origins of the climate studies sponsored by the UN was to learn and understand the climate, both natural and anthropogenic.
That soon changed to study the effects of CO2 on the climate.
Not too long after, a Clinton-Gore rep to the IPCC changed the summary from “no discernable human signature” to definitely blah blah and many of the scientists were livid over the changes.
So IPCC implemented some rules.
One is, if the summary disagrees with the science reports, the science reports have to be revised.
“The research not only confirmed known climate trends but also revealed previously unrecorded extreme events.”
Yuh, extreme events that nobody noticed. 🙂
And somehow those extremes did not show up in the bristle cones.
“…. a complete waste of time and money”
Along with being evil.
“root mean square error and Spearman’s rank-order correlation coefficient”
Sciency stuff -horror! Tools from Statistics 101.
Spearman’s name needs to be erased to support equal outcome goals.
“Spearman’s theory of general intelligence is known as the two-factor theory and states that general intelligence or “g” is correlated with specific abilities or “s” to some degree. All tasks on intelligence tests, whether related to verbal or mathematical abilities, were influenced by this underlying g factor.”
Except it is not root mean square that applies, it is root sum squared.
RMS generally applies to sine waves and AC electrical power.
RSS generally applies to quantify min/max errors.
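For what it’s worth, the two quantities differ only in whether you divide by the number of terms before taking the root; a quick sketch with made-up error terms:

```python
# Quick sketch of the two quantities being contrasted above, computed from the
# same (made-up) list of component error terms.
import math

errors = [0.3, -0.5, 0.2, 0.4]   # hypothetical individual error terms

rss = math.sqrt(sum(e ** 2 for e in errors))                # root sum square
rms = math.sqrt(sum(e ** 2 for e in errors) / len(errors))  # root mean square

print(f"RSS = {rss:.3f}, RMS = {rms:.3f}")
```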
They have always made it up, nothing new there. No temperature rise for 100 years, and no sea level rise since 1970.
Now take away the number you first thought of…
What is not a fantasy is the collapse of the German car industry along with the entire German economy.
German politicians cannot see this economic trainwreck coming. They are locked in to reducing CO2 despite the horrific negative consequences. They are fanatics.
And their fanaticism is not based on scientific data, it is based on speculation and assumptions and unsubstantiated assertions about CO2 and its interaction with the Earth’s atmosphere.
In other words, they are going off half-cocked and trashing their own economy in the process.
The same goes for the fanatic CO2-phobe UK politicians. They are heading down the same road as the German politicians.
CO2-phobia brought them low.
Definition of Phobia:
“A phobia is an anxiety disorder that causes an irrational and persistent fear of an object, situation, or activity.”
That’s what Climate Alarmists are suffering from, to the detriment of the rest of us.
Is this an example of an AI hallucination or simply GIGO (Garbage In, Garbage Out)?
I say it’s good old-fashioned GIGO. The AI didn’t invent nonsense. It was fed nonsense. The models don’t know what’s real and what isn’t.
Real intelligence can apply scientific method to test conflicting information for self-consistency, can critically inspect data quality for accuracy and precision and methodological collection errors, and can also differentiate between measurements and assumptions. Real intelligence can apply conscious effort to reduce the number of underlying assumptions needed for self-consistent data interpretation (Occam’s razor). Current AI versions indiscriminately distill large databases for an average version of the most frequently encountered answer. They only use their so-called “intelligence” in this last step of verbalizing the average answer.
All AI can do is an assessment based on the preponderance of the evidence.
If there are multiple repeats of a whatever, it counts each as a unique source.
AI is incapable of catching its own errors.
I wonder if an AI could figure out how to divide 1 by 0?
Now there’s a headline I couldn’t have imagined 20 years ago. 🙃
“Using historical simulations from the CMIP6 archive (Coupled Model Intercomparison Project), the team trained CRAI to reconstruct past climate data.”
This is the problem, trained by whom for what purpose? For me AI is little more than accessing the creator’s views. Proper observations and measurements are the only acceptable recording methods.
The authors of the study have put all of their code up on GitHub for anyone to download. You can train it yourself using whatever data you like and for whatever purpose you want. So if you want to think it is accessing the creator’s views then you should also think that it can access your own views.
It is still just FABRICATING AN OUTCOME to suit a purpose.
When are you going to wake up to reality.
How can you continue to be so incredibly GULLIBLE. !
Is your mind still stuck at pre-teen level?
Don’t answer, because you obviously wouldn’t know.
“You can train it yourself using whatever data you like and for whatever purpose you want.”
ROFLMAO
Poor Izzy-dumb has just said that the output is purely dependent on the junk data and junk purpose of the user.
Talk about foot in mouth disease. A FAILURE of monumental proportion !
Then and again, perhaps he has shown that he does have some tiny understanding of the FAKE world of climate models etc !!
Of course the output is purely dependent on the input. Suppose all the program did was to do infilling by simple linear interpolation. If you fed it junk data it would return the average of the junk data. If you gave it real data it would return the average of the real data. Doing infilling using a neural net is no different in principle from taking a weighted average of the nearby stations. The only difference is that the weights are not predetermined in advance but rather are generated by comparing with training data.
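A toy sketch of that last point (weights fitted from training data rather than fixed in advance), with entirely invented numbers, and not the paper’s method:

```python
# Toy illustration of "weights learned from training data" versus fixed weights.
# Numbers are invented; this is just the idea in miniature, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical training set: readings at three neighbour stations (columns)
# and the value at the gap location we want to infill (target).
neighbours = rng.normal(15.0, 5.0, size=(200, 3))
target = (0.5 * neighbours[:, 0] + 0.3 * neighbours[:, 1]
          + 0.2 * neighbours[:, 2] + rng.normal(0.0, 0.5, size=200))

# "Learn" the weights by least squares instead of fixing them in advance.
weights, *_ = np.linalg.lstsq(neighbours, target, rcond=None)

new_obs = np.array([14.0, 16.5, 15.2])   # neighbour readings on a day with a gap
print("learned weights:", np.round(weights, 2))
print("infilled value :", round(float(new_obs @ weights), 2))
```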
There is no such thing as a weighted “average” of an intrinsic property such as temperature.
Creating an average requires summing the values of a property. You cannot sum an intensive property like temperature. If the temperature in Berryton, KS is 70F and in Topeka is 73F (about 10 miles apart) you simply can’t add 70F to 73F and get a total of 143F, it’s a nonsense number. The average is *NOT* 72F since there is no such value as 143F. There is not even a guarantee that you can find a point between Topeka and Berryton where the temperature is 72F.
If I have a mass of 2kg in Berryton and a mass of 5kg in Topeka I *can* add the two masses and get 7kg. Thus I can state the average value of the two masses is 4kg.
You betray your training to be in statistics and not physical science. Statisticians believe you can average any set of numbers – i.e. numbers is numbers. Physical scientists understand you can only average extensive properties.
“There is not even a guarantee that you can find a point between Topeka and Berryton where the temperature is 72F.”
Actually there is a guarantee. It is the intermediate value theorem from elementary maths. As long as the temperature is a continuous function then there will always be a place between Topeka and Berryton where the temperature is 72F. And there is no physical way the temperature could be discontinuous.
Do you know what a thermocline is? I didn’t say there wasn’t a midpoint. I said there is no guarantee that you can find it! If you can’t find it then how can you use it to infill the temperature at any specific point between Topeka and Berryton? All this kind of statistical garbage does is contaminate the infill point with the combined measurement uncertainties of the points used to generate the average PLUS an additive factor based on the fact that you cannot assume a smooth gradient of temperature between two different points let alone multiple points. It just magnifies the Garbage In, Garbage Out output!
You haven’t proven that you can take an average of temperatures at different locations and use the average to infill the temperature at any specific point between the locations. All you’ve done is spout handwaving FM.
AI is great for pattern recognition.
It’s also called fraud.
HAHAHAHAHAHAHAHAHAAAAAAAA!
Do these “experts” (dunderheads) not understand that RMSE calculations require a true value? Fake Data taken to the extreme.
So, the AI generators can get it wrong way faster than humans? That’s an achievement!
So, does this mean only 985 billion is needed instead of 9.85 trillion?
Deus ex machina comes into its own.
Train an AI on wrong, expect it to produce right.
The standard motif of modern climatology finds its logical extreme.
Averaging wrong results can’t produce a correct average. How many AI algorithms get trained on measurement uncertainty as opposed to sampling error?
Climatology is a liberal art, not a quantitative physical science.
It is worth pointing out that the title of the blog post is misleading and wrong. Generative AI is not being used. In the article they state that “Depending on the infilling task, either CNN (Convolutional Neural Network)-based or GAN (Generative Adversarial Network)-based approaches are generally adopted. … In this study, we are opting for a CNN-based approach”. So no generative AI is being used. It is just standard machine learning and is not a large language model, so all the stuff about AI hallucinations simply does not apply.
It is also worth being clear about what they are trying to do. It is to solve the common problem of infilling gaps in data that are the result of sparse measurements. There are any number of ways to do this and one way is to train a neural net to make predictions about what the values should be when given information about neighbouring points. It is no different from what the brain does when it fills in missing visual information to make it look like we have a complete high resolution picture of the world around us when in fact our eyes only focus on tiny regions at any one time.
You are regurgitating GIBBERISH that you have absolutely ZERO UNDERSTANDING of.!
Emulating the Kamala word-salad.
Hilarious ! 🙂
And again he desperately searches for an ad hom with which to project his inconsequence, in the sad belief that it matters a jot, other than to his ego and some psychotherapeutic need to vent his anger in a certain direction.
Oxy, do try to not project your own inadequacy upon others, eh.
There’s a good boy.
Though it is endlessly entertaining and does most denizens of this place no favours.
Banton again shows he has zero clue what any of this stuff is.
You really are just an ignorant and gullible twit, aren’t you.
I have worked with neural networks, network linear programming, genetic optimisation of multi-decision multi-objective scenarios… etc
I am betting you are TOTALLY CLUELESS about any of them, as you are of basically everything else.
You are an ignorant mutt, nothing more, and almost certainly far less.
Somewhat like homogenization of the temperature records blending poor station data with pristine data to “infill gaps”.
“It is to solve the common problem of infilling gaps in data that are the result of sparse measurements.”
“predictions about what the values should be”
Infilling of intensive properties is impossible. How would the AI algorithm use temperatures in Colorado Springs to infill temperatures at the summit of Pikes Peak, just a few miles away. How would the AI algorithm use temperatures on the north side of the Kansas River valley to infill temperatures on the south side of the valley, just a few miles away.
What you are describing is a Garbage In, Garbage Out process. It suffers from the same base problem that climate science does in trying to determine a “global temperature average”. You can’t average intensive properties. Determining an average requires determining a sum of a property before dividing by the number of items in the data set. If I have temperatures 20C, 30C, and 10C their sum is *NOT* 60C – you simply can’t add intensive properties in that manner. There is no “average” of 20C. It’s not like having masses of 10kg, 20kg, and 30kg – you *can* add those and get 60kg. Mass is an extensive property.
Just because you can do addition of numbers on a blackboard it doesn’t mean that the addition makes any physical sense – unless you are a climate scientist.
With proper input and assumptions, one can infer the amount of energy from temps, and possibly interpolate energy content for missing locations and back-infer temps. The key is the validity of the additional input on the amount of matter, heat flow conditions, etc.
Of course you can infill intensive properties. Suppose you have an iron bar that has a length of 1 metre and you heat one end up to 100C and you keep the other end at a constant temperature of 0C. Now because the temperature changes continuously along the length of the rod you can infill the temperatures to make an estimate of the temperature along the length. The simplest approach would be to use a weighted average of the end temperatures, but you could come up with a more complicated formula if you like.
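A minimal sketch of that weighted-average infilling along the bar, assuming the idealised steady-state linear profile:

```python
# Sketch of the weighted-average infilling described above: a 1 m bar with one
# end held at 100 C and the other at 0 C, assuming an idealised steady-state
# linear temperature profile.
def infill_bar_temp(x, t_hot=100.0, t_cold=0.0, length=1.0):
    """Estimate temperature at position x (metres from the hot end)."""
    w = x / length                       # weight grows toward the cold end
    return (1.0 - w) * t_hot + w * t_cold

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f} m -> {infill_bar_temp(x):.1f} C")
```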
And you assume this logic can be applied to the earth’s atmosphere.
Weather forecasters do it every day. They have a finite number of temperature stations and then use interpolation to work out the temperature at places in between. Which is fine since the temperature is a continuous function of position. You can always fit the temperature data using a finite set of spherical harmonics, for example, and then use those spherical harmonics to interpolate the temperature field between the points. This will give you an approximation to the local temperature at arbitrary points on the globe. How accurate that is will depend on how many points you use.
Weather forecasters do *NOT* do this every day. The closest they get is saying “the temps will be in the 50’s in NE Kansas”. They don’t try to guess at specific temps at specific locations, certainty not to the thousandths of a degree!
“Which is fine since the temperature is a continuous function of position.”
Temperature is *NOT* a continuous function of position. You, and apparently most of climate science, have no basic understanding of physical science at all.
Temps on the north side of the Kansas River valley can be decidedly different than temps on the south side because the environmental conditions at any point in time can be drastically different. You simply cannot assume a homogeneous atmosphere between the two areas. Infilling temps from the north side to locations on the south side does nothing but INCREASE the measurement uncertainty of the infilled values!
This is why it is so important for climate science to abandon the use of temperature as a proxy for climate. The extensive property that should be used is enthalpy – which includes factors such as humidity – which is actually a measure of an extensive property known as heat.
You are still betraying your training to be as a statistician.
What you are actually measuring in the rod is the transfer of heat and not temperature. The heat transfer *determines* the temperature along the rod and heat is an extensive property. Temperature doesn’t “flow” along the rod, heat does.
You forgot to consider that performing your math requires you to assume a homogeneous rod with no impurities or joints in the rod – assumptions that don’t apply to the atmosphere. If your rod is not homogeneous then how do you calculate an “average” temperature?
Better yet, what does that average temperature tell you physically? Where in the rod does that “average” temperature exist? If that rod material is a very good insulator then where in that rod does the “average” temperature exist? Midway along the rod?
What if the rod is actually a cylinder of vacuum defined by a perfect insulating material? You insert a probe at 100C at one end. What is the temp at the other end? What is the average temp of the vacuum along the cylinder?
This is meant to try and get you to see that you are actually measuring an extensive property of the rod and not an intensive property.
I’ll ask again, how do you assume a homogenous atmosphere between two points so you can “infill” an average value of an intensive property?
Tim,
I am not calculating the “average temperature” but rather interpolating the temperature given two known values at two points. Since the time of Fourier we know we can write the temperature distribution using a Fourier series and then use that sum to calculate the temperature at arbitrary points along the rod.
The same is true with the atmospheric temperatures. The temperature is a continuous function of position (although there are points where the rate of change is large) and as such can be written as a sum of spherical harmonics (or any other set of basis functions). Given a finite number of values you can perform a least squares fit and come up with coefficients of the basis functions. Using these coefficients you can infill the temperature at different points and compare this to measured values. This can be done for any continuous function and there is nothing mysterious about it.
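A small one-dimensional sketch of that basis-function idea (a short Fourier-style basis fitted by least squares to invented samples, standing in for spherical harmonics on a sphere):

```python
# Toy sketch of the basis-function idea in one dimension: fit a short
# Fourier-style basis to scattered samples by least squares, then evaluate
# the fitted series at unsampled points. Data are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
x_obs = np.sort(rng.uniform(0, 2 * np.pi, 25))             # sparse sample locations
t_obs = (15 + 5 * np.sin(x_obs) + 2 * np.cos(2 * x_obs)
         + rng.normal(0, 0.3, x_obs.size))                  # "measured" values

def design(x, n_harmonics=3):
    """Columns: constant term plus sin/cos pairs up to n_harmonics."""
    cols = [np.ones_like(x)]
    for k in range(1, n_harmonics + 1):
        cols += [np.sin(k * x), np.cos(k * x)]
    return np.column_stack(cols)

coeffs, *_ = np.linalg.lstsq(design(x_obs), t_obs, rcond=None)

x_new = np.array([1.0, 3.0, 5.0])                           # points with no observation
print(np.round(design(x_new) @ coeffs, 2))                  # infilled estimates
```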
“Interpolating” implies you know the gradient between two points and can generate a value based on that known gradient.
That simply doesn’t apply in the real world. If it did, then two carbon resistors would have a value you could “interpolate” somehow. The problem is that is impossible, even with objects taken from the same manufacturing run. Random fluctuations in the material will prevent establishing a known gradient that can be used to interpolate values.
Again, being a continuous function does *NOT* mean that the gradient is known at all points. If you don’t know the gradient at all points then you can’t interpolate anything that doesn’t have an additive measurement uncertainty factor from doing so.
A least squares fit implies a linear gradient. Not everything in the world can be described by a linear gradient, especially where something like a river valley generates different wind velocities, different humidity values, and different pressures at different locations along the valley and on each side of the valley. Each of those factors will modulate the temperatures at the different locations. Terrain and geography are important factors in determining gradients of each and every factor involved (think Pikes Peak and Colorado Springs) and are completely ignored when you just average temperatures of different locations, assuming a linear gradient exists because the earth is flat and everything is totally homogeneous.
“compare this to measured values. “
If you know the measured values then why are you generating “infill” values to use instead? If you don’t know the measured values at a point then what are the factors you are comparing?
Another word salad.
Had you stopped after the first 4 sentences, you would have made a valid point.
This is fraud, not science.
PRECISELY!
A fancy way of creating FAKE DATA..
I suspect the operators aren’t even aware just how FAKE the junk they are generating is.
I was going type “FAKE DATA” but now I see you already have.
They got another line for the ole résumé, success achieved.
creating outputs that are nonsensical or altogether inaccurate.
I believe I mentioned elsewhere – but I have zero trust in AI (LLM I guess) after ChatGPT told me all sorts of stuff about the business I own and almost all of it was false.