Guest essay by Eric Worrall
According to the big computer we are doomed to suffer ever more damaging weather extremes. But researchers can’t tell us exactly why, because their black box neural net won’t explain its prediction.
Human activity influencing global rainfall, study finds
Anthropogenic warming of climate has been a factor in extreme precipitation events globally, researchers say
Charlotte Burton
Wed 7 Jul 2021 15.00 AEST…
While there are regional differences, and some places are becoming drier, Met Office data shows that overall, intense rainfall is increasing globally, meaning the rainiest days of the year are getting wetter. Changes to rainfall extremes – the number of very heavy rainfall days – are also a problem. These short, intense periods of rainfall can lead to flash flooding, with devastating impacts on infrastructure and the environment.
“We are already observing a 1.2C warming compared to pre-industrial levels,” pointed out Dr Sihan Li, a senior research associate at the University of Oxford, who was not involved in the study. She said: “If warming continues to increase, we will get more intense episodes of extreme precipitation, but also extreme drought events as well.”
Li said that while the machine-learning method used in the study was cutting edge, it currently did not allow for the attribution of individual factors that can influence precipitation extremes, such as anthropogenic aerosols, land-use change, or volcanic eruptions.
The method of machine learning used in the study learned from data alone. Madakumbura pointed out that in the future, “we can aid this learning by imposing climate physics in the algorithm, so it will not only learn whether the extreme precipitation has changed, but also the mechanisms, why it has changed”. “That’s the next step,” he said.
…
Read more: https://www.theguardian.com/environment/2021/jul/07/human-activity-influencing-global-rainfall-study-finds
The abstract of the study;
Anthropogenic influence on extreme precipitation over global land areas seen in multiple observational datasets
Gavin D. Madakumbura, Chad W. Thackeray, Jesse Norris, Naomi Goldenson & Alex Hall
The intensification of extreme precipitation under anthropogenic forcing is robustly projected by global climate models, but highly challenging to detect in the observational record. Large internal variability distorts this anthropogenic signal. Models produce diverse magnitudes of precipitation response to anthropogenic forcing, largely due to differing schemes for parameterizing subgrid-scale processes. Meanwhile, multiple global observational datasets of daily precipitation exist, developed using varying techniques and inhomogeneously sampled data in space and time. Previous attempts to detect human influence on extreme precipitation have not incorporated model uncertainty, and have been limited to specific regions and observational datasets. Using machine learning methods that can account for these uncertainties and capable of identifying the time evolution of the spatial patterns, we find a physically interpretable anthropogenic signal that is detectable in all global observational datasets. Machine learning efficiently generates multiple lines of evidence supporting detection of an anthropogenic signal in global extreme precipitation.
Read more: https://www.nature.com/articles/s41467-021-24262-x
As an IT expert who has built commercial AI systems, I find it incredible that the researchers seem so naive as to think their AI machine output has value, without corroborating evidence. They admit they are going to try to understand how their AI works – but in my opinion they have jumped the gun, making big claims on the basis of a black box result.
Consider the following;
Amazon ditched AI recruiting tool that favored men for technical jobs
Specialists had been building computer programs since 2014 to review résumés in an effort to automate the search process
…
Amazon’s machine-learning specialists uncovered a big problem: their new recruiting engine did not like women.
…
But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.
That is because Amazon’s computer models were trained to vet applicants by observing patterns in résumés submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.
In effect, Amazon’s system taught itself that male candidates were preferable. It penalized résumés that included the word “women’s”, as in “women’s chess club captain”. And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter.
Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said.
…
Read more: https://www.theguardian.com/technology/2018/oct/10/amazon-hiring-ai-gender-bias-recruiting-engine
In hindsight it is obvious what happened. The Amazon AI was told to try to select the most suitable candidates, and it noticed more male candidates were being accepted for technical jobs, likely because there were more male candidates applying. So it concluded men are more suitable for technical jobs.
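The mechanism is easy to demonstrate with a toy model. The sketch below (all numbers and tokens are invented for illustration, not Amazon's actual data) shows how a naive frequency model trained on a decade of historical hiring outcomes, where résumés carrying a female-coded token were both rarer and hired at a lower rate, simply learns that bias back as a "pattern":

```python
# Hypothetical, invented numbers: a decade of résumés dominated by male
# applicants, with a historically lower hire rate for résumés carrying
# a female-coded token. Each entry is (set of tokens, hired 1/0).
history = (
    [({"python", "chess_club"}, 1)] * 80 +
    [({"python", "chess_club"}, 0)] * 120 +
    [({"python", "womens_chess_club"}, 1)] * 4 +
    [({"python", "womens_chess_club"}, 0)] * 16
)

def token_hire_rate(token):
    """Fraction of resumes containing `token` that were hired --
    the per-token score a naive frequency model would learn."""
    outcomes = [label for tokens, label in history if token in tokens]
    return sum(outcomes) / len(outcomes)

print(token_hire_rate("chess_club"))         # 0.4
print(token_hire_rate("womens_chess_club"))  # 0.2
```

A model scoring candidates by these learned rates penalizes the "women's" token for the rest of its life, exactly as reported. Nothing in the algorithm is wrong; the training data simply encoded the historical outcome as if it were a merit signal.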
It is important to note this male bias in technical jobs is purely a Western cultural issue. When I visited a software development shop in Taipei, there were just as many women as men developing software. The women I have met, in Western IT shops and in that IT shop in Taipei, were just as smart and technically capable as any man. Somehow we are persuading our women not to pursue technical careers.
My point is, when scientists unleash a black box AI on a set of data, they have no way of knowing whether the output of that AI is what they think it is, until they painstakingly rip the AI apart to work out exactly how it formed its conclusions.
The climate scientists think they have discovered a significant camouflaged anthropogenic influence. Or they may have discovered a large hidden bias in their data or models. To be fair they admit there might be problems with their training data, and the climate models they use to hindcast what conditions would have been without anthropogenic influence. “… In addition, the training GCMs might be undersampling the low-frequency natural variability such as Atlantic Multidecadal variability and Pacific Decadal Oscillation. …” This admission should have been their headline.
Until they break their black box system down, work out exactly how their AI is reaching its conclusion, and present the real method for review, the method which is currently hidden inside their AI, it seems remarkably premature to go for a big announcement, just because they like the look of their result.
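There are standard, if partial, ways to start unpacking a black box from the outside. One of the simplest is permutation importance: scramble one input feature at a time and measure how much the model's error grows, which reveals which inputs the model actually relies on. A minimal sketch (the "black box", data, and names are all invented here, nothing is taken from the paper):

```python
import random

random.seed(0)

# Toy "black box": stands in for an opaque trained model. By construction
# only feature 0 drives the output; feature 1 is ignored.
def black_box(row):
    return 2.0 * row[0] + 0.1

# Synthetic dataset: 200 rows of 2 features, targets from the same rule.
X = [[random.uniform(0, 1), random.uniform(0, 1)] for _ in range(200)]
y = [2.0 * x0 + 0.1 for x0, _ in X]

def mse(model, X, y):
    return sum((model(r) - t) ** 2 for r, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, feature):
    """Shuffle one feature column and measure how much the error grows.
    A feature the model ignores produces roughly zero increase."""
    base = mse(model, X, y)
    shuffled = [row[feature] for row in X]
    random.shuffle(shuffled)
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, shuffled)]
    return mse(model, X_perm, y) - base

imp0 = permutation_importance(black_box, X, y, 0)
imp1 = permutation_importance(black_box, X, y, 1)
print(f"importance of feature 0: {imp0:.4f}")  # large
print(f"importance of feature 1: {imp1:.4f}")  # essentially zero
```

Probes like this tell you which inputs matter, but not why, which is exactly the gap between "the model found a signal" and "we understand the mechanism" that the researchers themselves concede.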
AI = barely adequate imitation.
But apparently the only way to detect anthropogenic climate change.
Maybe it is called artificial for a reason: it cannot work outside the box, or think.
From the above-quoted abstract of the study (it does not merit the title “scientific research”) by Madakumbura et al.:
“Using machine learning methods that can account for these uncertainties and capable of identifying the time evolution of the spatial patterns, we find a physically interpretable anthropogenic signal that is detectable in all global observational datasets. Machine learning efficiently generates multiple lines of evidence supporting detection of an anthropogenic signal in global extreme precipitation.”
Hmmm . . . it is certain that in the past Earth has never entered a glacial period, let alone a true Ice Age, without the associated lower atmospheric temperatures causing a majority of the water vapor then in Earth’s atmosphere to precipitate out globally as rainfall or snow . . . the predominant mechanism for the formation of glaciers and ice sheets.
Just consult a psychrometric chart or a table of absolute humidity versus air temperature, based on a 10-12 °C decrease from today’s GLAT of about 12 °C . . . alternatively, just consider how little water vapor air can hold at water’s freezing point of 0 °C.
That being the case, it would be a wonderful idea to test this new, assumed-to-be-objective AI (aka “machine learning”) “tool” against the conditions leading to the last glacial interval on Earth. I’m betting 10:1 that, if left unmodified, the current AI “tool” discussed in the above article would also “efficiently generate multiple lines of evidence supporting detection of an anthropogenic signal in global extreme precipitation”, starting about 120,000 years ago.
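The commenter's psychrometric point is easy to check numerically. The sketch below uses the standard Magnus approximation for saturation vapor pressure plus the ideal gas law for water vapor (the coefficients are the common textbook values; a psychrometric table would give essentially the same answer) to compare how much water vapor air can hold at about 12 °C versus 0 °C:

```python
import math

def saturation_vapor_density(t_celsius):
    """Approximate maximum water vapor content of air (g/m^3) at a given
    temperature: Magnus formula for saturation vapor pressure, then the
    ideal gas law for water vapor (Rv = 461.5 J/(kg K))."""
    e_s = 611.2 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))  # Pa
    t_kelvin = t_celsius + 273.15
    return e_s / (461.5 * t_kelvin) * 1000.0  # g/m^3

for t in (12.0, 0.0):
    print(f"{t:5.1f} C: {saturation_vapor_density(t):5.2f} g/m^3")
```

At 0 °C, saturated air holds less than half the water vapor it holds at 12 °C (roughly 4.9 versus 10.6 g/m³), which is the physics behind the commenter's point: a large temperature drop forces most of the atmosphere's water vapor out as rain and snow, no anthropogenic signal required.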
“Using machine learning methods that can account for these uncertainties and capable of identifying the time evolution of the spatial patterns, we find a physically interpretable anthropogenic signal that is detectable in all global observational datasets.”
So we used AI to find “the time evolution” (aka a trend) in precipitation, and then WE attributed that trend to AGW.
Red scarf trick, climate “science” 101.
“Machine learning efficiently generates multiple lines of evidence supporting detection of an anthropogenic signal in global extreme precipitation”.
It says what I believe, and therefore it must be right.
I believe, therefore I know.
What could be simpler, and completely idiotic at the same time.
Lysenko rises from the grave to save us from success.
Models are real. Reality is fake.
The Black Box cannot think, create, or imagine. It can only calculate based on mathematical input. Depending on a brilliantly stupid machine system for answers is like asking HAL 9000 to open the pod bay doors.
I suggest that all these “brilliant” science persons go back to using slide rules, blackboards, and pencils with erasers, and not be allowed to touch or go near a computer of any kind – not even an IBM with A/B drives that uses floppies – until they learn to use their brains.
We are not doomed. They are. Dependence on AI squelches imagination and creativity.
Sad.
When one asks a question of Deep Thought, they need to be very careful how they word it. What one gets as an answer may appear nonsensical, which it may well be. GIGO!
Nothing can go wrong (click) Nothing can go wrong (click) Nothing can go wrong (click) ….
By carefully choosing the data an AI reviews, it can be trained to find anything.
The fact that it can’t actually explain why it found something, in climate science, is considered an advantage.
Funny, that. What is it that the AI uses to determine that whatever data is fed to it is caused by AGW? Any variation of a given climatic metric has many possible causes, and one must input the assumption that it is caused by Man. Otherwise, how would the AI determine it is AGW? It appears they use GCMs as input because, as they assert, GCM results are “robust” and observations don’t support the AI conclusions.
“Somehow we are persuading our women not to pursue technical careers”
Well, we’ve been telling them for decades that they won’t be paid as much as men in these fields, will be discriminated against in school and for job openings, and won’t receive promotions. All of these things are likely false (the “wage gap” has been debunked so many times it’s amazing to me that the myth persists), but when women have been force-fed these beliefs for so long, is it any wonder that many of them decide “why bother” if they think they’re going to be constantly undermined if they enter a science or engineering field anyway?
Makes much more sense to coast through college with a degree that ends in “studies” where they’ll be welcomed with open arms right?
So CAGW comes down to the realization it is a new form of ransomware with everyone as victims.
Look at the difference between the title and the first sentence of the abstract. This is from a “scientific” paper, not the usual semi-literate stuff we see from journalists or press-release writers.
“Anthropogenic influence on extreme precipitation over global land areas seen in multiple observational datasets”
“The intensification of extreme precipitation under anthropogenic forcing is robustly projected by global climate models, but highly challenging to detect in the observational record”
Although my mental processes are still a bit foggy, thanks to the Covid-19 I had in April last year (so, my apologies if I misread it), the authors appear to me to be juxtaposing two diametrically opposed statements. In other words, the title of the paper makes a claim that the authors themselves cannot substantiate.
Good grief. This looks to be a new low point for climate science (and that is saying something!). It’s doublethink in action, live in 2021!!
============
Eric makes very valid comments about how they should unpack their AI program to see what it’s actually doing. I offer this little anecdote from the AI community:
I was asked to look at some mineral exploration targets in an area of the Canadian Shield, that had been generated by an AI company. I had previously looked at the area using only my general knowledge of geology and mineral exploration based on 40-odd years of practical experience, and I was not impressed. I thought (actually, I knew with 97% certainty) that the AI had relied too much on a single parameter (which was IMHO the one parameter they should NOT have used). I asked about what databases the AI used and how did it select targets. I was told (this is me paraphrasing it) that they didn’t know how the AI made its selection, and that they weren’t supposed to know, because that was the point of AI.
“Ignorance is knowledge”
Garbage in, garbage out… This does not take a genius to figure out. They are feeding in manipulated data, and the AI is simply performing pattern recognition on what they are doing to the data.
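The commenter's point can be illustrated with a toy example (all numbers invented): start with pure noise, apply a hypothetical "adjustment" that tilts the record, and the simplest possible pattern detector, an ordinary least-squares slope, dutifully finds exactly the trend that was put in:

```python
import random

random.seed(42)

# "Raw" observations: pure noise around a constant level -- no trend.
years = list(range(100))
raw = [random.gauss(10.0, 1.0) for _ in years]

# A hypothetical adjustment step that happens to tilt the record by
# +0.02 units per year.
adjusted = [v + 0.02 * t for v, t in zip(raw, years)]

def ols_slope(x, y):
    """Ordinary least-squares slope -- the simplest 'pattern detector'."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

print(f"slope in raw data:      {ols_slope(years, raw):+.4f} per year")
print(f"slope in adjusted data: {ols_slope(years, adjusted):+.4f} per year")
```

Any detector, from a least-squares fit to a deep neural net, will report the injected 0.02-per-year trend with complete confidence; no algorithm downstream of the adjustment can tell you whether the signal was in the original observations or in the processing.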
From the article’s abstract: “Previous attempts to detect human influence on extreme precipitation have not incorporated model uncertainty, and have been limited to specific regions and observational datasets.” What are the odds that a model trained to detect human influence on extreme precipitation will detect human influence on extreme precipitation?
Watch this clip about AI and see if these academics have more work to do.
“I have detected Anthropogenic Climate Change.
I’m sorry, Dave, but I can’t let you burn any more fossil fuel.”
Is this program named HAL, by any chance?
I believe this article https://www.antipope.org/charlie/blog-static/2021/03/lying-to-the-ghost-in-the-mach.html has bearing on the subject. Evidently, it’s easy to lie to AI enough that its subsequent output is worthless.