Claim: Machine Learning can Detect Anthropogenic Climate Change

Guest essay by Eric Worrall

According to the big computer, we are doomed to suffer ever more damaging weather extremes. But researchers can’t tell us exactly why, because their black box neural net won’t explain its prediction.

Human activity influencing global rainfall, study finds

Anthropogenic warming of climate has been a factor in extreme precipitation events globally, researchers say

Charlotte Burton
Wed 7 Jul 2021 15.00 AEST

While there are regional differences, and some places are becoming drier, Met Office data shows that overall, intense rainfall is increasing globally, meaning the rainiest days of the year are getting wetter. Changes to rainfall extremes – the number of very heavy rainfall days – are also a problem. These short, intense periods of rainfall can lead to flash flooding, with devastating impacts on infrastructure and the environment.

“We are already observing a 1.2C warming compared to pre-industrial levels,” pointed out Dr Sihan Li, a senior research associate at the University of Oxford, who was not involved in the study. She said: “If warming continues to increase, we will get more intense episodes of extreme precipitation, but also extreme drought events as well.”

Li said that while the machine-learning method used in the study was cutting edge, it currently did not allow for the attribution of individual factors that can influence precipitation extremes, such as anthropogenic aerosols, land-use change, or volcanic eruptions.

The method of machine learning used in the study learned from data alone. Madakumbura pointed out that in the future, “we can aid this learning by imposing climate physics in the algorithm, so it will not only learn whether the extreme precipitation has changed, but also the mechanisms, why it has changed”. “That’s the next step,” he said.

Read more: https://www.theguardian.com/environment/2021/jul/07/human-activity-influencing-global-rainfall-study-finds

The abstract of the study;

Anthropogenic influence on extreme precipitation over global land areas seen in multiple observational datasets

Gavin D. Madakumbura, Chad W. Thackeray, Jesse Norris, Naomi Goldenson & Alex Hall

The intensification of extreme precipitation under anthropogenic forcing is robustly projected by global climate models, but highly challenging to detect in the observational record. Large internal variability distorts this anthropogenic signal. Models produce diverse magnitudes of precipitation response to anthropogenic forcing, largely due to differing schemes for parameterizing subgrid-scale processes. Meanwhile, multiple global observational datasets of daily precipitation exist, developed using varying techniques and inhomogeneously sampled data in space and time. Previous attempts to detect human influence on extreme precipitation have not incorporated model uncertainty, and have been limited to specific regions and observational datasets. Using machine learning methods that can account for these uncertainties and capable of identifying the time evolution of the spatial patterns, we find a physically interpretable anthropogenic signal that is detectable in all global observational datasets. Machine learning efficiently generates multiple lines of evidence supporting detection of an anthropogenic signal in global extreme precipitation.

Read more: https://www.nature.com/articles/s41467-021-24262-x

As an IT expert who has built commercial AI systems, I find it incredible that the researchers seem so naive as to think their AI’s output has value without corroborating evidence. They admit they are going to try to understand how their AI works – but in my opinion they have jumped the gun, making big claims on the basis of a black box result.

Consider the following;

Amazon ditched AI recruiting tool that favored men for technical jobs

Specialists had been building computer programs since 2014 to review résumés in an effort to automate the search process

Amazon’s machine-learning specialists uncovered a big problem: their new recruiting engine did not like women.

But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.

That is because Amazon’s computer models were trained to vet applicants by observing patterns in résumés submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.

In effect, Amazon’s system taught itself that male candidates were preferable. It penalized résumés that included the word “women’s”, as in “women’s chess club captain”. And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter. 

Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said.

Read more: https://www.theguardian.com/technology/2018/oct/10/amazon-hiring-ai-gender-bias-recruiting-engine

In hindsight it is obvious what happened. The Amazon AI was told to try to select the most suitable candidates, and it noticed more male candidates were being accepted for technical jobs, likely because there were more male candidates applying. So it concluded men are more suitable for technical jobs.

It is important to note this male bias in technical jobs is purely a Western cultural issue. When I visited a software development shop in Taipei, there were just as many women as men developing software. The women I have met, in Western IT shops and in that IT shop in Taipei, were just as smart and technically capable as any man. Somehow we are persuading our women not to pursue technical careers.

My point is, when scientists unleash a black box AI on a set of data, they have no way of knowing whether the output of that AI is what they think it is, until they painstakingly rip the AI apart to work out exactly how it formed its conclusions.

The climate scientists think they have discovered a significant camouflaged anthropogenic influence. Or they may have discovered a large hidden bias in their data or models. To be fair they admit there might be problems with their training data, and the climate models they use to hindcast what conditions would have been without anthropogenic influence. “… In addition, the training GCMs might be undersampling the low-frequency natural variability such as Atlantic Multidecadal variability and Pacific Decadal Oscillation. …“. This admission should have been their headline.

Until they break their black box system down, work out exactly how their AI reaches its conclusions, and present for review the real method, the one currently hidden inside their AI, it seems remarkably premature to go for a big announcement just because they like the look of their result.
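
To make that concrete, here is a minimal sketch of the kind of probing I mean, in Python with scikit-learn, on entirely synthetic data. It is not the study’s code; permutation importance is just one simple way to ask a black box which inputs its conclusion actually leans on.

```python
# A minimal sketch (synthetic data, not the study's code) of one way to start
# "ripping the AI apart": permutation importance asks a fitted black box which
# inputs its output actually leans on, by shuffling each input in turn.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2000
trend = np.linspace(0.0, 1.0, n)            # hypothetical "forced" signal
noise1 = rng.normal(size=n)                 # hypothetical natural variability
noise2 = rng.normal(size=n)                 # an irrelevant input
X = np.column_stack([trend, noise1, noise2])
y = 2.0 * trend + 0.5 * noise1 + rng.normal(scale=0.1, size=n)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Shuffling a feature the model relies on degrades its score sharply;
# shuffling an irrelevant feature barely changes anything.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["trend", "noise1", "noise2"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

If an audit like this accompanied the headline claim, the “black box” objection would carry far less weight.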

John the Econ
July 7, 2021 10:28 pm

When I was in school so many decades ago, I was frequently required to show my work to demonstrate how I came to certain conclusions. With anything less, the answer was considered to be little more than a guess.

Perhaps these AI systems should be required to show their work as well.

dk_
Reply to  John the Econ
July 7, 2021 10:40 pm

Anytime one is working with software, the way to show the work is to allow an auditor complete access to the source code, design documentation, trouble reports, runtime parameters, and all test data, and to have all runtime executions witnessed by a third party with multiple-signature signoffs. Somehow, academics in climate modeling don’t have to do any of this. AI means “we don’t know how it works, but it looks good” in vaporware land.

Russell
Reply to  dk_
July 7, 2021 11:07 pm

Try making that observation on Ars Technica forums and see how you get flamed. Just the first sentence will get 100 plusvotes but adding the 2nd sentence will reverse that to 1000 negvotes. Go figure the techo biases in this world.

dk_
Reply to  Russell
July 8, 2021 3:51 am

Would that be the same Ars Technica that is now featuring under the heading SCIENCE, an article about the thorny ethics of publicly displaying Egyptian mummies?

mcswelll
Reply to  dk_
July 8, 2021 6:25 am

Most AI systems are open source. So yes, you can see the code. Training and test data can be different, depending on the study.

Bryan A
Reply to  mcswelll
July 8, 2021 12:42 pm

Hmmm
Teach a computer that 1+1=3 and it will always see 1+1=3.
Teach a computer to produce a Hockey Stick and it will see global warming
Teach AI to see global warming and it sees global warming regardless of the input.

mcswelll
Reply to  Bryan A
July 8, 2021 6:33 pm

That’s not how it works. You don’t “teach” the computer that 1+1 equals anything, nor do you “teach” it to produce a hockey stick. For machine learning, you provide it training data. Now that data can be biased or wrong, as I already mentioned, but that is not the same as what you’re describing.

dk_
Reply to  mcswelll
July 8, 2021 1:31 pm

Er, no. Many AI software packages are indeed open source, but not the so-called scientific sort referred to here. Most academic AI software used in climate model prediction is protected by data protection laws, copyright, patent, and academic publication practices. Most commercial AI systems are intellectual property.
Few of these systems are ever publicly validated or verified, the opposite of open scientific publication information sharing in any other field. Patented software source code is viewed as a possible source of income by the Universities, and is required, often by the institution or the entity providing the research grant, to be protected (although quite badly, as academic software is possibly also the most subject to industrial espionage). I know of very few places or researchers who publish source code or verification/validation data, or even the identity of the software coders.
I do know of a recent, highly successful public program that took government grant money for several years to fund undergraduate IT students to create mission critical code, but deliberately trashed it all and hired a professional firm to actually do the job at the last minute. The university brought in great money for that project — and still is — but the unused source code written for most of the program was just another government boondoggle.

niceguy
Reply to  dk_
July 8, 2021 3:23 pm

Patents and so-called “intellectual property” are a minefield.
Far from solving the problem, the very ugly chamber of commerce admin (so-called “Trump” admin) tried to make it worse.

mcswelll
Reply to  dk_
July 8, 2021 6:37 pm

I have to admit that you sound like you know what you’re talking about. I was thinking of systems like TensorFlow, Keras, PyTorch, scikit-learn, and similar. Which proprietary ones are you referring to?

dk_
Reply to  mcswelll
July 8, 2021 10:47 pm

Sorry I’ve used none of those. The three proprietary systems that I’ve worked on were each written from the ground up for the purpose (all pattern recognition in different domains) in mixed third-level languages — some C, C++, and Fortran, but several other languages and packages were needed for the final system integrations. All were somewhat dated, and probably quite primitive compared to what is available to commercial users today.
But systems I’ve worked with are not my intended point. Climate scientists don’t seem to offer their software for examination; which should be the minimum to obtain scientific validation through reproducibility. I’ve read the scientific papers of several presentations, and seen only one that offered source code for examination by academic inquirers — not AI incidentally — but none that offered validation, verification, development environment, or even credited the programming team. In Rud Istvan’s book Blowing Smoke [Amazon link], essay Climatastosophistry, one of the climate conspirators’ e-mails is quoted as stating that they would keep their modeling source code secret using the data protection act. While also not AI, this is only a single example of what is now standard climate science practice: the deliberate failure to provide scientific reproducibility for a scientific claim.
My point is from the perspective of vaporware: the AI label (in this case Machine Learning is the AI technique claimed) is doubletalk almost invariably used merely to exaggerate the claim.

mcswelll
Reply to  dk_
July 9, 2021 5:51 pm

Thanks, I appreciate the reasoned (and reasonable) response!

FWIW, the data protection act (the US and the UK ones, don’t know about others) is for the protection of data about living people; to my knowledge, it is not intended to protect other kinds of data. In particular it wasn’t intended to protect meteorological data or source code of any sort. (Source code can however be copyrighted and in some cases patented.) So if that’s the act the conspirator is referring to, it’s certainly a misuse of the act.

bill Johnston
Reply to  dk_
July 9, 2021 7:48 am

Sorta like confirmation bias, isn’t it?

stinkerp
Reply to  John the Econ
July 8, 2021 12:37 am

There is nothing magical about so-called “artificial intelligence”. Machine learning or AI is simply another human-created algorithm that chews through volumes of data to identify patterns and make decisions based on those patterns. The patterns it finds are predetermined by the algorithm, by the priority or weight it assigns to certain kinds of data over others and, most importantly, by the data you give it to “train” on. If you feed it data biased a certain way, then—surprise!—it “learns” those biases and makes decisions based on them. Given the subjective bias of most climate alarmists, it isn’t surprising that they fed it data sets that incorporated those biases and the machine learning algorithms detected those biases.
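
A toy sketch of that failure mode (entirely made-up data, my own construction, not anyone’s actual pipeline): a classifier trained on a historically skewed sample simply reproduces the skew.

```python
# Toy demonstration: biased training data in, biased "decisions" out.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, n)        # two arbitrary applicant groups, 0 and 1
skill = rng.normal(size=n)           # the quantity that should matter
# Historical labels favoured group 0 at equal skill: that bias is in the data.
accepted = (skill + 0.8 * (group == 0) + rng.normal(scale=0.5, size=n)) > 0.5

X = np.column_stack([group, skill])
clf = LogisticRegression().fit(X, accepted)

# Identical skill, different group: the model has faithfully "learned" the bias.
print(clf.predict_proba([[0, 0.0], [1, 0.0]])[:, 1])
```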

This is no different than climate alarmists who ignore data from the real world demonstrating that natural variation dominates climate and focus entirely on the artificial world created in their models.

mcswelll
Reply to  John the Econ
July 8, 2021 6:29 am

You’re correct that it can be difficult to know how a machine learning system reached its output; the intermediate stages can be opaque, although there are ways in neural net systems to see that. On the larger topic of explaining results, that is and has been the subject of a lot of research, e.g. DARPA’s Explainable Artificial Intelligence (XAI) program (https://www.darpa.mil/program/explainable-artificial-intelligence).

dk_
July 7, 2021 10:35 pm

Indistinguishable from magic. Did they explain about the chicken entrails?

Pat from kerbob
Reply to  dk_
July 7, 2021 10:45 pm

Good with onions and garlic

pigs_in_space
Reply to  dk_
July 7, 2021 11:04 pm

Feynman would have said, “it is wrong”, then explained extremely simply why.

dk_
Reply to  pigs_in_space
July 8, 2021 5:49 am

Berra simply said that predictions are hard, especially about the future. People are getting paid big money for just about anything labeled as artificial intelligence, and are understandably reluctant to admit that no one can define or measure human intelligence, let alone determine a reliable way to detect it in any other non-human instance. Perhaps it is more reflective of something from Gell-Mann (or Mary Shelley) but it is hard to imagine that a group or an individual could create a complex system more capable or intelligent than any of its creators. Hubris, cultism, quackery, or magic are more likely to be at work here. Or, as you say, they are just wrong.

Jim Gorman
Reply to  dk_
July 8, 2021 10:34 am

The fact that they are using an AI is a direct admission that they do not have a handle on how temperature is physically determined. The models must be junk!

How would AI be either trained properly or given the real data needed to find real world relationships? Clouds for instance.

Rich Davis
Reply to  dk_
July 8, 2021 12:50 pm

Yes this is junk science, and propaganda.

But to your point about AI more intelligent than its creators…Intelligent isn’t exactly the right terminology. I think it is a matter of speed and rigor. We are all prone to taking mental shortcuts when a problem is complex. Our shortcuts usually degrade the accuracy of our results. Our lack of logical rigor leads to errors. We only do it because we are not fast enough to do the job right, and it often leads to better outcomes than just giving up on the problem.

Unless you actually program the computer to take the same kind of sub-optimal shortcuts for the sake of speed, it is going to be slavishly obedient to the algorithm and is going to apply the logic perfectly every time. Indeed, if you did program it to use a shortcut, the shortcut becomes the algorithm that it slavishly follows with perfect rigor, even if the logic is now flawed.

The computer is potentially much faster than the human who doesn’t take shortcuts and eventually arrives at the same right answer. We reasonably consider the person who answers first as being the smart one and the one who is slow to answer as the dullard even when they both arrive at the exact same answer.

Beyond just applying the algorithm that a human had to be smart enough to describe and program, the AI routine can in theory apply meta-algorithms or algorithms to modify algorithms. I think of this as a meta-algorithm called the scientific method that can adjust algorithms about a particular model of a process being studied. The scientific method is nothing that the human creator didn’t grasp, but it could be done much faster and with rigor. Plus the lifeblood of the scientific method is data, which the computer can process so much faster and remember so much better. It is conceivable that the model that could be developed through that process could prove to be more complex than the programmer can grasp mentally, and contain more data than the programmer could process in a lifetime.

So it’s not magic, but it is a powerful new tool and not just regurgitating what we programmed the computer to output like a basic word processor. We might also think of it as analogous to using hydraulic pressure. A human built the machine, but it can lift things that the machinist could never lift. We still understand the things that the machine does as human achievement.

dk_
Reply to  Rich Davis
July 8, 2021 1:46 pm

I don’t disagree with your points regarding computer processing and analysis, but do maintain that the label of Artificial Intelligence is mostly salesmanship or confidence gamesmanship. You can only be sure that it is correct after you can show and reproduce the work.
I think real AI will only be possible after we figure out what human intelligence really is — probably about 50 years past the 10 years that practical fusion always lies in the future. But that doesn’t mean that “AI” software techniques can’t be (or haven’t been) used to create an Orwellian surveillance state.
Regardless, they won’t be able to tell the future any better, and probably less clearly, than Nostradamus — which is my personal interpretation for software of Gell-Mann on complex adaptive systems.

Rich Davis
Reply to  dk_
July 8, 2021 9:13 pm

no argument from me on that. Mostly marketing and if-then loops foisted as AI on idiot investors.

Well one disagreement. AI will probably figure out right away that fusion power can never be commercially viable 🙂

mcswelll
Reply to  dk_
July 8, 2021 6:34 am

I assume you’re referring to Clarke’s Third Law, “Any sufficiently advanced technology is indistinguishable from magic”, correct? Meaning that AI is such advanced technology that you don’t understand it, and you’re in the position of a cargo cult leader: someone who saw airplanes and thinks they’re magic.

dk_
Reply to  mcswelll
July 8, 2021 7:37 pm

I was meaning that these scientists admitted that they don’t know what their own software is doing, but declared their fortune telling as having a scientific basis. Magic because it is the same as saying AI, these days: a misunderstood term used in performance art to label poorly understood technology to fool an audience, usually for money.
Clarke was the one who formulated the declaration you cite that was later referred to as his third law, but he wasn’t the first, nor the only one of his contemporaries who described the same thing in roughly the same way.
Cargo cultists are poorly documented, but accounts (thinking of Jared Diamond) tended to have them speak of cargo as a religious experience: an inexplicable, mostly good thing that just seemed to happen from time to time. At least it begins as observation of a phenomenon. Magic involves both misdirection (sometimes malicious) by the perpetrator, and gullibility and ignorance of the witness. Observation and measurement (iow fundamentals of science) aren’t required. Similar, but not the same.
Not sure that cargo cultists had or have leaders. Certainly Anthropogenic Global Warming death cultists do.

Jon Salmi
Reply to  dk_
July 8, 2021 12:19 pm

I get it now. AI is simply a modern, bloodless form of haruspicating.

dk_
Reply to  Jon Salmi
July 8, 2021 10:16 pm

Good one. Had to look it up, but yes. Reportedly used by both Alexander and Julius Caesar to determine their future.

July 7, 2021 10:58 pm

AI & Machine Learning – so overhyped. If you read “The Man Who Solved The Market” by Gregory Zuckerman, basically it took about 100 top mathematicians and programmers 10 years to put together a program that would make money on 51% of trades – and they still manually intervene sometimes. Great if you are a hedge fund, and good someone put their money on the line, but colour me skeptical on machine learning for now.

george1st:)
July 7, 2021 11:11 pm

Computers can spit out whatever the programmer tells them to do.
Programmers are not scientists, just machine language experts.
The complexity of the Earth’s weather system with past, present and future climate cycles is far beyond any computation, even with 1000 Einstein programmers.
So they do what they are paid to do: make us all feel guilty for the world warming as we come out of an ice age, because they can parallel the rise in CO2.
The western world is in a fast state of decline, not from climate change but from all the enforcers of those that think they can change climate change.

mcswelll
Reply to  george1st:)
July 8, 2021 6:39 am

“Computers can spit out whatever the programmer tells it to do”: False, that’s not how AI/ Machine Learning works. The programmer does NOT tell the computer what to do, only how to learn from data. What the program “spits out” after having learned from data is not anything the programmer put in.

“Programmers are not scientists, just machine language experts”: Not necessarily true. Some of us do both, and any complicated program is usually written by a team that includes both domain knowledge experts and programmers, and likely some people with skills in both camps.

John Endicott
Reply to  mcswelll
July 8, 2021 7:29 am

Not false, just not worded to your liking. The programmer tells the computer what “rules” to follow in that “learning”. GIGO still applies: when a rule is biased or even simply wrong, the resulting output will similarly be biased or wrong. What choices the “experts” who come up with the specs and the programmers who do the programming make can and do have a very big impact on the results. Design or program in a bias (either unintentionally or deliberately) and the output will reflect that bias. GIGO.

mcswelll
Reply to  John Endicott
July 8, 2021 6:45 pm

No, the programmer does NOT feed “rules” to a machine learning system; that’s rule-based AI, and was part of the AI bubble in the 1980s. (I know, I was there.) Most modern machine learning systems rely on very generic sorts of things; they’re called neural nets, although it’s questionable whether they resemble actual neural nets in live brains. In any case, they’ve been used on numerous test cases where real results are known; the real results are split into training and test data (and sometimes development data), and they can be validated against the test data.

Machine learning systems can be shown to generally work, although in some cases they produce odd results on specially manipulated data, or because the data was poorly chosen. In that sense, GIGO applies–but GIGO has nothing to do with bad rules, rather it has to do with garbage data. (Indeed, that’s what the original meaning of GIGO was, decades ago: Garbage In–meaning bad input data–means Garbage Out.)

An example of bad data is face recognition algorithms that do poorly on non-white faces, because the training data included mostly white people: bias.

Jim Gorman
Reply to  mcswelll
July 8, 2021 12:32 pm

It isn’t false. First take a look at the data used. For example, clouds: does the AI have enough cloud data at a high enough resolution to make a valid connection with other data? Who programmed the AI to handle insufficient data that doesn’t have the required resolution?

If you as a programmer don’t even know what the connections are and how they work how do you know the rules you should include? You are terribly naive about what programmers can know and do!

mcswelll
Reply to  Jim Gorman
July 8, 2021 6:53 pm

“You are terribly naive about what programmers can know and do!” I AM a programmer, and have done programming for fifty years (more in the last three decades than early on). On that basis, I suspect I’m a whole lot less naive about what programmers do than you are.

“how do you know the rules you should include?” Modern machine learning systems do not include rules; the rule-based systems back in the 1980s did, but those are long gone.

Ok, I know of one rule-based system that’s still used, at least the last I heard about five years ago. It is an English language parser, and is programmed to flag any sentences that don’t follow the rules of a Basic English grammar. It tests conformance of aircraft manuals against a standard, and so it has to be rule-based. I know because I worked on it back in the mid-1980s.

“look at the data used”: Agreed, if you want to find problems with a machine learning system, that’s almost always where the problem is.

Jim Gorman
Reply to  mcswelll
July 9, 2021 7:20 am

If you don’t know all the connections between clouds, humidity, temperature, etc., with sufficient data of high enough resolution, then someone had to program “rules” into the system or simply used bad data to obtain some output. In either case, the programmers had to know what was being done in order to write the program in such a manner that it could handle the input as intended.

If the programmers knew workarounds were being done, then they should have also known the output would be nothing more than a GIGO system.

Shanghai Dan
Reply to  mcswelll
July 8, 2021 12:47 pm

Telling you how to learn is, in essence, telling you what to do. If I teach you to only consider data with a positive slope, and discount data with a negative slope, I have told you what to do.

It is much easier to brainwash an “AI” than it is to brainwash a person, because the programmer 100% controls all inputs the “AI” has ever seen, and ever will see. That’s hard to do for 99.999% of all people.
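
A toy illustration of that kind of “teaching” (my own construction, assuming nothing about any real pipeline): curate pure noise so that only upward-sloping segments count, and the resulting average duly shows an upward trend.

```python
# Pure noise has no trend; a biased selection rule manufactures one anyway.
import numpy as np

rng = np.random.default_rng(2)
series = rng.normal(size=1000)       # white noise: the true trend is zero
window = 50

slopes = []
for start in range(0, len(series) - window, window):
    seg = series[start:start + window]
    slopes.append(np.polyfit(np.arange(window), seg, 1)[0])

honest = np.mean(slopes)                          # close to zero
curated = np.mean([s for s in slopes if s > 0])   # "discount negative slopes"
print(f"all segments: {honest:+.4f}, curated segments: {curated:+.4f}")
```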

niceguy
Reply to  george1st:)
July 8, 2021 3:20 pm

“just machine language experts”

Actually, no, they are not. Nearly all programmers suck with the languages.

oebele bruinsma
July 7, 2021 11:16 pm

Indeed Feynman would say wrong: “The intensification of extreme precipitation under anthropogenic forcing is robustly projected by global climate models, but highly challenging to detect in the observational record. “

Dave Fair
Reply to  oebele bruinsma
July 8, 2021 10:35 am

I remember when the word “gravitas” was thrown around with abandon when supporting Leftist politicians. Now I notice “robust” being used in every CliSciFi article. Both are unpinned from reality and are mere shibboleths for the ideologically inclined. “I say robust and you must believe what I say.”

Vincent Causey
July 7, 2021 11:49 pm

There is another interesting AI image recognition anecdote involving a network that classified a husky as a wolf, so they analysed the weightings to find out what went wrong. Whereas a human might point to certain features a husky possesses, such as the light patches around the eyes, the AI network had trained itself to treat a snowy background as the defining characteristic of a wolf. Unknown to the researchers – or known on a subconscious level only – the majority of photos of wolves were taken against snowy backgrounds.

This shows very real limitations with AI being used to replace humans. But even if the diagnosis of rainfall is correct, it does not answer the fundamental question: who is responsible for the temperature increase in the first place? My guess is, if they ever used AI models to try to find anthropogenic fingerprints in delta T itself, by looking at historical temperature records, they wouldn’t find it. However, success with the rainfall model might just spur them to try just that.

Komerade Cube
Reply to  Vincent Causey
July 8, 2021 12:44 am

There is a similar story about an army AI system to control anti-tank weapons. When field tested they discovered that they had built an AI system to detect rainy days. It turns out that the majority of pictures used to train the system to recognize enemy tanks were taken in bad weather while the pictures used to recognize our tanks were taken in sunny weather.

Clyde Spencer
Reply to  Komerade Cube
July 8, 2021 8:25 am

These anecdotes suggest that the AI was smarter than the people selecting what data to feed to the program.

mikebartnz
July 7, 2021 11:53 pm

GIGO

Reply to  mikebartnz
July 8, 2021 12:00 am

Precisely.

Eric Vieira
Reply to  Writing Observer
July 8, 2021 2:05 am

I would even add that for someone who completely trusts a self-driving car: if something happens, that will be natural selection, and the person will not get to win the “Darwin Prize”.

mcswelll
Reply to  Eric Vieira
July 8, 2021 6:41 am

Maybe at present, but as always, the most dangerous part of the automobile is the nut that holds the wheel. Those nuts will never get better, whereas the technology behind self driving cars will almost certainly improve.

John Endicott
Reply to  mcswelll
July 8, 2021 7:17 am

Improve, sure. When you are starting at such a low level of ability, improvement isn’t hard to achieve. Being fit for purpose, on the other hand, is a completely different kettle of fish. The technology has a long, long way to go to reach that (if it ever will).

Greg
Reply to  John Endicott
July 8, 2021 1:33 pm

Hey, be fair, we have a 200,000 year head start (depending on where you count the start of our development).

The average house fly does better in an unknown emergency than a Tesla AI autopilot.

Greg
Reply to  mcswelll
July 8, 2021 1:25 pm

“Those nuts will never get better, whereas the technology behind self driving cars will almost certainly improve.”

Sorry, that’s fallacious bullshit.

I’ve been in several life and death situations in 40 years of riding motorbikes, generally rather fast. That is where a few hundred thousand years of evolution kicks in. They say we only use 10% of our brains, and that is true when taking the subway. When you have a lethal hazard approaching at 120 mph, it is amazing what natural intelligence can do in 1/100th of a second.

That is when you see the other 90% kick in.

I definitely “got better” after a few miscalculations, so the claim the “nut” will never get better is crap; besides, I never mistook a jack-knifed semi for a flyover.

I will not bore you with my stories, but in times of emergency I have performed super-human feats of mental agility and force which have saved my life half a dozen times. I’m not an exceptional homo sapiens specimen.

I’ve yet to hear of ONE super human event produced by a Tesla auto-pilot. Usually it is a sub-human WTF event.

The average house fly does better in an emergency than the best AI.

Sorry but mcswell is talking some fundamental crap here.

Laws of Nature
Reply to  mikebartnz
July 8, 2021 5:40 am

Yes!
It does not matter whether you are a living climate scientist or his/her AI: if you are blinded by prejudice, and sun, clouds or ocean cycles are not allowed to have a significant contribution, you will not find anything but anthropogenic CO2 to be the sole cause of everything.

If you have a hammer…

Climate believer
July 8, 2021 12:26 am

From the Met office link:

“On a global average, we see that there has been an increase of an extra half-day (so one extra day every two years) where the rainfall was over 20mm/0.78inches.

The change in the length of the longest set of consecutive wet days is only around 0.25 days (so on average, the longest run is 1 day longer every 4th year).

But the increase in the total annual precipitation is over 50mm/2inches since the beginning of the 20th century.”

LOL! sorry but WTF… and your AI will tell us how much of this “catastrophic” change is due to us… yeah…totally legit.

Peta of Newark
Reply to  Climate believer
July 8, 2021 12:51 am

and probably all of that was one weather station they use in the Lake District – my old patch in Cumbria.
The station was already halfway up a mountain but, to make sure it caught more rain, they moved it closer to the top.
Do you laugh or cry?

Also, are they REALLY saying that 20mm per day is ‘extreme’?
Prozac would help – if it had any actual active ingredient.
The Placebo Effect lives!

Last but not least, what would their ever so clever AI computer make of Unreliable Energy – that the smouldering crate gets switched off at unexpected random intervals? What would it learn from that?
Even the Warmists want that to happen, let’s help them with it huh?
24/7 electricity is a Comfort Blanket – admit it – one that is now riddled with fleas and making things very uncomfortable.

Rhs
Reply to  Climate believer
July 8, 2021 6:06 am

Wait, they found leap year? An elementary student could have saved them millions of dollars!

John Endicott
Reply to  Climate believer
July 8, 2021 7:14 am

“The change in the length of the longest set of consecutive wet days is only around 0.25 days (so on average, the longest run is 1 day longer every 4th year).”

Um, yeah, that’s called Leap Year.

griff
July 8, 2021 1:11 am

Climate change: US-Canada heatwave ‘virtually impossible’ without warming – BBC News

Wonder if the machines spotted that?

Climate believer
Reply to  griff
July 8, 2021 2:40 am

From your link:

“They used 21 climate models to estimate how much climate change influenced the heat experienced in the area around the cities of Seattle, Portland and Vancouver.”

The machines created your alarmist headline, programmed by alarmist scientists:

World Weather Attribution (WWA) is an international effort to analyse and communicate the possible influence of climate change on extreme weather events.

Reply to  griff
July 8, 2021 6:54 am

You are so forgetful, or even ignorant; didn’t you read a day or two earlier that CC has zero to do with these weather patterns?

Sunsettommy
Reply to  griff
July 8, 2021 7:10 am

How come those very cities that had huge heatwaves have COOLED since 1990?

You and that terrible modelled fiction think global warming is causing a 30-year COOLING trend?

Here is a comment from this article that was posted right here in this blog a week ago:

Lee L,

From the article:
” For example, the U.S. National Climate Assessment found the warmest day of the year over the Northwest actually COOLED between a historic (1901-1960) and a contemporary period (1986-2016)”

As I’ve noted before, it is worth looking up the Pacific Northwest on Berkeley Earth where you can find the RATE of warming or otherwise.
See…
Whats New – Berkeley Earth

… where you will find the following produced from DATA ( not models):

Mean RATE of temperature change (degrees C/century, SINCE 1990):
Vancouver, BC: -1.62 +/- 0.60
Seattle: -1.27 +/- 0.67
Portland: -1.62 +/- 0.60

British Columbia: -0.04 (+/- 0.32)
Washington state: -0.54 (+/- 0.41)
Oregon state: -0.33 (+/- 0.29)

Now it has been over 30 years since 1990 which ought to be enough time to see some evidence of an ACCELERATING CLIMATE EMERGENCY developing in the region.

Note: those Minus signs are NOT typos.

LINK

MarkW
Reply to  griff
July 8, 2021 8:42 am

So the fact that similar heat waves occur every couple of years doesn’t matter, because according to the BBC (which is never wrong, on anything), this time it was caused by CO2.

griff, are you being paid to make a fool of yourself? If so, you are underpaid.

Shanghai Dan
Reply to  griff
July 8, 2021 1:15 pm

The science says “no”.

At least, that’s what Dr. Cliff Mass says, probably THE pre-eminent expert on PNW weather patterns:

https://cliffmass.blogspot.com/2021/07/was-global-warming-cause-of-great.html

As Feynman so eloquently summarized: when models and data collide, data wins.

tom0mason
July 8, 2021 1:26 am

Give me control of the AI box for the first 7 iterations and I will show you the Global Warming.
(or anything else you wish from it).

Clyde Spencer
Reply to  tom0mason
July 8, 2021 8:29 am

Only if the AI is Catholic.

Steve Richards
July 8, 2021 1:36 am

Unbelievable! Train a program to take inputs and deliver outputs you have defined. Shock! It delivers what you programmed it to.

gbaikie
July 8, 2021 1:47 am

We are in an interglacial period of an Ice Age. Past interglacial periods were much warmer than our Holocene; our interglacial has been cooler than others, and we are in the coolest part of the Holocene.
Our Arctic Ocean is not ice-free and our Sahara desert is not grasslands – global water vapor is low and 1/3 of our land is deserts.

Alan the Brit
Reply to  gbaikie
July 8, 2021 3:46 am

Been reminding folks of this spurious fact for years. Data I’ve read suggests the Holocene is cooler than several previous interglacials by between 2 & 4 degrees C!!!

Reply to  gbaikie
July 8, 2021 8:34 pm

WV has been accurately measured globally by NASA/RSS since Jan 1988 using satellite based instrumentation. It has been reported monthly as Total Precipitable Water (TPW). The measured WV increase has averaged about 1.49% per decade. The measured WV trend has been about 43% steeper than possible from temperature increase alone. 

Eric Vieira
July 8, 2021 1:59 am

A machine doesn’t learn. It is trained to react (according to the programmer’s wishes) to data input, via a “reward” or scoring system set up by the programmer. If the latter is biased, then the output is also biased…

mcswelll
Reply to  Eric Vieira
July 8, 2021 6:47 am

One could argue that humans don’t learn either.

In any case, the reward/ scoring system used to train machine learning systems is conformance of the output data to withheld data, i.e. you randomly split the data into training data, development data (often), and evaluation data. Train the system using the training data, then run the dev data through it to see how well it works; modify the model parameters and iterate again, until the results on the dev data stop improving. Then do a final test on the eval data.

Biases generally happen because the data is inherently biased, not because of the reward/ scoring system.
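
In rough outline, that workflow looks something like this sketch (synthetic data; the model settings are illustrative, not from any real study):

```python
# Bare-bones train/dev/eval workflow: tune on dev data, touch eval data once.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(3000, 5))
y = np.sin(3 * X[:, 0]) + 0.3 * X[:, 1] + rng.normal(scale=0.1, size=3000)

# Random split into training, development, and final evaluation sets.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4,
                                                    random_state=0)
X_dev, X_eval, y_dev, y_eval = train_test_split(X_rest, y_rest, test_size=0.5,
                                                random_state=0)

best_model, best_err = None, np.inf
for hidden in [(8,), (32,), (64, 64)]:        # candidate model settings
    model = MLPRegressor(hidden_layer_sizes=hidden, max_iter=2000,
                         random_state=0).fit(X_train, y_train)
    err = mean_squared_error(y_dev, model.predict(X_dev))
    if err < best_err:                        # keep whatever the dev set prefers
        best_model, best_err = model, err

# The evaluation set is used exactly once, as the final honest score.
print("final eval MSE:", mean_squared_error(y_eval, best_model.predict(X_eval)))
```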

John Endicott
Reply to  mcswelll
July 8, 2021 7:35 am

It can be both. The reward/scoring system is simply a set of rules. A bias in the rules will result in a bias in the output.

Jim Gorman
Reply to  mcswelll
July 8, 2021 12:47 pm

Have you ever heard of circular reasoning? “Modify the model parameters and iterate again”: do you think you might be training it to recognize your biases by “modifying”? You are so naive and have a lot to learn. Imagine doing this same thing on a public bridge design. You end up with an outcome you have unwittingly predetermined!

mcswelll
Reply to  Jim Gorman
July 8, 2021 7:00 pm

The naivety is all yours, thank you. The parameter modification is done by the program in a random way (look up “hill climbing”). (Parameter initialization is done randomly as well, which is why the model is usually run multiple times.) There’s far too much modification going on in the process of training a model for a human to modify the parameters. Instead, the system is left to run on its own until parameter modification over several iterations doesn’t produce a significant improvement, or until the dev set indicates that the model has been overtrained (meaning it’s a good fit to the training data but a bad fit to the withheld dev data).
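
For readers unfamiliar with the term, here is a minimal sketch of random-restart hill climbing on a toy loss surface (an illustration only; real training loops are far more elaborate):

```python
# Random tweaks are kept only when they improve the score; the whole search
# is restarted from several random initializations, as described above.
import numpy as np

rng = np.random.default_rng(4)

def loss(params):
    # A bumpy toy surface with its minimum near (1, -2).
    x, y = params
    return (x - 1) ** 2 + (y + 2) ** 2 + 0.5 * np.sin(5 * x) * np.sin(5 * y)

best_params, best_loss = None, np.inf
for restart in range(5):                    # random initialization, repeated
    params = rng.uniform(-5, 5, size=2)
    current = loss(params)
    stale = 0
    while stale < 200:                      # stop once tweaks stop helping
        candidate = params + rng.normal(scale=0.1, size=2)
        cand_loss = loss(candidate)
        if cand_loss < current:
            params, current, stale = candidate, cand_loss, 0
        else:
            stale += 1
    if current < best_loss:
        best_params, best_loss = params, current

print("best parameters found:", best_params, "loss:", round(best_loss, 4))
```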

Jim Gorman
Reply to  mcswelll
July 9, 2021 7:26 am

You keep ignoring the fact that all the data is suspect to begin with. You can randomize parameters all you want, but that doesn’t change the fact that the data you are working with is not fit for the task. Ask yourself why you need to randomize parameters to begin with. What boundaries are used? How do you KNOW the randomization you are doing is appropriate to the real, physical conditions in the real atmosphere. You don’t know this info. The uncertainties in the data we do have are so large not even an AI could sort them out into a reasonable pattern.

Basically, from what I have seen, you have nothing more than a correlation seeking algorithm, not a cause and effect program.

mcswelll
Reply to  Jim Gorman
July 9, 2021 5:58 pm

I disagree: I have not ignored the data; the data is likely *exactly* where the problems are – as I said in my first post in this thread: “Biases generally happen because the data is inherently biased, not because of the reward/ scoring system.”

TimTheToolMan
July 8, 2021 2:27 am

A neural network based AI with a good training data set can be expected to interpolate quite well. At least that’s the experience with other data sets…

What it won’t do well is extrapolate outside of what it’s seen… like, say, an atmosphere warmer than it’s been trained with, or water vapor higher than it’s been trained with, or pretty much any of the warmer-world scenarios we might expect.
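
That point is easy to demonstrate on a toy problem (made-up data, not a climate model): a small net fits well inside its training range and falls apart outside it.

```python
# Interpolation vs extrapolation: train on x in [0, 1], then ask beyond it.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
X_train = rng.uniform(0, 1, size=(2000, 1))         # training range: [0, 1]
y_train = np.sin(2 * np.pi * X_train[:, 0])

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                   random_state=0).fit(X_train, y_train)

for x in [0.25, 0.75, 1.5, 2.0]:                    # inside, then outside
    pred = net.predict([[x]])[0]
    true = np.sin(2 * np.pi * x)
    tag = "interpolation" if 0 <= x <= 1 else "EXTRAPOLATION"
    print(f"x={x:4.2f} ({tag}) prediction={pred:+.2f} truth={true:+.2f}")
```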

Carlo, Monte
Reply to  TimTheToolMan
July 8, 2021 5:59 am

Extrapolation is Tool Number One in the climastrology toolbox.

Clyde Spencer
Reply to  TimTheToolMan
July 8, 2021 8:32 am

Just like the idea that there are no black swans, because none had ever been seen. It is the surprises in life that make things interesting.

lee
July 8, 2021 2:34 am

“we can aid this learning by imposing climate physics in the algorithm”

Wow. Who needs climate physics when it does so well?

MarkW
Reply to  lee
July 8, 2021 8:44 am

Would this be the same “climate physics” that is so well understood that we don’t need climate scientists anymore?

Rusty
July 8, 2021 3:13 am

“We are already observing a 1.2C warming compared to pre-industrial levels,” pointed out Dr Sihan Li, a senior research associate at the University of Oxford.

Stopped reading there for obvious reasons. Amazing how such bright people aren’t able to understand the Little Ice Age.

Trying to Play Nice
Reply to  Rusty
July 8, 2021 5:57 am

Not only missing the Little Ice Age, but precision to 1/10th of a degree Celsius when there were no thermometers in most places in pre-industrial times (or now for that matter).

John Endicott
Reply to  Trying to Play Nice
July 8, 2021 7:10 am

And when there were, those thermometer readings often didn’t have that level of precision.

Dave Fair
Reply to  Rusty
July 8, 2021 10:55 am

If you are paid to not see past climatic variations (the UN IPCC CliSciFi charter is to focus on anthro), you will not see past warming (and cooling) events like the Holocene Climate Optimum, 8 k cooling, Minoan warming, Roman warming, Dark Ages cooling, Medieval warming, LIA cooling and Modern warming.

July 8, 2021 3:28 am

This looks like a meta-analysis, somehow combining non-statistically-significant trials to claim results that reflect something other than random variation. Automating this process with AI only makes it more obscure.

philip
July 8, 2021 4:09 am

WOW! Completely from out of left field, a computer fed highly biased data confirms the self-fulfilling prophecy of the climate alarmist. I’m shook to my rational heels. 😏

bluecat57
July 8, 2021 4:18 am

I can detect BS. What does that make me?

MarkW
Reply to  bluecat57
July 8, 2021 8:45 am

Smarter than your average climate scientist.

Carlo, Monte
July 8, 2021 5:54 am

“The intensification of extreme precipitation under anthropogenic forcing is robustly projected by global climate models”

How in the world can this be a true statement? The grid meshes are not small enough, TMK.

Clyde Spencer
Reply to  Carlo, Monte
July 8, 2021 8:37 am

It is my understanding that as poor as GCMs are at predicting warming, they are even worse at predicting precipitation, with some models giving completely different results from other models.

The major problem with GCMs is validating the predictions. How do these AI predictions differ?

MarkW
Reply to  Carlo, Monte
July 8, 2021 8:47 am

The claim that extreme precipitation is robustly projected by climate models, is refuted by real world data. Which is why climate scientists no longer permit real world data to be used in their models. They now validate their models by comparing them to other models.

Greg
July 8, 2021 6:03 am

“My point is, when scientists unleash a black box AI on a set of data, they have no way of knowing whether the output of that AI is what they think it is, until they painstakingly rip the AI apart to work out exactly how it formed its conclusions.”

Isn’t the whole point of AI that this is unknowable? It’s not just a case of “taking it apart” to see how it works. The process is chaotic and cannot be picked apart in a deterministic way.

The Amazon job filter explanation is pure hypothesis. They saw a bias in results they did not find politically convenient, so they created an “explanation” of what was going wrong and hard-wired the software not to make the same mistake again!

The Met Office will similarly program the “basic physics” into the AI and it will dutifully “discover” that it’s all due to CO2. GIGO.

This is the same process they do by tuning the climate model to produce no increase when GHG are held constant, then claiming they can “prove” AGW by adding GHG back in. They artfully avoid recognising that that result is a direct consequence of the initial training and tweaking; it is NOT model output. It is model input!! GIGO.

mcswelll
Reply to  Greg
July 8, 2021 6:49 am

Re “unknowable”, not necessarily: see my post above about explainable AI.

Greg
July 8, 2021 6:11 am

“Somehow we are persuading our women not to pursue technical careers.”

Perhaps you need to consider that women actually prefer other occupations. Women are more inclined to be interested in people and personal interaction, men are more interested in things. That can be hypothesised to be biologically hard-wired via evolution.

“When I visited a software development shop in Taipei, there were just as many women as men developing software.”

Maybe they are persuading women to enter careers they do not actually prefer to do because their economy is tech based. The speculative argument works both ways.

Flash Chemtrail
July 8, 2021 6:32 am

“Somehow we are persuading our women not to pursue technical careers.”

This is the second load of shit that I have dealt with this morning.

Clyde Spencer
Reply to  Flash Chemtrail
July 8, 2021 8:40 am

It is an assumption that has not been validated.

MarkW
Reply to  Flash Chemtrail
July 8, 2021 8:53 am

Most schools that I am familiar with have programs where they practically beg young ladies to go into STEM fields.

Like most liberals, these guys look at the outcome, then assume that something nefarious must be going on if the outcome isn’t the one they want.

Racial balance isn’t what you want, must be the result of racists being in control.
Gender balance isn’t what you want, must be the result of sexists being in control.
People voting for a candidate you hate, proves that the people are being lied to by the media.
And for them, the solution to these problems is always the same.
They need to be in charge so that they can force people to behave in a manner that the liberals approve of.

Recently a CNN political analyst declared that the government needs to start mandating vaccinations. According to him, politicians can’t be so fixated on individual rights.

Climate believer
Reply to  MarkW
July 8, 2021 11:43 am

“Racial balance isn’t what you want, must be the result of racists being in control.”

Off topic, but a video I watched recently on this subject might interest someone, even give the guy a like for just talking sense:

July 8, 2021 6:42 am

“It is important to note this male bias in technical jobs is purely a Western cultural issue. When I visited a software development shop in Taipei, there were just as many women as men developing software. The women I have met, in Western IT shops and in that IT shop in Taipei, were just as smart and technically capable as any man. Somehow we are persuading our women not to pursue technical careers”

Kinda wandered into left field there.

Pretty sure that this is a logical fallacy: “I went someplace and saw something; therefore I know a fact that is generalizable to the entire world. I can now use that fact to criticize a similar situation in another environment.”

In fact there are even more logical fallacies packed into that paragraph, in nearly every sentence.

The critique of the machine learning man-made global warming nonsense “study” would be more powerful without gratuitous PC logical fallacies.

July 8, 2021 6:51 am

Who will be the machines’ teacher?
If the usual suspects do the job, the machines won’t learn anything. 😀

Scott
July 8, 2021 7:26 am

AI = barely adequate imitation

Curious George
Reply to  Scott
July 8, 2021 7:50 am

But the only way to detect the anthropogenic climate change.

Olen
July 8, 2021 7:43 am

Maybe it is called artificial for a reason. It cannot work outside the box, or think.

Gordon A. Dressler
July 8, 2021 7:46 am

From the above-quoted abstract of the study (it does not merit the title “scientific research”) by Madakumbura et al.:
“Using machine learning methods that can account for these uncertainties and capable of identifying the time evolution of the spatial patterns, we find a physically interpretable anthropogenic signal that is detectable in all global observational datasets. Machine learning efficiently generates multiple lines of evidence supporting detection of an anthropogenic signal in global extreme precipitation.”

Hmmm . . . it is certain that in the past Earth has never entered a glacial period, let alone a true Ice Age, without the associated lower atmospheric temperatures causing a majority of the water vapor then in Earth’s atmosphere to globally precipitate out as rainfall or snow . . . the predominant mechanism for the formation of glaciers and ice sheets.

Just consult a psychrometric chart or table of absolute humidity versus air temperature, based on a 10-12 °C decrease from today’s GLAT of about 12 °C . . . alternatively, just consider how little water vapor air can hold at the temperature of water’s freezing point of 0 °C.
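
As a rough check on that point, here is a short sketch using the Magnus approximation for saturation vapour pressure (a standard textbook formula; the chosen temperatures are merely illustrative):

```python
# How much water vapour can air hold before it must precipitate out?
import math

def saturation_vapor_pressure_hpa(t_celsius):
    # Magnus approximation; accurate to a fraction of a percent over -40..+50 C.
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

for t in [30, 12, 0, -10]:
    e_s = saturation_vapor_pressure_hpa(t)
    print(f"{t:+3d} C: saturation vapour pressure ~ {e_s:5.1f} hPa")

# Air at 0 C saturates at roughly a seventh of the vapour pressure of air at
# 30 C, so a cooling atmosphere wrings its moisture out as rain and snow.
```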

That being the case, it would be a wonderful idea to test this new, assumed-to-be-objective AI (aka “machine learning”) “tool” against the conditions leading to the last glacial interval on Earth. I’m betting 10:1 that, if left unmodified, the current AI “tool” discussed in the above article would also “efficiently generate multiple lines of evidence supporting detection of an anthropogenic signal in global extreme precipitation”, starting about 120,000 years ago.

Greg
Reply to  Gordon A. Dressler
July 8, 2021 1:08 pm

“Using machine learning methods that can account for these uncertainties and capable of identifying the time evolution of the spatial patterns, we find a physically interpretable anthropogenic signal that is detectable in all global observational datasets.”

So we used AI to find “the time evolution” (aka a trend) in precipitation, and then WE attributed that trend to AGW.

Red scarf trick, climate “science” 101.

Andy Pattullo
July 8, 2021 7:50 am

“Machine learning efficiently generates multiple lines of evidence supporting detection of an anthropogenic signal in global extreme precipitation”.

If it says what I believe, it therefore must be right.
I believe, therefore I know.
What could be simpler, and completely idiotic at the same time.
Lysenko rises from the grave to save us from success.

TonyG
July 8, 2021 7:56 am

Models are real. Reality is fake.

Sara
July 8, 2021 8:04 am

The Black Box cannot think, create, or imagine. It can only calculate based on mathematical input. Depending on a brilliantly stupid machine system for answers is like asking HAL 9000 to open the pod bay doors.

I suggest that all these “brilliant” science persons go back to using slide rules, blackboards, and pencils with erasers, and not be allowed to touch or go near a computer of any kind – not even an IBM with A/B drives that uses floppies – until they learn to use their brains.

We are not doomed. They are. Dependence on AI squelches imagination and creativity.

Sad.

Clyde Spencer
July 8, 2021 8:17 am

When one asks a question of Deep Thought, they need to be very careful how they word it. What one gets as an answer may appear nonsensical, which it may well be. GIGO!

Russ Wood
July 8, 2021 8:19 am

Nothing can go wrong (click) Nothing can go wrong (click) Nothing can go wrong (click) ….

MarkW
July 8, 2021 8:36 am

By carefully choosing the data the AI reviews, AI’s can be trained to find anything.
The fact that it can’t actually explain why it found something, in climate science, is considered an advantage.

Dave Fair
Reply to  MarkW
July 8, 2021 11:08 am

Funny, that. What is it that the AI uses to determine that whatever data it is fed is caused by AGW? Any variation of a given climatic metric has many possible causes, and one must input the assumption that they are caused by Man. Otherwise, how would the AI determine it is AGW? It appears they use GCMs as input because, as they assert, GCM results are “robust” and observations don’t support the AI conclusions.

July 8, 2021 10:22 am

“Somehow we are persuading our women not to pursue technical careers”

Well, we’ve been telling them for decades that they won’t be paid as much as men in those fields, will be discriminated against in school and for job openings, and won’t receive promotions. All of these things are likely false (the “wage gap” has been debunked so many times it’s amazing to me that the myth persists), but when women have been force-fed these beliefs for so long, is it any wonder that many of them decide “why bother” if they think they’re going to be constantly undermined if they enter a science or engineering field anyway?

Makes much more sense to coast through college with a degree that ends in “studies” where they’ll be welcomed with open arms right?

ResourceGuy
July 8, 2021 10:25 am

So CAGW comes down to the realization it is a new form of ransomware with everyone as victims.

Smart Rock
July 8, 2021 10:57 am

Look at the difference between the title and the first sentence of the abstract. This is from a “scientific” paper, not the usual semi-literate stuff we see from journalists or press-release writers.

“Anthropogenic influence on extreme precipitation over global land areas seen in multiple observational datasets”

“The intensification of extreme precipitation under anthropogenic forcing is robustly projected by global climate models, but highly challenging to detect in the observational record”

Although my mental processes are still a bit foggy, thanks to the covid-19 I had in April last year (so, my apologies if I misread it), the authors appear to me to be juxtaposing two diametrically opposed statements. In other words, the title of the paper is making a claim that the authors themselves cannot substantiate.

Good grief. This looks to be a new low point for climate science (and that is saying something!). It’s doublethink in action, live in 2021!!

============

Eric makes very valid comments about how they should unpack their AI program to see what it’s actually doing. I offer this little anecdote from the AI community:

I was asked to look at some mineral exploration targets in an area of the Canadian Shield, that had been generated by an AI company. I had previously looked at the area using only my general knowledge of geology and mineral exploration based on 40-odd years of practical experience, and I was not impressed. I thought (actually, I knew with 97% certainty) that the AI had relied too much on a single parameter (which was IMHO the one parameter they should NOT have used). I asked about what databases the AI used and how did it select targets. I was told (this is me paraphrasing it) that they didn’t know how the AI made its selection, and that they weren’t supposed to know, because that was the point of AI.

Clyde Spencer
Reply to  Smart Rock
July 8, 2021 6:57 pm

“Ignorance is knowledge”

Robert of Texas
July 8, 2021 11:29 am

Garbage in-Garbage out… This does not take a genius to figure out. They are feeding in manipulated data and the AI is simply performing pattern recognition of what they are doing to the data.

Randy Stubbings
July 8, 2021 12:11 pm

From the article’s abstract: “Previous attempts to detect human influence on extreme precipitation have not incorporated model uncertainty, and have been limited to specific regions and observational datasets.” What are the odds that a model trained to detect human influence on extreme precipitation will detect human influence on extreme precipitation?

SC_
July 8, 2021 1:16 pm

Watch this clip about AI and see if these academics have more work to do.



H.R.
July 8, 2021 1:54 pm

“I have detected Anthropogenic Climate Change.

I’m sorry, Dave, but I can’t let you burn any more fossil fuel.”

Is this program named HAL, by any chance?

Michael S. Kelly
July 9, 2021 6:15 pm

I believe this article https://www.antipope.org/charlie/blog-static/2021/03/lying-to-the-ghost-in-the-mach.html has bearing on the subject. Evidently, it’s easy to lie to AI enough that its subsequent output is worthless.
