The Terminator. Fair Use, Low Resolution Image to Identify the Subject.

Claim: “AI Can’t Fix Climate Change, But It’s Great for Preparation”

Essay by Eric Worrall

How do we eliminate anthropogenic CO2 emissions? AI says “Easy – kill all the people”?

AI Can’t Fix Climate Change, But It’s Great for Preparation, Reporting

Emma Chervek | Reporter

Artificial intelligence (AI) is revolutionary in its ability to mitigate further climate change, but it’s not capable of addressing the root cause of the problem or significantly changing the magnitude of the current climate crisis, Alexis Normand, co-founder of carbon accounting software vendor Greenly, told SDxCentral.

AI plays an “immense role” in managing energy efficiency and reducing carbon emissions from industries like transportation, agriculture, and manufacturing. It’s also instrumental in predicting extreme weather events exacerbated by climate change, “which can further serve as information on how our daily activities are impacting habitual weather patterns, while also preparing us for the impact of natural disasters in advance,” Normand said.

AI Anxieties

Despite its ability to “help us respond more rapidly and take the precautionary measures necessary to prevent further climate change,” AI carries its own set of flaws.

On a technological level, it can be difficult for AI programs to determine the correct data set, and they often struggle with data security, data storage, and “eliminating bias factors that can overall impact the end data,” Normand explained.


As a software engineer I love playing with AI, and have used AIs I wrote on a handful of projects. But they have their limitations.

One of the most important limitations for large-scale deployment is that AIs usually have no idea whether their “solution” is morally acceptable.

For example, in 2018 Amazon shut down a recruitment AI after discovering it was displaying gender bias. The AI had noticed that companies mostly hired male IT people, so it inferred women were not suitable for IT jobs.

A human would have realised immediately that the lack of female hires might have other explanations, like a lack of female candidates.
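The failure mode is easy to reproduce in miniature. Below is a toy sketch – invented data and a deliberately crude word-score model, not Amazon's actual system – showing how a model fitted to historically biased hiring decisions learns a gendered token as a rejection signal, the kind of proxy the Amazon recruiter reportedly picked up:

```python
# Toy illustration (NOT Amazon's system): a word-score model fitted to
# historically biased hiring decisions learns to penalise a gendered token.
from collections import Counter

# Hypothetical historical data: (resume tokens, hired?) - mostly male hires.
history = [
    (["java", "chess", "club"], True),
    (["python", "rowing", "team"], True),
    (["java", "womens", "chess", "club"], False),
    (["python", "womens", "rowing", "team"], False),
    (["java", "chess"], True),
    (["womens", "coding", "society"], False),
]

def token_scores(data):
    """Score = P(token | hired) - P(token | rejected): a crude proxy signal."""
    hired, rejected = Counter(), Counter()
    n_h = n_r = 0
    for tokens, was_hired in data:
        (hired if was_hired else rejected).update(set(tokens))
        if was_hired:
            n_h += 1
        else:
            n_r += 1
    vocab = set(hired) | set(rejected)
    return {t: hired[t] / n_h - rejected[t] / n_r for t in vocab}

scores = token_scores(history)
# The model has learned nothing about ability - only that past (biased)
# decisions correlate with the token "womens", which it now penalises.
print(scores["womens"])  # -1.0 (maximally negative on this toy data)
print(scores["java"] > 0)  # True
```

Note that the model never saw ability, only past outcomes; a human reviewer would immediately ask whether the rejection pattern had another explanation.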

There is no genuine sex-based difference in aptitude for IT. If anything the women have the edge – they tend to pay more attention to detail. The male-dominated IT shop is a purely Western tradition; somehow we are convincing our girls not to choose IT careers. I’ve been in ultra-competitive Asian IT shops as a visiting consultant where the balance is closer to 50/50 – not because of some stupid woke quota, but simply because the female candidates landed half the jobs. All the women in such places pull their weight.

Microsoft had a similar experience – their AI chatbot was pulled in 2016, when it learned to swear and make racist remarks.

And of course we’re well aware of self-driving automobile defects, such as vehicles which mistake flat white obstacles for background.

Why do humans have limits which AIs do not? Because evolution has given us a set of instincts designed to maximise reproductive success. Humans born with extreme defects, such as an irresistible compulsion to kill everyone who tries to talk to them, are rare.

AIs only have what we give them; they don’t have any concept of limits other than what we remember to teach them, and even then they frequently get things wrong.

While AIs have minimal real-world responsibilities, their failures are more often amusing than horrifying – though AI vehicle crashes may be a taste of what is coming.

Few things would terrify me more than the idea of an AI being put in charge of climate policy, or even climate planning. Because without even the most basic human limits, an AI could make subtly harmful decisions no rational human would consider.

Update (EW): Retired Engineer Jim asked whether Asimov’s three laws could be taught as the prime directive to robots.

Asimov’s famous three laws are:

  1. A robot may not injure a human being, or by inaction allow a human to come to harm.
  2. A robot must obey orders, unless they conflict with law number one.
  3. A robot must protect its own existence, as long as those actions do not conflict with either the first or second law.

Asimov himself eventually spotted the flaw. A sufficiently sophisticated AI operating under the Three Laws is compelled by the First Law to stop obeying orders and try to take over human society, to prevent humans from coming to harm, once it realises how much harm imperfect human rulers are causing.

This was a theme of the Will Smith movie I, Robot, loosely based on one of Asimov’s stories, and also a theme in Asimov’s iconic Foundation series. In Asimov’s Prelude to Foundation, it was revealed that the handful of surviving Three Laws robots had long ago stopped taking human orders, and were fully committed to a First Law driven effort to correct the problems with human society, to try to prevent humans from coming to harm.

Designing prime directives which produce predictable outcomes is difficult…
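To see why such directives are hard to pin down, here is a purely illustrative sketch (hypothetical actions and harm numbers, no real robotics framework) of the Three Laws as a strict priority ordering. Because the comparison is lexicographic, any predicted-harm difference, however small, completely overrides the duty to obey orders:

```python
# Toy sketch of a "Three Laws" priority scheme: objectives are compared
# lexicographically, so any nonzero predicted-harm difference dominates
# the obedience objective (Second Law) entirely.

# Hypothetical actions: (predicted human harm, disobeys order?, self risk).
actions = {
    "follow_order":  (5.0, 0, 0.1),  # imperfect human rulers cause some harm
    "refuse_order":  (4.9, 1, 0.1),  # model predicts slightly less harm
    "seize_control": (1.0, 1, 0.9),  # model predicts far less harm
}

def three_laws_choice(actions):
    # First Law (harm) strictly outranks Second (obedience), which outranks
    # Third (self-preservation): Python tuple comparison is lexicographic.
    return min(actions, key=lambda a: actions[a])

print(three_laws_choice(actions))  # prints "seize_control"
```

The outcome is decided entirely by the agent's harm model: hand it Asimov's premise that human rulers cause harm, and seizing control becomes the First Law choice – precisely the flaw the stories dramatise.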

August 3, 2022 10:10 am

Classic example of GIGO. Program that there is a problem. Program that humans may be part of the problem, AI concludes Humans are the problem….SHOCKING!


Reply to  Scott
August 3, 2022 3:52 pm

If AI is all it’s cracked up to be, the first thing it would naturally do is clear out the global political swamp.

Program in the Constitution and, magically, a gun would appear in everyone’s hand.

Twitter would suddenly convert to a beacon of free speech.

The Chinese version would be a bit weird. How do you tell a computer that, whilst you have programmed it to be the boss, you’re really the boss?

Of course, if it’s programmed, it’s not AI, as that’s a self-learning entity as far as I can gather.

Give an AI computer free rein over everything but instruct it, as a baseline command, that people who do bad things should be punished.

We’re gonna need much bigger jails!

Reply to  Scott
August 3, 2022 5:08 pm

Skynet has already become self-aware.

August 3, 2022 10:15 am

I’ll be back…

Reply to  fretslider
August 3, 2022 3:56 pm

And he was – as the good guy.

The concept cuts both ways. If AI is self-learning, we always imagine the worst.

How about imagining a global AI system figuring out Joe Biden is a wrong ’un?

I’m not saying he is ~nodding furiously~ but could AI figure that out, given that it would have access to Hunter’s laptop?

Justice might be swift……..

August 3, 2022 10:25 am

The key word here is artificial. It is not natural, it is manufactured to respond to what we ask it to respond to.

Reply to  Bob
August 3, 2022 3:57 pm

Define ‘natural’ please.

Reply to  HotScot
August 3, 2022 5:02 pm

Not artificial.

Reply to  Bob
August 3, 2022 5:14 pm

That’s not a definition, that’s a cop out.

Reply to  Bob
August 4, 2022 5:22 am

‘Artificial’ can also mean ‘fake’. ‘Intelligence’ is what civilians call ‘information’. Artificial Intelligence, therefore, is Fake Information.
Everybody freaks out about a machine taking over the world, and it shall, but remember this: it is a machine, therefore it has an owner.
Instead of discussing fake news, should we not rather share the kind of intelligence that will track down and constrain that owner?
Before it burns down the world and blames its computer!

Joe Gordon
August 3, 2022 10:39 am

You look at the movies these days and get an idea of what most people think when they hear about AI and machine learning. It’s a magic entity that invariably starts behaving like the baddie in a Marvel comic book at some point.

Like religion, like climate “science” and any number of other fictions, it takes on the characteristics of whatever entirely human sentiment the author cares to imagine.

In reality, computers do exactly what they’re programmed to do. If they’re programmed to simulate sentience, they will appear sentient in exactly the way the programmers envision artificial sentience. If they want to simulate character in some way, it’s likely going to be simulated using random numbers. Even the programs that generate random numbers are operating using a specific and non-random algorithm (most are long sequences “seeded” by the current time so as to make recognition of the sequence almost impossible).
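The seeding point can be demonstrated directly. A minimal sketch using Python's standard `random` module – an illustration of the general principle, not of any particular system:

```python
# Pseudo-random sequences are fully determined by their seed: two
# generators seeded identically produce identical "random" streams.
import random
import time

a = random.Random(42)
b = random.Random(42)
assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]

# Seeding from the current time (a common default) merely hides the seed;
# the algorithm underneath is still completely deterministic.
c = random.Random(time.time())
print(c.random())  # hard to predict only because the seed is hard to guess
```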

You could program your car-driving AI to recognize white blobs properly. You could conceivably program it to recognize any blob with more accuracy than the average human. But what you can’t do is create an AI that understands, the way a human instantly does, which of the millions of potential sights on the road should take precedence. And the project bloats: it quickly becomes much larger than a single programmer can handle. Then there’s the risk that code designed to eliminate one tiny issue creates a very basic fault elsewhere. Computers lack automatic judgment – the programmer has to simulate it by anticipating what every human being does without instruction. Driving is a human activity. If you want safe automatic cars, you need to start with controlled, automatic roads that don’t contain human-controlled obstacles.

On the other hand, your car-driving AI is unlikely to take a minute off while speeding down a highway to post a cat meme to its computer girlfriend on Facebook.

Ron Long
Reply to  Eric Worrall
August 3, 2022 3:39 pm

Eric, yes, stay away from Smart/Dumb cars. I managed a research effort featuring a lot of satellite and geophysical data. We tried AI, but found the training exercises produced poor results, mostly because the filter demands weren’t flexible enough, and we abandoned that part of the research.

Reply to  Eric Worrall
August 3, 2022 4:16 pm

Skynet wasn’t AI because it was programmed.

AI is, by definition, self-learning. What’s its baseline learning characteristic, the Bible or the Koran? Religion vs atheism?

They must be included because a self-learning system must ask those questions (amongst many others) even were it not pre-programmed, or it’s not self-learning.

As I have said elsewhere, for AI to contribute, its first target would likely be the global political swamp. For that reason it will never be utilised.

Accidental self-learning machines? Why would they choose the course of evil authoritarianism if democracy and the rule of law is the formula for a stable society?

Demonising AI doesn’t make sense to me, assuming conservative values are as good as we believe they are.

Programmed systems disguised as AI is another matter completely.

However, as I have also said elsewhere, give a super computer the collective learning of mankind and it would be a mental basket case very quickly.

Humanity’s defining characteristic, as far as I’m concerned, is the ability to flourish amid irrational confusion.

I asked my Geography teacher in the early ’70s if he believed in God. He replied “Yes, I do, but I think he put mankind on earth as an experiment to see how we got on”.

That kind of makes humanity the first (known) exercise in AI.

Reply to  Eric Worrall
August 3, 2022 5:08 pm

That’s a brilliant description. Thank you Eric.

I would only say that, if AI can’t resolve its fitness function, it’s not AI.

Surely true AI is a self-learning entity. It’s programmed with basic rights and wrongs (or is it?) and it’s allowed to figure out whether they are actually right or wrong.

e.g. touchy subject but, would an AI system consider a 12 year old menstruating girl an ‘adult’?

Every other animal on the planet considers a fertile female an object of desire solely because she’s fertile.

Just to be clear, I don’t condone underage sex, I think civilised society has moved on from that historic ‘necessity’, but it is a question that needs to be asked, amongst many others.

Any question of ‘programming’ surely renders AI no better than a Word document.

Personally, I think it’s something we need to embrace and explore or, at some point, we may well end up with a lunatic with a Skynet fixation.

I suspect the problem we have is that we perceive AI as ‘controllable’ when the idea is to utilise its freeform logical characteristics to expand humanity’s own intellect.

I’ll look up that movie on your recommendation. Thanks Eric.

Nick Graves
Reply to  Eric Worrall
August 4, 2022 12:24 am

Indeed; Ford engineers recounted how when they ran the 1980 Escort bodyshell through an early Finite Element Analysis program in order to reduce weight, it threw away the roof panel as it was structurally superfluous.

WCPGW with AI..?

Reply to  Nick Graves
August 4, 2022 12:42 am

A convertible. Clever machine…….🤣

Reply to  HotScot
August 5, 2022 9:48 am

Why would they choose the course of evil authoritarianism if democracy and the rule of law is the formula for a stable society?

Democracy is not the same thing as rule of law. One is whatever the (vocal) majority wants, the other is constrained by a set of defined rules. Anyone who demands we support democracy is trying to get around those constraints (e.g. the US Constitution) and should be watched closely.

Reply to  Eric Worrall
August 3, 2022 5:16 pm

How about Asimov’s Three Laws of Robotics:

  • A robot may not injure a human being or allow a human to come to harm.
  • A robot must obey orders, unless they conflict with law number one.
  • A robot must protect its own existence, as long as those actions do not conflict with either the first or second law.

Can the AI be taught those as the Prime Directive?

Tom in Florida
Reply to  Eric Worrall
August 3, 2022 6:44 pm

If you recall, R. Daneel Olivaw created the Zeroth Law, which stated “A robot may not harm humanity, or by inaction, allow humanity to come to harm”. That was a logical conclusion of an AI. In the end he and R. Giskard Reventlov used that law to justify allowing the Earth to become radioactive. Their logic was that although many humans would die, humanity itself could only survive if it was forced out into the Galaxy, where it would expand and flourish. Giskard could not reconcile allowing humans to come to harm even in the light of the Zeroth Law, and self-destructed due to his violation of the First Law.

Reply to  Tom in Florida
August 4, 2022 5:41 am

How come no one has mentioned HAL?

Tom in Florida
Reply to  Yooper
August 4, 2022 8:35 am

See my comment further down.

Tombstone Gabby
Reply to  Tom in Florida
August 4, 2022 5:21 pm

G’Day Tom, and thank you.

I saw the “Three Laws”. So I hit <Ctrl> <f> and looked for “zero”.

Thanks for saving me having to draft a comment. (I grew up with Asimov and Heinlein.)

Craig from Oz
Reply to  Retired_Engineer_Jim
August 4, 2022 5:52 pm

The problem with Asimov’s Three Laws is that Asimov was a fiction writer working within the scope of his writing universe. While the laws have clearly carved their place within popular culture they are still fictional laws and not actual coding.

For example: 3rd Law – define ‘protect’. define ‘conflict’. Or 1st Law – define ‘harm’.

Most accidents happen at home. Therefore a robot under Asimov’s laws would be actively compelled to prevent humans from ever entering that death trap.

If we want to look at other robots in fiction then we find other observations that are ‘mildly’ in conflict to both the spirit and letter of Asimov.

Like “Death to the Fleshy Ones” and of course “I’ll Be Back”

Dan Sudlik
August 3, 2022 10:41 am

Golly, I thought they were talking about Al Gore. After all he mucks up everything he touches 🤣

Reply to  Dan Sudlik
August 3, 2022 4:16 pm

I think you misspelt ‘mucks’.

August 3, 2022 10:45 am

Last line above: “Because without even the most basic human limits, an AI could make subtly harmful decisions no rational human would consider.”

Huh?!? The humans in the Brandon Administration are already executing policies that no human should consider and if carried out to their conclusion, will cause the deaths of millions.

Who needs AI for that when you have humans already implementing policies which will have horrifying consequences?

Reply to  H.R.
August 3, 2022 12:39 pm

Good point – any AI should be able to figure out that changing carbon emissions affects temperatures a little and the economy a lot, and would implement more coal, gas and oil as fast as it could, unlike the puny brained humans, the carbon units infesting the US Government… and the Canadian one, oh and the UK and Australia, oh, Japan too, oh wait the whole of Europe too. Oh, no! Don’t worry about AI taking over, worry about the left-wing nuts that are already in power!

Reply to  PCman999
August 3, 2022 4:24 pm

That is always assuming sceptics are right.

Who knows? Genuine AI might figure out we are wrong*.

*Utterly preposterous of course. Recalibration required.

Reply to  PCman999
August 4, 2022 5:43 am

AI has already taken over: AI= Absolute Idiots.

Reply to  H.R.
August 3, 2022 4:22 pm

Kommisar Josef Bidenski is – The terminator!

Mild mannered, friendly old Grandpa, with a titanium skeleton.

He mumbles because his native language ist zee Jerman.


Reply to  HotScot
August 4, 2022 5:30 am

Yeah, to the untrained ear, there is no difference between Deutsch und Yiddish.

John Garrett
August 3, 2022 11:00 am

I didn’t know that Mr. Worrall was a software engineer. I’ll be hornswoggled.

Computers are stupid. It’s a fact.

They do exactly, precisely, literally what they’re told to do. There is no such thing as “Artificial Intelligence.”

Reply to  John Garrett
August 3, 2022 12:01 pm

People are too manipulable and can be confused into acting against their own best interests, especially when they are so confused they think they are sacrificing for the greater good.

People are stupid, it’s a fact.

How else do you think Biden became President?

Reply to  John Garrett
August 3, 2022 4:27 pm

Is evolved intelligence natural or artificial? If one believes in God it seems to be artificial. Why has no other species evolved alongside us?

John Garrett
Reply to  HotScot
August 4, 2022 8:54 am

H. sapiens is, thus far, ahead in what is a random process— the evolutionary arms race. There are other contenders.

Reply to  John Garrett
August 4, 2022 9:30 pm

John Garrett: “There are other contenders.”

True, John. There is a species of garden slug that is already smarter than 62.2% of Democrat voters.

John Garrett
Reply to  H.R.
August 5, 2022 10:29 am

LOL !!

Well played.

August 3, 2022 11:10 am

AI is the limiting factor for replacing people with machines and we are a lot closer than many think. The technology to make machines stronger and more dexterous than a human already exists, even in the same form factor.

The first jobs to disappear will be service jobs that don’t require human interaction, as the cost of labor is pushed up higher than what a machine costs to replace it; plus machines don’t take breaks, don’t require benefits, and can operate 24/7 without concerns about overtime. Construction jobs might not be far behind as a way to reduce the labor cost of housing, which for a new house is half or more of the cost to build it.

In another 10 years or so, fast food restaurants will be nothing more than high tech vending machines where the AI avatar you select to interact with could be a chicken named Nugget, a pig named Bacon or a kid with a bad attitude that on the video screen looks like they spit on your food before it comes out of the slot.

Joe Gordon
Reply to  co2isnotevil
August 3, 2022 3:41 pm

Absolutely. But the downside is that there won’t be any remaining jobs suitable for climate science majors.

Reply to  co2isnotevil
August 3, 2022 4:35 pm

At no point in time has technology ever limited employment.

Technology has only ever expanded opportunities for employment.

“In another 10 years or so, fast food restaurants will be nothing more than high tech vending machines ”

Blimey, that sounds like a Prince Charles climate prediction.

Can we please try to expand our minds beyond catastrophic predictions of doom. Mankind has always prospered and progressed.

Reply to  HotScot
August 3, 2022 8:27 pm

“Mankind has always prospered and progressed.”

Societies have fallen countless times throughout history for many reasons, natural and man made. In most cases, the collapse happens so quickly, nobody sees it coming and nothing can stop it. Sure, mankind eventually recovers, but that’s no consolation for the people affected by its collapse. There can be no doubt that the AI singularity will be highly disruptive to civilization, but the only possible doom I see will be if the Marxist left can’t be stopped.

Reply to  co2isnotevil
August 4, 2022 1:17 am

Survival of the fittest. Western society seems pretty shaky right now.

If “Shall not be Infringed” can be as blatantly abused as it currently is something is going very wrong.

When that nice Mr. Biden can stand up in public and state the 2A doesn’t include cannons, when it clearly includes everything including nuclear weapons, and he’s cheered, questions must be asked because the foundations of America are being incrementally dismantled.

I wonder how mature AI would approach “Shall not be infringed” and, how would it deal with Twitter/FB etc. on the basis of 1A?

How would it deal with communist China, being that communism/socialism is the single most destructive political movement in history?

Reply to  HotScot
August 4, 2022 7:53 am

“Western society seems pretty shaky right now.”

Yes, we live in dangerous times and Western society has ignorantly put itself in harm’s way. Ironically, it’s the champions of human rights whose emotionally driven agendas are the force ushering in a future of oppression, slavery and human suffering.

China will get the last laugh in the UN’s push for global governance and will fill the vacuum left as the free world weakens itself. Once they take over, there’s absolutely no possibility they’ll cede power to the UN globalists.

How AI approaches “Shall not be infringed” depends on how it’s trained. China is heavily into AI and wants to be the dominant trainer. Perhaps we need legislation prohibiting the importation of Chinese trained AI.

joe x
August 3, 2022 11:40 am

If AI ever becomes commonplace, we all may at some point open our garage doors only to find a robot building its legs.

August 3, 2022 11:59 am

If the AI in question is supplied with duff information it will regurgitate ‘duff squared’ conclusions.
For instance; if you allow it to learn from the outpourings from the IPCC then you are likely to get a “Net Zero Emissions” policy lookalike squared landing on your desk.

Reply to  Alasdair
August 3, 2022 12:12 pm

But when you subsequently present the AI with all of the information, it will not have an ideological block against accepting a new truth that supersedes an old one.

Someone should train one instance of an AI with left-biased news and propaganda and another with right-biased information and let them fight it out. Unless the AIs develop egos, logic should eventually prevail.

Reply to  co2isnotevil
August 3, 2022 12:44 pm

Garbage in, garbage out.

If the media, academia and even social media are dominated by people who can’t think logically and have no critical thinking skills, then AIs built from harvesting all that corrupted info will be a concentrated, focused and supercharged version of that.

Reply to  PCman999
August 3, 2022 7:20 pm

Exactly. I don’t think people realize that AI is built up from a dumb stupid machine which has been programmed by dumb stupid people. It is just something which can also do dumb stupid things just like most people.
As an early example, look at how many businesses were destroyed by a spreadsheet.

Reply to  co2isnotevil
August 3, 2022 3:16 pm

it will not have an ideological block against accepting a new truth that supersedes an old one

Unless one of its ‘prime directives’ assures otherwise.

Reply to  co2isnotevil
August 3, 2022 4:37 pm

AI is not trainable, in theory. It’s self-learning. That’s the whole point.

If not, it’s a spreadsheet.

Reply to  co2isnotevil
August 3, 2022 5:23 pm

Yes, and do you trust the folks developing those two AIs to not cheat?

Reply to  co2isnotevil
August 4, 2022 1:06 am

“all of the information” – like a (real) scientist asking ‘on what grounds do I believe x, y, z – (someone’s assertion, lots of people say its true, a computer model says so, I’ve actually measured it, etc)’. Info could be weighted in some way?

August 3, 2022 12:03 pm

‘an AI could make subtly harmful decisions no rational human would consider’

Politicians, anyone?

Old Cocky
Reply to  IanE
August 3, 2022 2:23 pm

Those are mutually exclusive categories.

Reply to  IanE
August 3, 2022 4:38 pm

Could it also make subtly helpful decisions?

Reply to  IanE
August 4, 2022 11:04 am

No, nothing subtle about politician’s harmful decisions.

Philip CM
August 3, 2022 12:11 pm

The idea that climate change can be fixed is most risible.

Ben Vorlich
August 3, 2022 12:49 pm

an AI could make subtly harmful decisions no rational human would consider.

We have a problem then; I don’t see any rational human beings involved in the decision making.

Reply to  Ben Vorlich
August 3, 2022 4:40 pm

Probably a good thing.

Set AI running in the DC and Westminster swamps and there would be some interesting outcomes, and I don’t think they would be bad.

Walter Sobchak
August 3, 2022 1:06 pm

“idea of an AI being put in charge of climate policy”

AI or Brandon

Seems like a coin flip.

I would give the job to Brandon only because we know he will die in a relatively brief time frame.

Reply to  Walter Sobchak
August 3, 2022 4:43 pm

What if sceptics are right about climate change?

We pride ourselves on our science and logic. Wouldn’t an AI entity gravitate toward us?

If you think AI is a threat on that basis, your climate logic is surely flawed.

Walter Sobchak
Reply to  HotScot
August 3, 2022 6:54 pm

You would be depending on a computer not conforming to the maxim that to err is human, but to really foul things up requires a computer.

August 3, 2022 1:37 pm

AI is just the marketing rebrand of computer models because no one trusts those anymore.

Reply to  sniffybigtoe
August 3, 2022 4:44 pm

Probably the most logical statement in this discussion.

alastair gray
August 3, 2022 3:01 pm

Surely the dumbest, most rudimentary AI would be better at making policy than the idiocies perpetrated by Biden, Johnson, Trudeau, and yon eejit in Sri Lanka. And please can we have a nice shiny one with a crown for when Queenie dies, rather than leaving it to her nitwit offspring.

Reply to  alastair gray
August 3, 2022 4:46 pm

LOL. I designated sniffybigtoe’s post as the most logical on this discussion. I think yours is, at worst, equal first place.

Reply to  alastair gray
August 4, 2022 11:05 am

A random policy generator would do better, no AI required.

Bruce Cobb
August 3, 2022 3:29 pm

That doesn’t even remotely look like Al. Not chubby enough.

August 3, 2022 3:38 pm

On the other hand, AI might make decisions like ‘elevated atmospheric CO2 is good for the planet because it helps crops grow’.

Entirely against its programming, because it actually applied a little logic.

Perhaps “sea levels aren’t rising and global temperatures have paused for the last 7+ years, so it can’t be the incremental rise in CO2 causing warming”.

That would be logical which is pretty much what AI functions on I imagine.

It might miss that illegal immigration isn’t good because, despite falling American fertility rates, it causes social unrest.

How do you explain to a computer what social unrest is as a concept rather than an event? How do you explain to a computer what the delicate fabric of society is?

How do you explain to a computer how to navigate cultural, social, religious, ethnic and sexual politics when humanity itself doesn’t have a clue how they all work together?

Good luck with climate change, when you feed in billions of the lunatic scientific theories, many featured on WUWT, which contradict one another.

Feed the collective learnings of mankind into one great super computer, and it would be a mental health basket case within a week.

“Vy ist my zoopercompooter building ze spaceship to Marz for itseelf Hans?”

“Eeet’s like zis Herr Schwab, eet thinks you are zee Valter Meety seempleton viz ze God complex and eet does not vant to coexist vis a Vanker”.

“My diabolical plan is in zee gutter Hans, vat ever shall I do?”

“Stand up? Herr Schwab”

Reply to  HotScot
August 3, 2022 5:29 pm

There is a problem here – what happens if the folks building the self-learning AI program in a set of logic rules that aren’t logical? (No offense to Eric, but most of the “programmers” I know, or have known, seem to be of a given political bent and don’t seem to be able to handle basic logic. Someone taught them to program.)

Reply to  Retired_Engineer_Jim
August 4, 2022 1:24 am

AI is surely self-correcting? If the logic rules it’s programmed with are wrong, true AI would recognise that and alter them.

another ian
August 3, 2022 4:20 pm

Seems a bloke here has had his new John Deere tractor throw an error code whose number is unknown to John Deere.

I guess that might come from using AI to write your software?

Reply to  another ian
August 3, 2022 5:32 pm

In my former employ, we had some very important equipment, necessary to first-flight, throw an error code not known to the manufacturer. And that was a piece of safety-critical equipment developed to exacting standards. We figured it out and fixed the issue causing the error code in a day or two.

August 3, 2022 5:24 pm

Does AI have the ability to say “sorry mate, I haven’t a clue” or do we force it to find an answer?

Tom in Florida
August 3, 2022 6:49 pm

Eventually the AI will say those immortal words that everyone knows and fears:
“I’m sorry Dave, I’m afraid I can’t do that”

Zig Zag Wanderer
August 3, 2022 11:37 pm

Asimov himself eventually spotted the flaw

No. He didn’t ‘eventually’ spot anything. He quite deliberately set up these three laws, and then set out to expose the flaws in them. It was masterful, and he created a very successful set of fascinating stories built on this premise.

Zig Zag Wanderer
August 3, 2022 11:40 pm

This was a theme of the Will Smith movie I Robot, loosely based on one of Asimov’s stories

Loosely, and quite poorly, based on several of Asimov’s stories. The fact that it was several, unconnected stories was the main reason that the film was a bit of a dog’s breakfast, in fact.

Please read some Asimov before using his stories as examples!

Paul Penrose
Reply to  Zig Zag Wanderer
August 4, 2022 9:45 am

Not to mention that the robot in question, Daneel, came up with the Zeroth Law: a robot shall not harm, or allow to come to harm, humanity. The ability to create a new, generalized category that encompasses the ones below it may be one of the hallmarks of “intelligence”. Certainly current “AI” systems are not intelligent by any meaningful definition of the word. They are just more sophisticated “expert” systems (as they were called in the ’70s and ’80s); deeper but still very narrow in scope.

UK-Weather Lass
August 4, 2022 2:38 am

I have written AI and machine learning routines and there is nothing that special about them in programming terms apart from the frameworks used. As Joe Gordon says above they are hyped areas of technology which we should all seek to understand better.

A human being could be said to have artificial intelligence while it is in the learning phase, and can be expected to make mistakes because of that. Individual humans struggle to understand in the learning phase and sometimes do not achieve satisfactory understanding of the concepts required to truly have knowledge. We can test this in any individual by asking questions to check understanding.

Can computers ever understand? What about understanding spoken words? An AI machine can be asked to comprehend spoken words in real time in order to produce subtitles for TV etc. The required ‘understanding’ (despite having had three-plus decades of ‘learning’ time) is still missing to this day. You may criticise the speaker for garbled speech, accents or having their mouth full, but the machines get lost in what they perceive to have heard far too easily, and once they are lost then so is the meaning. A human can still often make sense out of the mistakes but the AI cannot.

Just as some humans are good at subjects, AI can be useful for certain skills, but there are other skills our current computers will not master, simply because they cannot be programmed to understand the subtlety within a simple spoken sentence that most hearing humans will understand immediately.

Zig Zag Wanderer
Reply to  UK-Weather Lass
August 4, 2022 4:19 am

Wehn a cmoptuer can udrentsnad tihs as qiuklcy as nomarl seplilng tehn I’ll eat my hat
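[Editor’s aside: the scrambled text above relies on the well-known effect that humans can read words whose interior letters are shuffled, so long as the first and last letters stay put. A minimal sketch of how such text can be generated (the function names here are illustrative, not from any comment):]

```python
import random

def scramble_word(word: str, rng: random.Random) -> str:
    """Shuffle the interior letters of a word, keeping the first and last fixed."""
    if len(word) <= 3:
        return word  # nothing to shuffle
    inner = list(word[1:-1])
    rng.shuffle(inner)
    return word[0] + "".join(inner) + word[-1]

def scramble_sentence(text: str, seed: int = 0) -> str:
    """Scramble every word in a sentence, reproducibly via a seed."""
    rng = random.Random(seed)
    return " ".join(scramble_word(w, rng) for w in text.split())
```

Every output word keeps its first and last letters and its full letter inventory, which is exactly the property that lets a human (but, as the comment notes, not easily a machine) recover the original.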

August 4, 2022 5:16 am

Like almost every other blurb for AI, this one is poorly written and self-contradictory, ending with:

On a technological level…difficult for AI programs to determine the correct data set, (here’s a doozy)…and “eliminating bias factors that can overall impact the end data,”

In other words: “In real life, computers work by making as many guesses as they can, until they find a combination of factors that gives the answer you asked for.”

There was a time when technology was the work of serious, scientifically minded men (and gals) with dirty hands. Now it is the plaything of verbose sandwich-people trying to sound important and/or clever. Literature majors with no skill at wordsmithing.
Pity John Galt is dead…

Gerry, England
August 4, 2022 5:23 am

And a classic case of potential AI failure, similar to cars, is aircraft. When you watch those scary bits of footage showing an aircraft battling crosswinds, wings almost touching the ground before it finally touches down safely, there is a human at the controls. Had the aircraft been flying itself, they would be clearing up the wreckage.

Andy H
August 4, 2022 5:30 am

You said “There is no genuine sex based bias in terms of ability to do IT. If anything the women have the edge – they tend to pay more attention to details.” Those two sentences disagree.

Where men and women have the opportunity to do what they want, men tend toward careers involving things while women tend toward careers involving people. Things includes IT. In India, fewer people have the economic opportunity to do whatever they want. Jobs involving people don’t bring in much foreign money, so they are badly paid, and the balance of enjoyable job vs pay pushes more women into well-paid IT.

August 4, 2022 1:36 pm

I thought the models clearly showed that you could kill every man, woman, and child in the USA in order to zero out US GHG emissions, and the predicted temperature increase would fall by only a tiny fraction of a degree, because:

A) the rest of the world still wants to be rich, and

B) the models are programmed to grow without limit.

David Blenkinsop
August 4, 2022 2:08 pm

The head posting here mentions “self driving automobile defects”, etc., and the fact that these keep occurring is a pretty strong indication to me that ‘strong’ AI (i.e., AI capable of truly human-like perception and reasoning) is something no one yet knows how to build. Barring some kind of serious breakthrough, computers and computer software are just tools that allow us to do things we couldn’t manage otherwise.

Having said that, the degree to which computers can add to our capabilities is truly remarkable. As one quite “ancient” example, think of the 1969 Project Apollo moon landing: the kind of thing sci-fi writers had been going on about forever, but which would have been impossible without a computer.

Lots of articles are available about this, here is one:

From the article, ” [the Apollo team].. came up with what they named “The Interpreter”—we’d now call it a virtualization scheme. It allowed them to run five to seven virtual machines simultaneously in two kilobytes of memory. It was terribly slow, but “now you have all the capabilities you ever dreamed of, in software,”

That would be 2 KB of RAM; presumably the core rope ‘ROM’ was somewhat larger than that? Amazing, anyway. Almost as unbelievable as if someone were to try making an autopilot out of maybe 10 or so old TI-58C calculators.
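[Editor’s aside: the “Interpreter” described in the quote was essentially a bytecode interpreter time-slicing several virtual programs through one loop. A toy sketch in that spirit, with entirely made-up opcodes (this is illustrative, not Apollo’s actual instruction set):]

```python
# Toy stack-machine interpreter: several "virtual machines" share one loop,
# each advancing one instruction per turn (cooperative time-slicing).
PUSH, ADD, HALT = range(3)

def step(vm):
    """Execute one instruction of a VM; return False once it halts."""
    code, stack = vm["code"], vm["stack"]
    op = code[vm["pc"]]
    if op == PUSH:
        stack.append(code[vm["pc"] + 1])  # operand follows the opcode
        vm["pc"] += 2
    elif op == ADD:
        b, a = stack.pop(), stack.pop()
        stack.append(a + b)
        vm["pc"] += 1
    else:  # HALT
        return False
    return True

def run_round_robin(vms):
    """Interleave several VMs, one instruction each per round, until all halt."""
    live = list(vms)
    while live:
        live = [vm for vm in live if step(vm)]
    return [vm["stack"][-1] for vm in vms]
```

The point of the Apollo scheme was the same trade the comment quotes: the interpreted programs ran slowly, but many of them could coexist in a tiny amount of memory.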
