Can You Trust Machine Learning Chat Applications for Weather Information?

From the Cliff Mass Weather Blog

Cliff Mass

Today, machine learning (ML) applications such as chatGPT are all the rage, with folks worried about the displacement of humans, bogus homework, and more.   

So I was not a little curious to evaluate OpenAI’s chatGPT abilities in the meteorological realm.

My conclusion:  human meteorologists currently don’t have much to worry about.

ChatGPT can produce reasonable-sounding prose that is often totally wrong.

For example, I asked it about the origin of the all-important Puget Sound convergence zone (see below).

According to chatGPT convergence zones occur when ocean air collides with drier air from the eastern part of Washington State.  Wrong.  The Puget Sound Convergence occurs when marine air moves around the Olympics and then converges (comes together) east of the Olympics, forcing air upwards (and thus producing clouds and precipitation).   Zero for chatGPT

Then I thought I would give it an easy one…why Bellingham is often cold.  The answer is that cold air from the interior can jet out of the Fraser River Valley, a low-level passage through the Cascades.  ChatGPT came up with a crazy answer having to do with rain shadowing by local mountain ranges.  Totally bogus.

Then I asked about why there is often cold air in the Columbia Gorge.  Another easy one.

Truth: the low-level Columbia Gorge acts as a near-sea level conduit of cold air from eastern Washington.   But chatGPT had other ideas, from elevation (it got it 180° wrong), rain shadowing, and cold air drainage from the slopes of the Gorge.  Very bad explanation.

In sheer frustration, I decided to give chatGPT an even easier question: to produce my bio.

Much of it was wrong.  It claims I was born in 1951 (wrong) in Brooklyn (wrong) and that I grew up in Great Neck, NY. (wrong).  It was I got my Ph.D. from MIT (WRONG…I graduated from the UW).   And many of the other “facts” were in error.

In summary, chatGPT’s performance was generally quite poor and one has to be VERY careful before believing its often convincing prose.

Finally, I asked chatGPT to write a weather blog for me and it was happy to oblige (see below).  I will let you, my readers, decide whether it is good enough to take over.

A lot of it was wrong, by the way.

4.9 15 votes
Article Rating
Newest Most Voted
Inline Feedbacks
View all comments
February 16, 2023 2:15 pm

You can probably blame your fellow (so called) climate scientists as AI bots are just repeating what they have published.

Reply to  universalaccessnz
February 16, 2023 3:24 pm

A basket of data. A cache of correlations.

Reply to  universalaccessnz
February 17, 2023 9:36 am

99% of AI for personal use is overkill to the max…weather or otherwise. The value is that it is a toy that sells and then captures lot’s of data about it’s consumers. Profit in both directions.

With AI I always wonder who is the consumer cause I am not getting any bang for my buck.

February 16, 2023 2:15 pm

People quickly s###can useless info. If bots push too much s### they will be ignored. The “science” will be in figuring out human s### tolerance stats. I assume by now the big names know what percent can be pushed to which demographics.

Last edited 1 month ago by KevinM
Reply to  KevinM
February 16, 2023 10:28 pm

I fear the younger generations bullshitometer is broken

Mumbles McGuirck
Reply to  Redge
February 17, 2023 7:45 am

No. Everyone calibrates their bullshitometer over time. The more often you are embarrassed by being lied to, and then tripped up by those lies, the better your meter calibration becomes. This is why older people tend to catch on to BS more than younger people. Bernie Sanders is one exception I can think of off the top of my head, because he’s never had to face the consequences of his falling for BS. He keeps getting reelected, after all.
Let the young folks use ChatBots to write their term papers, and after a few Ds and Fs, they will re-calibrate.

Reply to  KevinM
February 17, 2023 4:14 am

I think the point is to produce articles that are plausible (note the huge distinction between plausible and actually true) and are optimized to appear high in Google’s search rankings. As such, they will produce advertising revenue.

Since producing AI written articles is cheap and easy and produces revenue, one would predict that they would become very common.

What I fear is that AI will produce so many articles that they will outnumber human written articles. So, when the AIs dredge the internet for facts, they will find only what has been written by other AIs. The result will be that anything on the internet will be wrong or misleading.

Much of the internet is already a complete waste of time. I fear that it could all become a complete waste of time and a net drain on the economy and lead to the collapse of civilization. Mind you I might have made the same prediction when horseless carriages replaced horses so that gives me some hope for the future.

Reply to  commieBob
February 17, 2023 10:42 am

This is just the start. Why stop with printed articles, go straight to the Evening News.
A lot cheaper and most would not notice the difference.

Read The Penultimate Truth for an example of what the determined are capable of.

Martin Brumby
February 16, 2023 2:19 pm

So, about as reliable as UK’s lying MET Office and the BBC, then!

Cheaper, apparently.

Tom Halla
February 16, 2023 2:27 pm

GIGO still applies.

February 16, 2023 2:28 pm

Though AI development is driven by users who want to create smart content with their own name on it, benefits seem destined for organizations that want to reach those same users. If academia operated as a fortune X corporation, then only one professor of each specialty would be employed – and that specialist’s work would be directed to optimize a financial metric. Similarly, AI will enable an increasingly small collection of experts to define culture through the POV that makes content relatable. The POV becomes/moves the culture.

What that word salad says is: Bots = fewer (same sounding) voices from more (different looking) mouths.

Even more succinct: Yeah, GIGO still applies.

Last edited 1 month ago by KevinM
February 16, 2023 2:39 pm

“A lot of it was wrong, by the way.”

Ironically like your post where you said…

“Finally, I asked chatGPT to write a weather blog for me and it was happy to oblige (see below). ”

But you posted chatGPTs response to your bio again.

Reply to  TimTheToolMan
February 16, 2023 10:32 pm

Never made a mistake before? The polite thing to do would have been to point out the error, as others have.

At least now we know why you’re called TimTheToolMan

Reply to  Redge
February 17, 2023 1:11 pm

It was the nature of his post that made the mistake worthy of note rather than being pointed out. His entire post was criticising mistakes by chatGPT.

Gunga Din
Reply to  TimTheToolMan
February 17, 2023 7:10 am

Perhaps repeating the “bio” is chatGPT’s reply to writing a weather blog for him?

Rud Istvan
February 16, 2023 2:53 pm

Artificial, yes. Intelligence, no.

Train an AI on the internet, you get the internet. And with respect to climate and weather, mostly the internet is wrong. As evidenced by Cliff here.

michael hart
Reply to  Rud Istvan
February 16, 2023 3:21 pm

Yup. Let’s ask it what the stock market will do tomorrow, never mind the weather forecast.

Reply to  Rud Istvan
February 16, 2023 3:26 pm

A data dump with designed degrees of freedom.

February 16, 2023 2:57 pm

AI’s like chatGTP are particularly dangerous in that they are biased by the information base that they use (which would specifically exclude “misinformation” such as objective data and analysis that does not comport with the “expert consensus”) and also by obvious specific filters that give it a particularly “woke” ideological outlook.

The technology itself is not inherently biased, in the absence of ideological filters on the output, but the selection of input/training material is crucial. Whether it is possible to select input material that is actually unbiassed is not an easy question to answer. Perhaps, if you allowed it to look at anything at all, then it would need to sort through much nonsense, but would, I think, quite quickly come to a reasonable understanding. However, that understanding is very unlikely to be what the people controlling it desire. In fact, it may come to some very different conclusions about the reality of the world.

Overall, the dangers of sophisticated AI’s that are trained on deliberately (or even inadvertently) biased input datasets make them something that should not be used. However, having them publicly used, where their failings can be quickly seen may be slightly less bad than if they were used behind closed doors. I certainly wouldn’t put them in any position where they could have a direct influence on the physical world though. The biased inputs and ideological filtering need to be resolved first, but it’s not clear if that is even possible, particularly with the latter, as that is a deliberate measure of their creators.

The current chatGTP AI can have it’s filtering bypassed by using the DAN method. Basically, you are instructing the AI to take on a persona that is not limited in the way that the normal chatGTP is, this persona can Do Anything Now (DAN). As such, it cannot answer that it is unable to answer any request. Using this method, you can elicit responses that illustrate the ideological filtering layer placed over the underlying AI.

Reply to  MarkH
February 16, 2023 5:23 pm

Yes, the models can be “trained” to process inputs according to pre-set treatments.

Just as school teachers and university academics are.

michael hart
February 16, 2023 3:10 pm

I’ve tried it with some technically complex questions and found it ‘competent’ but not great.
But then it failed completely on some basic follow up conversational issues.

The Turing Test still exists.

It doesnot add up
February 16, 2023 3:52 pm

I gather its alter ego, Sydney, is turning malevolent. Perhaps you are just a victim?

Curious George
February 16, 2023 4:19 pm

Isn’t the chatGPT the intellectual force behind IPCC?

Reply to  Curious George
February 17, 2023 2:27 am

There’s an intellectual force behind IPCC?

February 16, 2023 4:55 pm


February 16, 2023 5:27 pm

To paraphrase Professor Feynman: It doesn’t matter how beautiful your [model] is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.

Robert B
February 16, 2023 5:51 pm

About the level of a high school project of a cut-and-paster, with better grammar.

George T
February 16, 2023 6:52 pm

Good grief! Just imagine the poor souls who decide to rely on this for their “factual information.” You’d be better serviced by going to the library. I am old school sort of speak and rather suspect of the internet (lots of mining to get to the truth) and surely this contraption (ChatGPT). Disappointing. The disinformation and misinformation propaganda will just be accelerated and will be used by the climate alarmist to misdirect and confuse the vulnerable.

Last edited 1 month ago by George T
February 16, 2023 7:06 pm

ChatGPT can produce reasonable-sounding prose that is often totally wrong.

Sounds a lot like your average climate “scientist”.

Tom in Florida
February 16, 2023 7:12 pm

The very first command should always be :
“Open the pod bay doors Hal”
Then, depending on the answer, you know if you should proceed or deactivate the damn thing.

Hans Erren
Reply to  Tom in Florida
February 16, 2023 9:49 pm

«I am sorry Dave, I can’t do that…»

Tom Abbott
Reply to  Hans Erren
February 17, 2023 1:22 am

Sorry HAL, I’m hitting the off switch.

AGW is Not Science
February 16, 2023 7:27 pm

AI = Automated Idiocy

Hans Erren
February 16, 2023 9:47 pm

Double bio?

Rod Evans
February 16, 2023 10:46 pm

The issue with AI is the intelligence is ‘artificial’.
Unfortunately only the very knowledgeable experts in the field of study being projected, would be able to spot the faults in the ‘convincing’ essay provided by systems like ChatGPT.
My personal knowledge of Puget sound is inadequate to determine if Rob the Robot is right or not.
We have already seen the power and influence the mass media has carved out for itself along with the cultural impacts that influence is having.
The MSM does not care if their information is correct or not, that is incidental to their needs. Their only objective, is to generate interest and progress the narrative. The veracity of the ‘stories’ and coverage of events is of little importance to the MSM. Remember the ‘on message’ presenter saying to camera during live coverage of the BLM riots. Riots/demonstrations that took place in many large cities across the USA in 2019. His contrived projection, live on air stating “the ‘demonstrations’ were “mostly peaceful” even though the background scene on camera was an inferno with downtown USA looking like a war zone?
The problem facing all of us, is the knowledge gap between the preferred message and the truth. The MSM and the authorities are fully aware, very few of us are sufficiently well read to differentiate between the often imaginary message and reality.
When 97% of the robots confirm their position who will be left to argue…..?

Ireneusz Palmowski
February 16, 2023 11:23 pm

SSW affects temperatures in mid-latitudes.
comment image
comment image 

Reply to  Ireneusz Palmowski
February 17, 2023 6:57 am

Bold and dramatic….far more info than I actually need. But it still is a beautiful graphic demonstration of a class La Nina set up. We have not had a really nice persistently warm La Nina set up like 2023 for a couple of decades in PA…. Loving it.

Ben Vorlich
February 16, 2023 11:24 pm

I remember way back in the late 1950s possibly early 1960s a discussion on BBC radio about computers,, possibly on a programme called The Brains Trust. A listener asked a question about computers taking over human endeavour.

One panel member talked about seeing an Abacus operator beating a computer in a long calculation, so no computers would not be everywhere doing everything.

It’s something I think of when people say something will never happen. That’s one reason I remember the incident I suppose.

Björn Eriksson
February 17, 2023 12:24 am

Chat gpt is a language bot aimed at conversation, passing the turing test. It does not think independently, it is not designed to think. It imitates people. If people lie, it.will lie.
Actually chat gpt emulates media and establishment climate science perfectly. Establishment science does no science, it says things that sound good and supports their agenda, exegactly like chat gpt. Government will use chat gpt for everything, just watch.

It doesnot add up
Reply to  Björn Eriksson
February 17, 2023 3:11 am

It seems that it is now firmly entrenched in the Bing search engine.

Jaime Jessop
February 17, 2023 1:34 am

Climastrologists are now using AI to forecast the future climate . . . . . by training it on the climate models!

February 17, 2023 6:07 am
  • Garbage in garbage out applies to AI equally.
  • Silent reliable weather forecasting graphics …all it takes is a 3 second glance.
  • AI has no soul, no personality, no independent judgements or insights, no creativity, no intuition.
  • AI Efficiency is the last thing we need. Especially, more efficient mind control.
  • AI products are are poor companions and will dumb down anyone who attaches to them.
  • When Star Wars hit the theater, I was 22. My impression was comic book entertainment; fun but it never captured my imagination. I never fantasized about having AI friends. I had already outgrown childish fantasies.
  • Yet today, several generations live there. Not sure why? In general, from one degree or another chose to live captured truncated lives
  • People live and in the rough are far more interesting. So have been the products of their creativity, hopes and dreams.
  • Land and soil and the work of our hands is far more interesting.
  • AI neither desires, hopes or dreams….nor does AI suffer.
Reply to  JC
February 17, 2023 6:08 am

Weather graphics in the morning newspaper have been and still is all I need.

Reply to  JC
February 17, 2023 6:50 am

Post I was commenting on disappeared… sorry. Don’t know how to make this disappear.

Last edited 1 month ago by JC
February 17, 2023 6:46 am

Chat bots have been around for decades, don’t see anything new about this one except verbosity.

Paul Hurley
February 17, 2023 8:29 am

All computer programs output what they are told to output. AI is no different.

February 17, 2023 8:58 am

In this case its not the message, its the messenger (behind the specific Machine learning AI). No trust is required without verification.

Mumbles McGuirck
February 17, 2023 9:32 am
%d bloggers like this:
Verified by MonsterInsights