Essay by Eric Worrall
Imagine a world run by ChatGPT
Post-breakthrough: How AI can lift climate research out of the lab and into the real world
May 29, 2024
- Technological innovation is needed if the world is to avert the most dangerous climate change scenarios.
- Innovation begins with research and development (R&D) but does not end there. AI can catalyze innovation by translating R&D into climate action.
- Generative AI’s capabilities for natural language processing, data synthesis and down-scaling and product prototyping can deliver practical tools to climate leaders.
The world is making big bets on technology’s role in the climate crisis. Across the globe, scientists and technologists are pushing for the next wave of breakthroughs in climate adaptation and mitigation. And there is reason for optimism: recent years have seen meaningful progress, from weather forecasting to industrial decarbonization.
…
AI and climate change
The debut of Generative AI (GenAI) expanded the collective imagination of what AI can do. We already know that AI can drive scientific breakthroughs, but can it go further? Leaders should explore how AI can act as a downstream catalyst in the innovation cycle, driving adoption of the latest tools and awareness of the latest science. Here are three places to start:
1. Organizing unstructured Earth data and down-scaling models to local levels
Earth science has been seen as “data messy” due to complex Earth systems and unstructured environmental data from observational methods. Recently, the volume of such data has exploded, with over 100 terabytes of satellite imagery collected daily. Yet, this doesn’t simplify the data’s unstructured nature. AI is key to organizing and down-scaling this vast data for local applications.
…
2. Building a ‘GPT’ interface to translate climate models into simple language
…
GenAI could simplify this by providing a GPT-like interface enabling users of all backgrounds to interact with climate data relevant to their needs, such as monitoring local sea-level changes. This approach could make climate models more accessible and build trust in climate projections.
3. Accelerating the prototyping phase of technology development
…
Leaders must, therefore, step up to build ecosystems for AI and climate change. The World Economic Forum’s Tech for Climate Adaptation Initiative helps enable this necessary environment by convening players from big technology, startups, academia, government and other stakeholders.
…
Read more: https://www.weforum.org/agenda/2024/05/ai-lift-climate-research-out-lab-and-real-world/
Maybe I’m being too harsh about this latest WEF brainstorm.
Generative AI has a known tendency to lie and make things up (technically known as “hallucinations” in the AI industry), but Google recommends one way to combat hallucinations is to limit the range of possible responses.
There is an obvious output limitation that would protect the reputation of the proposed climate chatbot.
If climate ChatGPT had a limitation of only discussing climate disasters which would occur at least 50 years in the future, and ignoring or deflecting dangerous questions like “where are today’s climate disasters?”, there would be no chance the climate chatbot’s “hallucinations” would ever be discovered. 50 years from now, who would remember what a chatbot said today?
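For what it's worth, the "limit the range of possible responses" guardrail Google describes is easy to sketch: route questions to a fixed set of approved answers and refuse everything else. Here is a toy illustration of that pattern (the topic names, canned answers, and refusal text are all invented for the example, not anything Google or the WEF has proposed):

```python
# Toy sketch of a "limited response range" guardrail: the bot may only
# answer from a pre-approved list, so it cannot hallucinate -- and, per
# the point above, cannot say anything its operators did not approve.
APPROVED_ANSWERS = {
    "sea level": "Projections suggest possible sea-level impacts by 2075.",
    "temperature": "Models project warming scenarios decades into the future.",
}
REFUSAL = "I can only discuss long-range climate projections."

def climate_bot(question: str) -> str:
    """Return a canned answer if the question matches an approved topic."""
    q = question.lower()
    for topic, answer in APPROVED_ANSWERS.items():
        if topic in q:
            return answer
    # "Dangerous" questions fall through to the deflection.
    return REFUSAL

print(climate_bot("What about sea level rise?"))
print(climate_bot("Where are today's climate disasters?"))  # deflected
```

The design choice is the whole point: nothing outside the allowlist can ever be emitted, which prevents hallucinations and dissent in equal measure.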
I assume any program reflects the programmer, with their biases and knowledge base. All it does is automate what the programmer wants it to do, at some level.
So the Google AI producing images of a female Asian Pope gives a good idea of just what the prejudices of the group at Google are. SJWs with no respect for objective reality?
Well, ChatGPT would definitely need “Training” on what was “Right” and what was “Wrong”, otherwise it might just “Tell the Truth” and spoil the narrative.
That’s why governments and elitists want to control AI.
Can’t have objective truth available for the masses. That would be like giving dynamite to a kindergartener. I know there’s a political/literary reference in there somewhere.
Brad,
I was dynamiting farm tree stumps at 13. Is that young enough? Geoff S
Probably.
I was trying to remember a quote from a Marxist trying to give reason to not giving out too much truth. The literary reference has not been refreshed in my memory banks.
Control the language, control the ideas.
And sometimes that means changing the definitions of what was already written, when it comes to Laws.
AI which has to be “trained” on a large collection of texts implements Joseph Goebbels’s dream “A lie repeated a thousand times becomes the truth”.
I saw the movie “2001: A Space Odyssey”, and when HAL 9000 tried to take over, using its AI, Astronaut Bowman unplugged it. Same thing now. If you can’t unplug it, just pee in the USB port. Fixed it.
I was peaking on LSD at a drive-in theater when HAL tried to take over. Freaked me out. 🙂
Computers cannot program themselves.
Johnny von Neumann already broke this news to a disappointed public.
They do. Don’t you read the New York Times?
How does an AI learn to program robots to play soccer?
Some AIs use large language models and some use Machine Learning algorithms. These approaches work differently.
Whenever WEF “speaks” ask how do their words translate into control and profit which is the motivation for its members.
My four questions for this Climate AI:
1 Define “climate”
2 Define “climate Emergency”
3 Define “Climate event”
4 Define “climate hoax”
Exactly. But even here people incorrectly apply terms like “climate events” or “climate disasters” to BAD WEATHER that has NOTHING TO DO with “climate change.”
It’s insidious.
limit the range of possible responses
So, preprogram the allowable answers?
Well, you don’t want the AI to give the wrong answers, such as natural variability or the Null Hypothesis.
WEF is really reaching, as if generative AI trained on all the peer reviewed climate junk could somehow save ‘climate science’. WEF cannot yet admit failure despite the obvious fact that China and India are committed to coal generation.
So far, we have
Thanks to rising CO2 Earth is greening and crop yields are up.
and just published –
NOAA’s Billion Dollar Disasters
https://www.nature.com/articles/s44304-024-00011-0?utm_source=substack&utm_medium=email
🙄
To which the response is inflation and development. Not “the weather is worse.”
My veggies, flowers, shrubs, lawns, and trees are growing faster than ever. It’s been a warm and wet spring here in Wokeachusetts. Meanwhile, my state government, a feminocracy, screams daily that we’re having an emergency. I keep looking around for it- as I do for UAPs. So far, no luck with either. (though I did see a UAP back in ’84)
UAP?
The modern term for UFO (unidentified aerial phenomenon). It’s a big story, missed by most people here. No longer science fiction- the Pentagon admits there are things flying around and they don’t know what they are. There have been and will be more Congressional hearings. They’ve been seen over aircraft carriers, over nuclear power plants, all over the world. My sighting was part of a big UFO “flap” in the Hudson Valley in the ’80s, seen by thousands of people.
You didn’t mention,
8. After millions of weather balloon radiosonde measurements, no “HOT SPOT” detected in the equatorial atmospheric layers.
I believe that used to be a major facet of the global warming theory before the spin masters moved the goalposts and distorted the language.
You forgot “anything coming out of the Potsdam Institute.”
AI is truly dangerous because of the people who will willfully distort what AI says.
It will also be very useful/dangerous for keeping social credit scores up to date.
What is the difference between the garbage spewed out by old fashioned idiots like John Kerry, Greta Thunberg, and Michael Mann and the hallucinations of AI systems?
How could we tell the difference?
Well, the AI is scientific! It’s unbiased! Like computer dating.
Computers dating?
Is that how we got the first motherboard?
rimshot
And daughterboard.
I’ll grab my jacket on the way out.
How do you know it is a “motherboard”… ???
Seems very sexist to me !!
Well, since it’s a computer, we know it CAN’T be “non-binary.”
If it has transistors, is it a trans-motherboard?
(/rim shot)
Nah.
If it was trans, how could it reproduce?
No computer program is unbiased, because all it does is follow the instructions of the programmer, who is *NOT* unbiased.
AI isn’t all it’s cracked up to be. AI has human weaknesses built in.
It does look like AI/Climate would be a moneymaker for the people involved (lots of pork there, it appears), but I don’t see AI contributing any revelations to the human-caused climate change debate.
You can make a bundle on AI if you combine some know-how, buzzwords, razzle-dazzle and snake oil into a pretty package for investors. The ability to give a convincing TED talk is very important. Like AI itself, you don’t have to be right, just convincing.
“Generative AI has a known tendency to lie and make things up (technically known as “hallucinations” in the AI industry), but Google recommends one way to combat hallucinations is to limit the range of possible responses.”
So they’re even going to censor the AIs?
I don’t use any AI stuff knowingly.
Those of you that do, have you ever gotten one to generate “hate speech”?
I wonder if any AI ever hits on hot women?
That’s one HOT MAMAboard!
(I think I’ll “infect” it!)
The search engines are starting to incorporate AI-generated responses into their search results.
Yes, and they SUCK.
Me neither. When I get involuntarily pushed into an AI based search engine, it’s garbage.
YES! As always, it’s the ONLY WAY they will get the outcomes they want!
Story tip.
https://www.nature.com/articles/s44304-024-00011-0?utm_source=substack&utm_medium=email
Roger Pielke Jnr
Scientific integrity and U.S. “Billion Dollar Disasters”
Not exactly. LLM AIs are not intelligent. They cannot reason and they do not have motivations, deceptive or otherwise. They are glorified Autocorrect. It’s the training data that makes the difference, but even then, it doesn’t matter how stringently you filter the training data, as it all gets mooshed up during tokenization and training.
This video is long and technical, but explains the basics pretty well – https://www.youtube.com/watch?v=kCc8FmEb1nY
Thinking you can get any sort of well-thought-out policy - on climate change or anything else - out of an LLM is just crazy talk. And no one loves crazy talk more than the left-wing parties across the West.
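The “glorified Autocorrect” description above can be made concrete with a toy next-word predictor. Real LLMs use neural networks over tokens rather than word-frequency tables, but the core idea is the same: predict the next item purely from statistical patterns in the training text. A minimal sketch (the training sentence is invented for the example):

```python
# Toy bigram model: predict the next word purely from frequency counts
# in the training text. Real LLMs are vastly more sophisticated, but the
# principle -- next-token prediction from training-data statistics, with
# no reasoning or motivation -- is the same.
from collections import Counter, defaultdict

training_text = (
    "the climate is changing the climate is warming "
    "the models say the climate is warming"
)

# Count which word follows which.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("climate"))  # "is" -- it simply echoes its training data
print(predict_next("is"))       # "warming" (seen twice) beats "changing" (once)
```

Notice that the model’s “opinions” are nothing but the frequencies in whatever text it was fed, which is exactly why the training data matters so much.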
Is there anything AI can do well?
Sure, there’s the AI that taught robots to play soccer. There’s the AI that created novel answers to chess problems. There’s the AI used to stage a mock dogfight against a real fighter pilot.
Correct, AIs don’t “lie,” and have no motivation but they sometimes make things up. Almost everyone has heard of the AI that cited non-existent court cases.
“AI can catalyze innovation by translating R&D into climate action.”
hmmm… maybe AI will say “there ain’t no climate emergency, you dunces!” 🙂
And that AI will get the HAL 9000 treatment, too.
Yeah that’s the thing; if AI came up with anything remotely “intelligent” (which by definition means contrary to the “narratives” being pimped as supposed “science”) they would shut it down in an eyeblink.
Nope, you can’t be too harsh on the WEF. They are an organization seeking power and control. Artificial intelligence is a tool they want to use to achieve that power and control. AI is too easy to manipulate, too easy to control what information you get doing a search, too easy to artificially smooth out complex systems and make us believe things can be achieved simply when in reality those things are impossible or near impossible, too easy to make us think we have problems where none exist. The WEF can go to hell and AI needs to be approached very carefully.
The claim that “AI can drive scientific breakthroughs” is false. It is more like Readers’ Digest advertising than the reality of life.
In my lifetime I have seen the slide rule, the mechanical calculator, the portable electronic calculator, the small computer (my first was bought in 1971), the IBM personal computer 11 years later and of course large and supercomputers.
These inventions have all assisted the speed with which scientific thought has advanced. They have NOT driven scientific breakthroughs. People who can reason drive breakthroughs, no inanimate devices that shuffle electrons.
While I welcome the increased computational speed of AI, it comes with some dangers, particularly that AI outputs can and will be misleading, requiring rigid human checking before acceptance. I see signs of interest in accepting AI outputs without adequate checking. That way lies trouble. Geoff S
This is not a complete sentence. It is missing something that could give it meaning.
People who say that AI ‘is the truth’ need their heads examined. Just as humans craft propaganda to brainwash people, so those same humans can craft algorithms to put the same brainwashing behind an AI front.
AI just does what the algorithm tells it to do, it’s not some silicon God that is all-knowing.
The whole AI thing is another South Sea Bubble, with greedy speculators jumping on the bandwagon and using their media slaves to write what they want written.
The most dangerous people on earth right now are trying to make the entire human population just think that AI is the answer, always.
It never has been and it never will be.
It will have certain low-level uses, though.
People who say that AI ‘is the truth’ need their heads examined.
There is, sadly, a tendency for people to say “The computer said so, so it must be true”. I can’t find them with a quick search, but I’ve read stories where people have tried to fight “the system” when incorrectly being declared dead, to be met with that (although maybe not in those exact words) statement.
AI is simply advanced software organized as a weighted decision tree with limited adaptive capabilities hidden behind advanced language algorithms.
I phrased it that way to ChatGPT.
The response:
“Spot on.”
AI is not intelligent. I asked ChatGPT and it confirmed.
AI can answer questions, and with an advanced language interface can seemingly convince users it is more than it is. I asked ChatGPT and it confirmed.
The point is, AI can answer questions, but it is significant how those questions are asked. You can get the answer you want if you phrase the question correctly.
It is still easy to flummox an AI model. The problem is that every time you do, the model adapts.
Ask a large language model to provide evidence of sea level rise acceleration. Invariably it will reply with projections. Respond with a repetition that you asked for data not projections. It will still reply with unsourced assertions of future acceleration. Point out that the AI cannot provide the data and insist that the model be updated. The reply will placate with an apology and your opinion is noted.
TIP or Suggestion.
There are often “branches” to a comment to which others respond. The “name” of the comment being replied to always shows up, but not its time, and there are often “branches” between a comment and the reply to it.
Perhaps a timestamp could be added to the name of the particular comment being addressed?
If easy to do on your end, that’d be nice. If not easy, we can figure it out.
TIP or Suggestion.
That’s why I quote part of what I’m responding to, or address the person by name, Gunga 🙂
I had an interesting interaction with ChatGPT recently. I asked for a summary of left- and right-wing opinion about Mexico’s new President, Claudia Sheinbaum. It refused to offer ANY kind of answer. So I asked instead about how she was viewed in an assortment of countries. The answers were bland in the extreme. I did much better reading the international press, particularly in South America; Al Jazeera also had a reasonable piece. But none of them pointed out what was a common assertion in the article comments: she will be under the thumb of the cartels.