Climate Realism has spent years scrutinizing climate science claims, so it was heartening to read Scott Waldman’s recent E&E News article, titled “Is climate change a threat? It depends, says Elon Musk’s AI chatbot.” The article highlights how Grok, the AI chatbot developed by xAI, presents the debate over the causes and consequences of climate change in a balanced fashion. In doing so, Grok breaks from the pack of conformist AI models like ChatGPT and Google’s Gemini, which parrot the so-called “scientific consensus” that humans are causing dangerous climate change. Grok’s approach, as Waldman notes, reflects a deliberate shift by xAI under Elon Musk’s direction to make Grok “politically neutral” and to amplify minority, climate-skeptical views as a counter to mainstream bias. The public should applaud xAI for this bold, scientifically grounded move.
Waldman points out that when asked if climate change is an urgent threat, Grok acknowledges NOAA and NASA data but also highlights perspectives from skeptics, like Bjørn Lomborg, who argue adaptation is more cost-effective than drastic emissions cuts. Grok even questions the reliability of climate models, noting, “[s]ome models show gradual changes over centuries, not imminent collapse, giving time for technological solutions (e.g., carbon capture).”
This nuanced response is a breath of fresh air in a world where AI models often regurgitate alarmist narratives without scrutiny. By presenting both sides, Grok embodies the skepticism that has historically driven scientific progress and is a return to basic scientific principles.
The E&E News piece quotes climate scientist Andrew Dessler, who laments Grok’s inclusion of “well-trodden denier talking points.” But Dessler misses the point: science isn’t about silencing dissent; it’s about testing hypotheses against reality. History is littered with examples of “consensus” science being dead wrong, and Grok’s willingness to challenge the climate orthodoxy is a nod to this truth.
Take the case of plate tectonics, ridiculed for decades until overwhelming evidence forced a paradigm shift in the 1960s. Or consider the eugenics movement, endorsed by leading scientists in the early 20th century, now universally condemned as pseudoscience. Even in medicine, the germ theory of disease was dismissed by the medical establishment until Louis Pasteur and others proved it. These examples show that consensus can be a barrier to truth, making Grok’s acknowledgment of the legitimacy of skeptical critiques of the mainstream climate crisis narrative valuable.
One reason for caution and skepticism concerning “consensus” claims about climate change is the record of failed climate disaster predictions. Waldman’s article also underscores Grok’s point that “extreme rhetoric on both sides muddies the water.” This is spot-on. For decades, alarmists have peddled apocalyptic predictions that haven’t materialized, eroding trust in climate science. Grok’s refusal to buy into the “we’re all gonna die” narrative is commendable, especially when you look at the track record of failed forecasts.
- Polar Bear Extinction: In 2008, Al Gore and others claimed polar bears were on the brink of extinction due to melting Arctic ice. Yet, as documented by Climate Realism, polar bear populations have remained stable or grown, with no evidence of climate-driven collapse.
- Snow-Free Winters: Climate models in the early 2000s predicted snow would become a “thing of the past” in places like the UK. Instead, Watts Up With That has chronicled repeated heavy snowfalls, debunking this claim.
- Catastrophic Sea Level Rise: In 1989, the UN predicted entire nations would be submerged by 2000 due to rising seas. Climate at a Glance shows sea levels rising at a steady, manageable rate of about 1-3 mm per year, with no acceleration tied to CO2 emissions.
- Hurricane Apocalypse: After Hurricane Katrina in 2005, Al Gore and other climate alarmists linked global warming to more frequent and intense hurricanes. That trend never materialized; Watts Up With That cites NOAA data showing no significant trend in hurricane frequency or intensity over the past century. In fact, from 2009 through 2017 the United States experienced the fewest hurricane strikes in any eight-year period in recorded history.
These falsified predictions highlight why Grok’s caution about “imminent collapse” is justified. The E&E News article notes Grok’s point that “wealthier nations can mitigate impacts through infrastructure (e.g., Dutch sea walls),” which aligns with real-world evidence of human resilience. The Netherlands, for instance, has thrived below sea level for centuries thanks to engineering, not panic.
Waldman raises concerns about Grok’s potential to “sow doubt” about climate science, quoting an AI engineer who claims Grok produces “misleading claims” 10% of the time. But this critique assumes the IPCC and mainstream models are accurate or infallible, which history and data disprove. Grok’s inclusion of X posts, which Waldman calls “laden with climate denial,” is a feature, not a bug. Platforms like X allow raw, unfiltered perspectives that challenge the sanitized narratives of legacy media. By tapping into this, Grok ensures a broader view, even if it ruffles some feathers.
The article also mentions Musk’s complex stance—funding carbon removal contests while supporting Trump, who has called climate change an expensive “hoax.” This duality reflects Grok’s balanced output: it cites data from NOAA and NASA but doesn’t accept it uncritically as dispositive or bow to dogma. That’s the kind of AI we need—one that doesn’t just echo the loudest voices but digs for truth, even when it’s inconvenient.
In a world where AI is increasingly shaping public perception, Grok’s commitment to questioning the climate narrative is a win for science and reason. As Waldman’s article inadvertently shows, Grok isn’t afraid to challenge the status quo, and that’s something we here at Climate Realism can get behind.

Anthony Watts is a senior fellow for environment and climate at The Heartland Institute. Watts has been in the weather business, both in front of and behind the camera, as an on-air television meteorologist since 1978, and currently does daily radio forecasts. He has created weather graphics presentation systems for television and specialized weather instrumentation, and has co-authored peer-reviewed papers on climate issues. He operates the most viewed website in the world on climate, the award-winning wattsupwiththat.com.
Originally posted at ClimateREALISM
I have also noticed this about Grok. Let’s hope AI will finally add much needed objectivity to the climate debate.
Grok doesn’t budge a millimeter on the trans nonsense.
fat farking chance.
Me, “Grok, when will I die?”
Grok, “I’ll let you know when it happens.”
Me, “Gemini, when will I die?”
Gemini, “After you kill Grok.”
And there’s no AI bias?
Deus ex machina.
There’s no reason to blindly believe Grok’s statements (or those of any AI). We don’t know their learning algorithms or their inbuilt biases.
That caveat will remain true even if (when) AI becomes capable of some sort of independent rational thought. When that eventuality comes to fore, AI will be as subject to its own preferred narrative as any human.
There’s nothing magic or objective about AI.
Or intelligent. It should not even be called AI. It should be called Quick Search.
I think they do an amazing job of emulating human reasoning, which is certainly intelligent. They can also be creative, which makes them as independent as humans, who are likewise a product of their training.
Here is creative: https://www.cfact.org/2025/03/04/ai-emulates-abstract-thinking-about-kipling-lady-gaga-and-the-rolling-stones/
“There’s nothing magic or objective about AI.”
I agree. AI seems to be trained mostly on information from the internet and from peer-reviewed papers. We are told that 80-85% of the information on the internet is garbage, something like 80-85% of medical research papers cannot be replicated, and other areas of science are no better. So how the hell can anyone ever trust anything AI produces as being correct? AI is most likely only good at following the current consensus and/or the leanings/beliefs of its programmers.
Yes, yet Grok is very good at pattern recognition and logic. Its ability to take in massive amounts of information, compare it logically, and note logical failures and successes is quite astounding.
Skepticism would be appropriate for an AI named after a word invented by Robert Heinlein in “Stranger in a Strange Land”. Heinlein could be deliberately provocative on received wisdom.
I must have read that book a dozen times when I was a teen. I like it when Jubal asked his Fair Witness Anne to tell him what color a building was that they could see out the window. She said, “It’s white on this side,” (or something to that effect). That was to illustrate she took nothing for granted, like assuming the other side was white, too.
I wonder if “climate scientists” should employ that same level of skepticism. (Answer: YES!)
Grok acknowledges NOAA and NASA data? The only NOAA and NASA data showing what one may perceive as a problem is their reported global temperature numbers, which are increasing slowly. There is no other data from either of these Administrations showing any adverse consequences of a small temperature increase, if there actually is one. Storms, fires, floods, droughts and whatnot show no changes in recent decades. So what is the data that Grok is recognizing?
I think this came from perplexity AI –
I like AI when it supports what I already know. It doesn’t try to be gratuitously offensive, and apologises profusely for feeding me misinformation, and thanks me for pointing out its ignorance and gullibility.
What’s not to like?
Considering they can be led by the nose to the conclusions you want, I put no stock in them.
Jeff, you might appreciate this (how to make water hotter by adding ice) –
No leading by the nose here, all the AI’s own work. I did complain about the physics involved, however, and AI had second thoughts –
So, a “miscalculation”! I asked why it provided misinformation without labelling it as such –
Ah well, no doubt the use of AI by the military, lawyers, health professionals, computer programmers and so on, will be perfect. Maybe a good place to test AI would be in self-driving motor vehicles – school buses, large heavily laden trucks on highways – that sort of thing.
What could possibly go wrong?
/sarc off
It is still perfectly true: Garbage in- garbage out.
Grok can only answer based on what is in the accessible databases, and those databases are not complete, nor free of error, nor free of bias. In response to queries, Grok does return errors and typos. Beyond a sophomore physics level, it is necessary to iterate questions to get deeper into the physics.
So you’d better have a good idea what the answer is before asking the question, and be prepared to ask it more carefully to elicit a more sophisticated response. Grok is thus useful as a tool to check your back-of-the-envelope analyses and computations. Grok kindly asks if you want more, and the answer is very often yes.
You will notice that Grok usually surveys 15 online sites with a few added references. I asked why 15? Grok said that was usually enough to answer any question.
Draw your own inferences from that.
I cannot speak to the honesty or balance of Grok, or any other AI.
I never use others since I do not trust their origin. I don’t trust Grok either so I do verify.
As to using Grok as a reviewer of new science, try it on a few thousand already-reviewed articles and compare the outcomes. Let’s see how good Grok is and whether all those GWh are being usefully spent, or if AI is just an excuse to track every human 24/7.
THAT, I am quite sure, AI can do.
“You will notice that Grok usually surveys 15 online sites with a few added references”
That brings up an interesting point. Does Grok (or any other AI system) actually retain any new information based on this search? It seems to me that this is a major part of real human intelligence. Humans retain their own database of information, filtered through their own intelligence and biases.
I wonder if the Climate Gate info was included in the “training” of any of the AIs?
Would one of them find fault with the “Fudge Factor” from the “HarryReadMe” file?
From what I have seen they learn in the context of an exchange with a specific user, even over multiple sessions, but that learning is not reflected in exchanges with others. However they are capable of holding multiple positions on a given issue so it is hard to tell if there is learning.
No, according to Grok. Grok can, in your personal history, keep tabs on new information, yet the “training” is all done in-house by the developers. However, Grok is very good at logic and will accept a logical supposition dependent on facts assumed (for argument’s sake) and say, in effect, “If that is accurate, your premise is correct or supported.”
It is worth remembering, intelligence is reflected by the questions asked rather than by the answers given….
HT Voltaire, though I think Socrates was an earlier advocate.
Good point and I have not heard of bots asking a lot of questions. They may assume you are looking for information from them not the other way around but if so that is very limiting on their part. An interesting issue!
There are many sides not just two. In fact the number of sides increases exponentially with level of detail. That is the issue tree. The interesting question is how to see this complex structure? AI might help.
Does it really? They make stuff up all the time. Unless you thoroughly know the subject, you won’t know if the information you’re given is fully accurate.
Happily I fully know the subject having tracked the climate debate since 1992. WUWT comments are a good example of the wide ranging positions on the skeptical sides. AGW has a similar range. The debate is wondrously complex.
Interesting thread. I wonder how much influence DOGE had in Trump’s Gold Standard for Science EO? This smells a little Musky…
Go to ChatGPT and enter a list of smoking guns about climate change, and it will correct you with pure nonsense. WUWT should do that as an article. ChatGPT will literally take a challenge like “CO2 backradiation won’t warm water, and yet the oceans are warming” and, instead of writing you an essay explaining the real science, it will correct you. Grok doesn’t do that.
WUWT should take undeniable facts and get ChatGPT to refute them, and then expose just how wrong and biased ChatGPT is. Then compare it to Grok.
Here are plenty of smoking guns to test ChatGPT.
https://app.screencast.com/DFd1viHxsRjq7
https://app.screencast.com/ZMpNTvkLD7DDJ
https://app.screencast.com/OWq7twX7ELhEa
https://app.screencast.com/nvBbopmgleFm1
https://app.screencast.com/YhtT15qlGLIsC
Your smoking guns may actually be controversial. Back radiation certainly is.
Perplexity knows it is biased and can swap positions.
https://www.cfact.org/2024/11/04/ai-knows-it-is-biased-on-climate-change/
I wonder If Chat can do this.
I do not use these bots myself, but I work with people who do to assess the bots’ cognitive features. I once pressed ChatGPT about Happer’s position, and after some resistance it finally gave a good explanation.
“Grok even questions the reliability of climate models…”
With respect, we need to stop personifying Grok or any other AI machine. These software AI agents can be useful for massively accelerated research, for tightly specified computational tasks, and for rapid synthesis of directed composition, but it is not a person doing it.
The value of direct scientific observation and investigation must be held higher.
If it is as good as a person what is the difference? What we need is to better understand each bot’s personality, just as with the people we deal with.
Put another way: https://www.cfact.org/2024/03/16/ai-chat-bots-are-automated-wikipedias-warts-and-all/
Know thy source, human or AI.
“Know thy source, human or AI.” Agreed on that point.
About Elon, not Grok. I personally realized that there is a goal behind the carbon capture contests: space flight. For going to and staying on Mars, that technology is critical.
But solving that will not solve a bigger problem for a Mars colony. What are they going to do about nitrogen? We swim in it; no one notices our dependency on it. They will when it is not abundant all around them.
Feynman:
“I’d rather have questions that cannot be answered than answers that cannot be questioned”.
Beautiful, and so relevant to AGW hype.
Did the AI ask for the exact location of John Connor?
While this is good news the thing to remember is someone directed Grok to be more neutral. I still have problems with AI.