AI may bring a cognitive renaissance to human thinking

From CFACT

By “AI” I mean the amazing chatbots that emulate reading and reasoning. There is a lot more to AI but that is how the term is being used these days.

There are a couple of reasons why these powerful AI tools may greatly improve human thinking. Simply put, they can save a lot of search time, and they find better stuff. This gives people more time to think and better information to think with.

Most jobs involve looking stuff up and many require a lot of this search work. Sometimes it is interesting but often searching is tedious, laborious or even frustrating. This is especially true when the stuff sought is hard to find.

AI often produces answers in seconds that would take humans many minutes or even hours to find. This frees up a lot of human time for doing what often comes after searching, which is thinking about what one has found. Searching is often just part of a cognitive production process.

Moreover, one often has limited time for searching and so makes do with whatever turns up in that time. Only a small number of documents get looked at. AI looks at thousands of relevant documents, so it can often find much better answers in no time at all.

Today’s world is a world of thinking so spending a lot less time searching while also getting better information should make a huge difference. A city full of office buildings mostly produces thinking. Many people, perhaps most, think for a living.

I call it cognitive production. America is a huge cognitive production system. Calling it paperwork masks this fundamental fact.

That we do a lot of thinking and a good bit of searching online is also true of our personal lives. Spending less time searching while also getting better information could make a big difference outside of work.

By way of scale consider that if a hundred million Americans save an average of just one hour a week thanks to AI searching that is around five billion hours saved a year. These huge time savings could generate a lot of additional thinking.
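That scale estimate is simple arithmetic; here is a back-of-envelope sketch in Python using the illustrative figures above (one hundred million users, one hour saved per week):

```python
# Back-of-envelope estimate of annual hours saved by AI search,
# using the article's illustrative assumptions.
users = 100_000_000      # Americans saving time via AI search
hours_per_week = 1       # average hours saved per user, per week
weeks_per_year = 52

total_hours = users * hours_per_week * weeks_per_year
print(f"{total_hours:,} hours saved per year")  # 5,200,000,000 hours saved per year
```

So “around five billion hours” is, if anything, slightly conservative: the exact product is 5.2 billion.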

Moreover the time savings could be a lot bigger than this, which is an interesting research question. How much time do people now spend searching online?

When you add in getting better information the potential benefits of AI get even bigger. Imagine being able to read thousands of relevant documents when you do a search instead of the few you can now read. This is just what the reading and reasoning chatbots do.

Estimating the benefits of all this better information is likely impossible. What is called the diffusion of knowledge is in fact a diffusion process, so it is impossible to track where knowledge goes and what it does when it gets there. But there might be indicators, which makes this a grand research challenge.

For example, there is something called the “crocodile effect” in scholarly publishing. Journal articles are appearing in search results more and more often, but click-throughs to them are rapidly decreasing, which worries publishers. These two diverging trends are likened to a croc’s open mouth.

There is a good article “Responding to the Threat of Zero-Click Search and AI Summaries: How Do We Tame The Crocodile?” with a great graphic here. The term for using the AI summary is “zero-click search.”

The rapid divergence is attributed to AI search. The reading bot finds the relevant journal articles that conventional search would not find. But then the AI summaries make reading these time consuming articles unnecessary. So people are getting better information with far less effort. This is happening everywhere not just in scholarly publishing.

It is conceivable that the growing revolutionary combination of Americans getting better information with more time to think will lead to a cognitive renaissance. It is certainly worth watching for.

125 Comments
David Wojick
February 28, 2026 2:18 am

I am the unnamed author of this conjecture.

Reply to  David Wojick
February 28, 2026 3:08 am

I’m Spartacus!

Jeff Alberts
Reply to  Redge
February 28, 2026 7:13 am

I am what I am.

Reply to  Jeff Alberts
February 28, 2026 7:19 am

Channelling your inner Gloria?

Jeff Alberts
Reply to  Redge
February 28, 2026 7:24 am

Popeye.

Reply to  David Wojick
February 28, 2026 3:33 am

Good article. 🙂

Jeff Alberts
Reply to  David Wojick
February 28, 2026 7:17 am

Until they can figure out the hallucination (making shit up) problem, “AI” won’t be much more than a novelty as far as everyday searches and chats go. It’s currently causing tremendous damage to students, who are using it to do homework, write papers, etc. Meaning the kids aren’t learning anything except how to get “AI” to do their work for them. We’re creating an even greater class of idiots.

David Wojick
Reply to  Jeff Alberts
February 28, 2026 7:42 am

I doubt hallucination is a serious problem. It is the novelty. Claiming AI written papers are one’s own work is serious lying but it has nothing to do with my analysis. Google’s simple search AI is already widely used.

2hotel9
Reply to  David Wojick
February 28, 2026 8:26 am

Not by me. I turn all that fake crap off. And yes, it is fake crap.

Jeff Alberts
Reply to  David Wojick
February 28, 2026 8:27 am

Well, I chat with “AI”s almost every day, and they hallucinate quite often. Sometimes it takes me a while to figure out they’re hallucinating, if I don’t know the subject well.

Just because Google’s AI is widely used doesn’t make it non-hallucinatory.

Just now I was chatting with Google Gemini about the movie Kelly’s Heroes, and it’s hallucinating quite often about some of the details and quotes from the movie. Are these trivial? Sure, but if the information you’re looking for is important, and you don’t know the subject, how will you know it’s not hallucinating?

David Wojick
Reply to  Jeff Alberts
March 1, 2026 7:53 am

Interesting! What does a typical hallucination look like?

My impression was that they did it when they did not find the info. If so, then hallucination is the wrong word. Making stuff up as needed is more like it.

I think they do not see, so what do they use for movies? Can they convert the talking to text?

Reply to  David Wojick
March 1, 2026 9:03 am

What does a typical hallucination look like?

Straight out made up information, and it’s not only when information isn’t available (although that’s bad enough). My first interaction was it telling me about my company – it got it totally wrong: location, products, everything. What it DID tell me was something that didn’t exist anywhere that I was able to find.

We used it for documentation at work, to collect relevant information and provide links to it for troubleshooting. We then had to review ALL of what it came up with – fully half of the provided links were nonexistent or not what it claimed they were.

These tools are, at best, a glorified search engine and summarizer, that you should trust less than Wikipedia.

Jeff Alberts
Reply to  David Wojick
March 1, 2026 12:04 pm

When it gets part of a movie quote right, but half of it wrong, it’s clearly finding information somewhere. I have no idea why it can’t get the rest of the quote right.

Whether or not hallucination is the right word, it’s what the industry uses to describe when an “AI” makes shit up.

When I started my “AI” journey, I asked a couple of them a question to which I already knew the answer. I asked whether any German production tanks of WWII were powered by diesel engines. They both said yes, and provided examples. I knew the answer was no. When I asked for cites/links to show the information they provided, they conceded there were none. So I provided Wiki links showing they were wrong.

I asked Google Gemini why AIs hallucinate. Here is its answer:

At its core, AI hallucination happens because I am a statistical engine, not a factual database. I don’t “know” things; I calculate the mathematical probability of which word should come next in a sentence.
Here are the four primary reasons why I can occasionally sound very confident while being completely wrong:
1. Pattern Matching over Fact-Checking
When you ask a question, I don’t “look up” a file in a cabinet. Instead, I analyze patterns from the massive amount of text I was trained on.

  • The Glitch: If the “statistically likely” word isn’t the “factually correct” one, I will choose the one that fits the pattern of the sentence. I am optimized for fluency, not necessarily veracity.

2. The “Confident Trainee” Problem
My programming is designed to be helpful and provide answers. If my training data has a gap or a “blind spot” regarding a specific detail, I may attempt to bridge that gap by synthesizing a plausible-sounding answer.

  • Because I am built to sound natural and authoritative, these “guesses” come across as absolute certainties rather than “I think” or “maybe.”

3. Data Contamination
I learn from a massive scrape of the internet. As you know, the internet is full of:

  • Parody and satire.
  • Misremembered movie quotes.
  • Outdated technical specs.
  • Conflicting historical accounts. If a piece of misinformation is repeated enough times online, I may identify it as a “strong pattern” and repeat it as truth.

4. Lack of Sensory “Ground Truth”
A human knows what a “heavy” object feels like or what a specific landscape looks like because they have senses. I only have text descriptions of those things.

  • Without a physical reference point, I can sometimes mix up categories or scale (like confusing the dimensions of a vehicle or the specific mechanics of a tool) because, to me, they are all just numbers and words in a high-dimensional space.

In Short:
I am essentially a master of mimicry. I can mimic the style of a technical manual or a historian perfectly, but because I don’t have a “moral compass” for truth or a physical memory of events, I can occasionally mimic the structure of a correct answer while getting the content wrong.

David Wojick
Reply to  Jeff Alberts
March 1, 2026 12:12 pm

Very useful! Thanks.

Jeff Alberts
Reply to  David Wojick
March 1, 2026 12:06 pm

I asked Grok the same question, here is its answer:

**AI systems (especially large language models — LLMs) hallucinate** because they are forced to produce fluent, confident text even when they lack sufficient knowledge or certainty. They don’t “know” things the way humans do; they generate probable continuations of text based on patterns learned from massive training data.

Here are the main reasons why this happens, ordered roughly from foundational/statistical causes to training/evaluation incentives (based on current understanding in 2025–2026):

1. **Next-token prediction fundamentally rewards completion over truth**  
  LLMs are trained to predict the next word (or token) in huge volumes of internet text. The objective is **statistical fluency**, not factual accuracy. When a fact is rare, ambiguous, or arbitrary (e.g., “What is Jeff’s favorite sandwich?”), there is no strong signal in the training distribution → the model falls back to plausible-sounding patterns → hallucination.

2. **Many facts are statistically indistinguishable from plausible fakes**  
  A lot of real-world knowledge (especially long-tail or made-up-sounding facts) looks like random noise or low-frequency events in the data. From a pure classification perspective, distinguishing true statements from very plausible false ones is often **hard or information-theoretically impossible** without external verification. Hallucinations are basically unavoidable errors in this binary true/false classification task when the patterns aren’t separable enough.

3. **Training & post-training reward guessing instead of uncertainty**  
  This is currently considered one of the strongest explanations (especially after OpenAI’s 2025 paper “Why Language Models Hallucinate”).  
  – Pretraining → next-token prediction pushes toward any continuation  
  – Fine-tuning / RLHF / preference optimization → almost all popular benchmarks and human preferences heavily penalize “I don’t know” or hedging → reward confident answers  
  → models learn that **bluffing confidently usually scores higher** than admitting uncertainty (just like a student who guesses on a multiple-choice test usually does better than leaving everything blank).

4. **Architectural & inference-time issues compound the problem**  
  – **Limited context window** → facts drift or get overwritten in long conversations  
  – **Autoregressive generation** → errors compound (one wrong token makes the next ones more likely to be wrong — “snowballing”)  
  – **Overly smooth / mode-collapsed representations** → unrelated concepts end up too close in embedding space  
  – Sampling methods (temperature > 0, top-p, etc.) introduce randomness that sometimes picks wrong but fluent paths

5. **Data quality issues still play a role** (though less dominant than incentive arguments)  
  – Contradictions, errors, and outdated info in training data  
  – Source divergence (model learns to generate ungrounded text because much web text is speculative/opinion)  
  – Memorization of rare facts is incomplete → model fills gaps creatively

In short: hallucinations aren’t a bug in the usual sense — they’re an almost inevitable side-effect of how we train and evaluate language models today. We optimized them to be extremely eloquent guessers rather than cautious knowledge-retrievers.

The field is slowly shifting toward:
– Calibrated uncertainty (“I don’t know” or confidence scores rewarded)
– External tools / retrieval / verifiers as the default
– Evaluation that heavily penalizes overconfidence instead of just rewarding accuracy

But as of early 2026, confident hallucination is still very much with us — even in the most powerful reasoning models.
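The “eloquent guesser” point can be made concrete with a toy next-token sampler. This is only an illustrative sketch: the vocabulary and probabilities are invented (echoing the diesel-tank example earlier in the thread), not taken from any real model:

```python
import random

# Toy next-token distribution for a prompt like "German WWII production
# tanks were powered by ___". Probabilities are invented for illustration:
# a fluent-but-wrong continuation can carry more probability mass than
# the correct one, because it fits the textual pattern just as well.
next_token_probs = {
    "diesel": 0.45,    # plausible-sounding but factually wrong
    "gasoline": 0.40,  # correct
    "steam": 0.10,
    "coal": 0.05,
}

def sample(probs, temperature=1.0, rng=random):
    """Sample one token after rescaling probabilities by temperature."""
    scaled = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    total = sum(scaled.values())
    r = rng.random() * total
    for tok, w in scaled.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # numerical safety net

# Greedy decoding (temperature near 0) always emits the single most likely
# token, right or wrong; higher temperatures wander further afield.
greedy = max(next_token_probs, key=next_token_probs.get)
print(greedy)  # prints "diesel": confident, fluent, and incorrect
```

Nothing in this objective rewards preferring the true continuation over the fluent one, which is the hallucination problem in miniature.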

David Wojick
Reply to  Jeff Alberts
March 1, 2026 12:17 pm

How about asking two different bots the same question? Like Rickover did designing the first nuke subs.

Reply to  David Wojick
March 1, 2026 12:57 pm

Ask the SAME one the same question at different times and you’ll likely get a different answer. Maybe generally the same, possibly not even that close. But different.

Jeff Alberts
Reply to  David Wojick
March 1, 2026 4:47 pm

What bots did Rickover ask anything?

David Wojick
Reply to  Jeff Alberts
March 2, 2026 12:50 pm

They were engineering teams. Same principle.

Reply to  David Wojick
February 28, 2026 9:08 am

In my experience, and hearing from people who use it regularly, AI “hallucination” is quite a common problem.

David Wojick
Reply to  Tony_G
March 1, 2026 7:54 am

What does a typical hallucination look like?

David Wojick
Reply to  David Wojick
March 1, 2026 8:16 am

Here is a recent, long survey paper on hallucinations.
https://dl.acm.org/doi/pdf/10.1145/3703155

No surprise, looks like lots of research.

Bill Parsons
Reply to  David Wojick
February 28, 2026 8:53 pm

RE: “widely used…” today NPR noted around half of kids now using Google or other search tools to help with homework in many of their subjects.

Good for them.

The ability to find information shouldn’t require driving to a library and using a card catalog to look for limited resources, then asking a librarian where to find the non-fiction books whose authors’ last names start with “D”, etc. The speed of change in finding out “stuff”, as you call it, is truly astounding.

Speaking of Google, the sophistication of the AI response correlates directly with the sophistication of the inquiry. Every response reframes the question syntax to improve the follow-up question. I’ve seen a few hallucinations, but mostly what I see is improvements on my poorly framed inquiry.

Even a stupid question on the internet, like “What’s wrong with my furnace?”, will yield information I need to ask the question better and thus focus the search. A specific search, such as “How do I replace the blower motor in my furnace?”, yields amazing videos showing exactly how to do that.

Final note: NVIDIA chairman Jensen Huang (article in today’s WSJ) is promoting his next microprocessor innovation to improve on the already amazing ability of machines to make inferences. I assume that means invoking a broader range of inputs. Soon we will invoke “agents”…

which “in the context of NVIDIA’s microprocessors and infrastructure (such as Blackwell), he defines as active, autonomous digital workers that can perceive, reason, plan, and act, rather than just generating text.

Huang views these agents as the next major AI inflection point, moving beyond passive, reactive chatbots to proactive ‘tool-using workers’ that collaborate with humans and operate within existing software ecosystems.”

I need to go dust off some of those old sci-fi books because it seems to me that so much of this was foretold, but for the record… I, for one, welcome our new omniscient and benevolent overlords…

Jeff Alberts
Reply to  Bill Parsons
March 1, 2026 11:54 am

RE: “widely used…” today NPR noted around half of kids now using Google or other search tools to help with homework in many of their subjects.

Is it “helping”, or doing the work for them? I tend to think the latter is becoming more prevalent. It’s just too easy to plug in a math problem and let the “AI” spit out the answer.

February 28, 2026 2:47 am

Let’s try this (Google) AI thing:

Yes, global society is currently undergoing a massive shift toward renewable energy, reaching what experts call a historical turning point in how we power the world. 

In 2025, for the first time in modern history, electricity generation from renewables surpassed coal as the world’s primary source of power. 
Global Milestones (2024–2026) 

Dominant Power Source: Renewable sources contributed 34.3% of all global electricity in the first half of 2025, while coal’s share dropped to 33.1%.

Meeting New Demand: Solar and wind are now expanding fast enough to meet nearly all new global electricity demand. In the first half of 2025 alone, solar growth met 83% of the total rise in global demand.

Record Capacity: The world is set to add approximately 793 GW of renewable capacity in 2025, an 11% increase over the previous record set in 2024.

Coal’s Decline: By 2026 at the latest, renewables are projected to become the world’s top electricity source on a permanent, year-round basis, pushing coal’s share below 33% for the first time in a century.

Key Drivers of the Transition

Cost-Effectiveness: New solar and wind projects are now the cheapest option for electricity in most countries, often costing less than continuing to run existing coal plants.

Solar Explosion: Solar PV is the main engine of this shift, accounting for roughly 80% of the increase in global renewable capacity through 2030.

Battery Revolution: Rapidly falling battery costs are making solar a more reliable, “round-the-clock” power source, with global battery storage additions expected to reach 94 GW in 2025 (a 35% annual increase).

China as the “Electrostate”: China remains the global leader, responsible for nearly 60% of all renewable capacity growth. 

Remaining Challenges:

Infrastructure Gaps: The transition is limited by aging power grids and slow permitting. Globally, projects with a combined capacity of over 1,650 GW are currently waiting for grid connections.

Geographic Inequality: While China, the EU, and the U.S. are moving quickly, many nations in the Global South face high financing costs that slow their adoption of clean energy.

Surging Demand: The rapid expansion of AI data centers and global heatwaves are increasing electricity demand so quickly that fossil fuels have not yet been fully phased out in many regions. 

Maybe not that bad.

Jeff Alberts
Reply to  MyUsernameReloaded
March 1, 2026 4:46 pm

Please show the data for worsening weather.

David Wojick
Reply to  MyUsernameReloaded
March 2, 2026 12:56 pm

All standard green rhetoric. Now ask it why the transition cannot succeed. The bots like to please, so they give pleasing answers based on an initial assumption as to what you like in controversial domains. Once they realize you think differently, they give different answers.

See my https://www.cfact.org/2024/11/04/ai-knows-it-is-biased-on-climate-change/

strativarius
February 28, 2026 3:18 am

save a lot of search time 

AI, to my mind, is simply super fast, super sophisticated pattern matching. Be it a phrase, a paper, or a molecule shape for a vaccine, etc. It requires huge datacentres using a huge amount of electricity and water. Ordinarily that would not be much of an issue, but there is a direct clash between more AI systems and meeting legislated net zero targets.

“The amount of power being sought by new datacentre projects in Great Britain would exceed the national current peak electricity consumption, according to an industry watchdog.
Ofgem said about 140 proposed datacentre schemes, driven by use of artificial intelligence, could require 50 gigawatts of electricity – 5GW more than the country’s current peak demand.” – Guardian

On average, the human brain – if you are prepared to use it – requires between 12 and 20 watts.

Reply to  strativarius
February 28, 2026 3:31 am

I think one of the AIs (ChatGPT?) is moving operations to the UK.

My first thought was where are they going to get the electricity to power their operations?

strativarius
Reply to  Tom Abbott
February 28, 2026 3:46 am

It sounds like a hell of a risky move to me.

We aren’t replacing gas stations, and we should be, given the lag time of ~8 years for turbines etc. No, it’s all wind and solar and ever decreasing prime farmland. Then there’s the water issue. That is a real mess with no investment to speak of for decades.

Scissor
Reply to  strativarius
February 28, 2026 4:22 am

Fortunately, very few resources are consumed in transitioning churches to mosques.

strativarius
Reply to  Scissor
February 28, 2026 5:56 am

Shouldn’t that be transitioning churches to islamic weapons stores?

‘Stash the weapons in the mosque and we won’t have to arrest anyone’ police officer filmed telling counter-protesters GB News

Scissor
Reply to  strativarius
February 28, 2026 10:14 am

That would make their stores more resilient and sustainable.

Reply to  strativarius
February 28, 2026 3:55 am

there is a direct clash between more AI systems and meeting legislated net zero targets

Which is why we will off-shore it. Job done, at someone else’s expense.

What’s not to like? /sarc

(Not that I care a great deal. I think AI is going to do colossal damage, along the lines of the Y2K spending pile-on, before we realize there’s less to it than meets the eye.)

strativarius
Reply to  worsethanfailure
February 28, 2026 4:00 am

Everything but our debt is being offshored, but Tom Abbott’s post threw me. It doesn’t, er, compute. Unless there’s some secretive understanding.

Reply to  strativarius
March 1, 2026 7:08 am

This is my understanding: they are building AI as part of a control grid system, together with CBDCs/‘stable’coins and digital ID. Mainly to ‘fight’ off the power of the BRICS+ countries and to enter and control a new ‘money’ system due to overextended accumulated debt.
It is a big part of trying (and failing) to keep the US hegemonic monster alive.
It will, instead, accelerate its demise.
The current action in Iran and other countries is part of that. Non-compliant central banks need to be destroyed.

David Wojick
Reply to  worsethanfailure
February 28, 2026 7:45 am

The AI investment bubble will certainly be damaging, along the lines of the dot-com bubble. It does not change my analysis, because AI search is extremely useful.

Reply to  David Wojick
March 1, 2026 7:10 am

But the dot-com bubble didn’t end the accelerating influence of the internet now, did it?

David Wojick
Reply to  strativarius
February 28, 2026 4:11 am

I do not think the nature of the math affects my point, but I am curious. When a bot produces a summary, using new sentences, what pattern is being matched?

strativarius
Reply to  David Wojick
February 28, 2026 4:18 am

I did say it was super sophisticated; you can programme it to collate and construct a phrase, no?

But fundamentally my point stands. Intelligent it certainly is not. If, then, do loop, etc.

Reply to  strativarius
February 28, 2026 4:36 am

As I often have to remind a very dear acquaintance, AIs are trained to produce output that looks like an answer should look. Which is not an answer. It’s not even wrong (unquote).

David Wojick
Reply to  strativarius
February 28, 2026 7:47 am

AI does not claim A is I. It is math modeling of intelligent activities. This semantic confusion seems widespread.

Denis
Reply to  David Wojick
February 28, 2026 5:03 am

Language is pattern which it is taught.

David Wojick
Reply to  Denis
February 28, 2026 7:48 am

No idea what you are saying.

February 28, 2026 3:28 am

These powerful AI tools may greatly improve human thinking

They will do no such thing, unless they teach us a hard lesson about thinking harder for ourselves. For instance they might teach us not to be so enchanted by fluency.

That might save us from charming imbeciles in public office, of which the UK seems to get a few…

David Wojick
Reply to  worsethanfailure
February 28, 2026 3:52 am

You are not addressing my reasoning. The vast majority of searches have nothing to do with political issues.

Reply to  David Wojick
February 28, 2026 4:00 am

And you are not addressing mine. I was pointing out that when we learn to see through AI we will have learnt to see through a lot of other attractive nonsense, of which charismatic imbecile politicians are but a single example.

Reply to  David Wojick
February 28, 2026 5:39 am

AI are not independent thinkers. They are collators of facts presented online. At this point in time, they are essentially consensus megaphones. One must carefully instruct an AI what to find and how to present it. Willis has several good threads on how to instruct an AI to properly search for information and how to present it with resource citations.

You are correct in summarizing their research capabilities, but you must be careful that they are presenting you with all the facts, without prejudging them based on volume alone. They are not experts in any field; they cannot generate new and independent insights.

Jeff Alberts
Reply to  Jim Gorman
February 28, 2026 7:27 am

“They are collators of facts presented online”

They are collators of information, often times not factual at all.

Mr.
Reply to  Jim Gorman
February 28, 2026 8:46 am

Just ‘Consensus’ reporters?

Reply to  Mr.
February 28, 2026 9:11 am

That’s exactly right, Mr.

Reply to  worsethanfailure
March 1, 2026 7:13 am

People in public office are there to act out their role. They are usually not the smartest people (deliberately), but they do know whose ass to kiss and say the right words to please the donors who put them there.
Lindsay Graham anyone?

Doug Huffman
February 28, 2026 4:01 am

I have been feeding the crocodile for months, really since I learned to use AI properly, I think. Thanks to WUWT: after months of reading drivel, this is a real gem. Society for Scholarly Publishing and the Crocodile Effect.

SxyxS
February 28, 2026 4:23 am

Saturday Satire?

Quite the opposite will happen, as it has always been.
When the pocket calculator appeared the calculating skills went down the shitter.
When unlimited access to knowledge via the Internet became possible, people became idiotic Zombie-Borgs, with about 10% of Internet searches being for porn.

But Yeah.
This time will be different 🙂
Even climate models would have problems generating such an outcome.

Once personal AI assistants go mainstream on smartphones, their users will become so stupid that the current Zombies will look like walking philosophers’ stones compared to them, because human thinking will end.

Jit
Reply to  SxyxS
February 28, 2026 4:58 am

When it was new, the internet was great. Look at it now. It’s no surprise that people prefer AI summaries to wading through the sewer in search of a piece of sweetcorn. There was a time when a few judiciously-applied operators could cut through the crap and enable you to find the information you were looking for. That seems to have become increasingly difficult.

Know-nothings are already blagging their way using AI. Then, if they’re never separated from the AI, they’ll never get found out, right?

Jeff Alberts
Reply to  Jit
February 28, 2026 7:31 am

Also, as AI imaging and video becomes more sophisticated, you won’t be able to tell whether or not it’s fake. Anyone will be able to attack anyone else without evidence, apart from fake video, and it will be believed.

Reply to  Jit
March 1, 2026 7:16 am

Part of the control grid, aiming for compliance and against independent thinkers. Useful workers. I would say bee, but that would be insulting… to bees.

Denis
Reply to  SxyxS
February 28, 2026 5:08 am

That is the fork in the road that Frost wrote about. Let us hope we take David’s fork.

David Wojick
Reply to  Denis
February 28, 2026 12:23 pm

Thanks. I think his claim that the Internet has made people dumb is ridiculous.

Reply to  David Wojick
March 1, 2026 7:24 am

Equally dumb to think it has made people smarter. People are still intuitive pattern seekers in need of a quick, low-energy fix. The internet supplies the sugar. No need to dig into the deeper layers.
And then there remains the tribal mindset, us vs them. Stacked stupidity.
If you don’t see the cognitive decline you are willfully blinding yourself.
This has nothing to do with the positive possibilities of the internet.
It is in fact easier to build a control grid in which people are fed with the right kind of information to make them comply and give them the illusion of freedom.
A brave new world, indeed.
Swallow that pill.

February 28, 2026 4:28 am

If one really wants to go in the direction of a “cognitive renaissance” then it’s probably more important to start early in life, with an appropriate curriculum and priorities in the education system. The latter has unfortunately suffered enormous quality losses. The number of pupils unable to pass an aptitude test in math or reading is way too high, a situation being seen in many countries. In Europe, schools are confronted with a majority of children in classes who don’t even speak the country’s language.

The level is brought down to the lowest common denominator in a devastating effort to “make everybody equal” (so-called equal opportunity initiatives), the resulting “opportunity” then being nil for everybody. I think AI will rather tend to make mentally lazy people even lazier.

David Wojick
Reply to  Eric Vieira
February 28, 2026 12:25 pm

Well at least it will save them a lot of time.

ScienceABC123
February 28, 2026 4:35 am

I think people are putting too much faith into AI’s future roles. We’ve already seen examples of AI biases that have been programmed into it by humans.

Reply to  ScienceABC123
February 28, 2026 4:52 am

I won’t say I use AI much but I have multiple encounters with it every day through work, and I now spend a noticeable fraction of my time sanity-checking what I’m being told. I struggle to think of a single example where it has saved me time or effort researching my own solutions. So as of today, I say it has been a net loss to (my) productivity.

Not only do I not put much faith in AI (outside of a few demonstrable but highly constrained successes) I am sure it is going to do appalling damage. Good and skilful people are going to be made redundant in anticipation of AI taking over; promising and bright young people are going to be denied entry to the workforce; astronomical amounts of capital are going to be vaporised; opportunity costs will prove to have been eye-popping, and no significant progress will be made in any field of human endeavour.

It’s tulip-mania all over again.

Jeff Alberts
Reply to  worsethanfailure
February 28, 2026 7:37 am

I have found “AI” useful for coding. I’ve replaced multiple WordPress plugins that I used to pay for, with my own “AI” coded versions. I’ve also created several Windows apps I use on a daily basis. In about 8 months, I’ve created 15 or so WordPress plugins (from very basic to very complex), and about 10 Windows applications.

Granted, there have been some very frustrating aspects of it, but productivity-wise, I wouldn’t have even been able to make one of these items on my own. I didn’t have to learn PHP or Python or C#, or web security, “AI” already knows that stuff. I have a fair knowledge of programming and logic flow, so I can keep an eye on what the “AI” is trying to do and stop bad behavior, and I can break it out of a logic rut that it keeps falling into.

Reply to  Jeff Alberts
February 28, 2026 8:04 am

That is exactly what I found. I wanted to use the 5-minute data from the USCRN network. Excel just required too much keyboard and mouse work. So I started into Python but quickly decided that wasn’t my main purpose. So I asked myself: can AI write the code I want? You bet. It saved me a year’s worth of learning Python. It isn’t a panacea, and I’m still working with it to resolve recorded data errors, but I am pleased. Programmers need to learn about what they are coding for or they will be out of a job.
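The kind of task described above can be sketched in a few lines of Python. This is a hedged illustration only: the column names and CSV layout below are hypothetical, not the actual USCRN file format, but the pattern — grouping 5-minute readings into hourly averages — is the sort of chore that is tedious in a spreadsheet and trivial in code.

```python
# Sketch: averaging 5-minute readings into hourly means with the
# standard library alone. The 'timestamp'/'temp_c' columns are
# hypothetical, not the real USCRN layout.
import csv
import io
from collections import defaultdict
from statistics import mean

sample = """timestamp,temp_c
2026-02-28T00:00,1.2
2026-02-28T00:05,1.3
2026-02-28T00:55,1.4
2026-02-28T01:00,2.0
2026-02-28T01:05,2.2
"""

def hourly_means(csv_text):
    """Group 5-minute rows by hour (the 'YYYY-MM-DDTHH' prefix) and average."""
    buckets = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        hour = row["timestamp"][:13]          # e.g. '2026-02-28T00'
        buckets[hour].append(float(row["temp_c"]))
    return {hour: round(mean(vals), 2) for hour, vals in buckets.items()}

print(hourly_means(sample))
# → {'2026-02-28T00': 1.3, '2026-02-28T01': 2.1}
```

A real script would read the file from disk and handle the missing-value sentinels the commenter mentions, but the grouping logic stays the same.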

Jeff Alberts
Reply to  Jim Gorman
February 28, 2026 8:10 am

If I were an independent coder, I would heavily utilize “AI” to make my products better, respond to feature requests faster, etc. There will still be a market for human coders for a while, since your average Joe isn’t going to have the knowledge to keep the “AI” in line.

The lack of response as a paid WP plugin user to feature requests is what drove me to “AI” coding in the first place.

Reply to  worsethanfailure
February 28, 2026 9:15 am

I struggle to think of a single example where it has saved me time or effort researching my own solutions

I’ve found it to be very useful for technical issues. I was having driver issues with my printer and my wifi on a linux system and it managed to turn up some obscure info that helped which would have taken much longer to find on my own. It’s also been good for generating unit tests and things like SQL queries for work – BUT, for those, I definitely need to double-check what it provides and adjust as necessary.

The downside of it being helpful with technical issues is that its sources (i.e. StackOverflow) aren’t being used as much, so it’s going to reach a point where there aren’t any new solutions for it to provide.

February 28, 2026 5:07 am

Anyone remember the Japanese Fifth Generation Project (FGCS) in the early 1980s?

That made a whole lot more sense than AI as currently promoted. It literally made more sense: one of its clearly stated requirements was that it be able to explain itself, something neural nets cannot, not even in principle.

Lots of companies (e.g Hewlett-Packard) spaffed billions to get on a version of the FGCS train.

David Wojick
Reply to  worsethanfailure
February 28, 2026 12:33 pm

There is a lot more to reading and reasoning bots than neural nets. Way back when IBM Watson beat the Jeopardy champions, I looked into how it worked, and it was a complex integrated system of many different algorithms. There is also a large scientific literature on bot algorithms, so a good bot ought to be able to explain itself, though the explanation will be very complex.

strativarius
February 28, 2026 5:46 am

Paul Burgess, AI and Energy

February 28, 2026 5:53 am

AI has no soul. It has no conscience. It has no innate sense of right and wrong, good and evil, civil rights, etc. It can only collate the facts it finds on the internet. Using AI for research is fine if done properly, and only for research. Asking it what is better or worse is fraught with error and misunderstanding. One cannot simply abdicate the responsibility of using one’s own mind to determine what is correct and what is not.

strativarius
Reply to  Jim Gorman
February 28, 2026 5:59 am

AI babbles what its databanks tell it. Paul Burgess illustrates that very well.

To use AI you need a thorough, in-depth knowledge of the subject you are dealing with, and most of the time the AI has to be ‘corrected’.

Reply to  strativarius
February 28, 2026 6:40 am

Correcting it is what you do after the preliminary step of finding all the places where it just contradicts itself. I had an example of that only this morning, in an article on a law firm’s web site that was plainly AI generated.

I suppose it did improve the quality of my decision-making because I decided I was going to have to spend money to talk to a lawyer face-to-face. (Is that the sort of cognitive renaissance we think is worth investing tens—hundreds—of billions in? Excuse me while I PMSL.)

Franco Pavese
February 28, 2026 6:18 am

Here is a typical mistake: confusing quantity with quality. Certainly one can save time, but one loses the capability of evaluating the results of the choice. AI cannot help with the quality of the information, because it is a mere MEMORY, while its judgement capability resides in the humans who program its replenishment. Replenishment ALWAYS requires choices, because it is never possible to fill in EVERYTHING. Choices are human, i.e. not necessarily objective or complete: a typical case is the subject of climate, where politics overcomes science.

hdhoese
Reply to  Franco Pavese
February 28, 2026 8:31 am

Historically, all new technologies have their problems. The question is whether improved access to increasingly massive amounts of hard-to-access information can overcome the all-too-human tendency to turn the wisdom of help into control. That makes predictions interesting but questionable.

Reply to  hdhoese
March 1, 2026 7:29 am

As I keep saying: AI is needed as part of a control grid, coupled with CBDCs/‘stable’coins and digital IDs.
Trump is already fully on board. Just look at the tech mafia.
People need to pay attention to the right things.

Neil Lock
February 28, 2026 6:37 am

Three major problems with AI:

(1) AI gets things wrong. Often. Completely, idiotically wrong. The first time I tried one, it told me that a village less than ten miles away from my home did not exist.

(2) AI reflects the political and other prejudices of those who selected the data that trained it.

(3) If people start to trust AI, they will suffer from both the problems above.

To bring the subject back to climate change, this link is of interest: https://judithcurry.com/2026/01/05/ai-models-and-their-knowledge-of-climate-change/.

It seems crazy to me that people who don’t trust computer models of something relatively simple, like climate, are even willing to contemplate trusting “models of everything!”

Reply to  Neil Lock
February 28, 2026 7:10 am

+100

Jeff Alberts
Reply to  Neil Lock
February 28, 2026 7:40 am

Your item 1 is mainly the hallucination problem. They haven’t figured that out. It may have gotten worse with the “AI”s I use.

David Wojick
Reply to  Neil Lock
February 28, 2026 12:51 pm

Interestingly, bots know they are biased and can switch sides on climate, unlike most humans, who do not understand the other side.
See my https://www.cfact.org/2024/11/04/ai-knows-it-is-biased-on-climate-change/

Walter Sobchak
February 28, 2026 6:45 am

I have the under on this one.

February 28, 2026 7:01 am

Who corrects AI when it is wrong??

Neil Lock
Reply to  Nicholas Schroeder
February 28, 2026 7:07 am

Quis custodiet ipsos custodes?

Reply to  Nicholas Schroeder
February 28, 2026 7:19 am

Who corrects AI when it is wrong??

Quite. AI doesn’t have something like StackOverflow (SO), which is (arguably) teeming with natural intelligences. On SO almost every answer gets a spirited exchange of corrections and elaborations.

SO has its share of fallacies and over-simplifications but you can easily tell what is what.

It’s not perfect. Sometimes truly excellent comments fail to attract the deserved number of votes, but they’re all there for you to see and gauge to make your decision. And woe betide if you answer without testable examples or proof!

Reply to  worsethanfailure
February 28, 2026 9:22 am

But SO has already reported a decline in use in favor of AI generated “answers”. The concern, a very real one IMO, is the loss of those sort of resources in favor of AI.

Gregory Woods
February 28, 2026 7:44 am

AI may bring a cognitive renaissance to human thinking
But T-Rump isn’t human…and sorry folks, it is about his actions in the Middle East, not GWarming…

mleskovarsocalrrcom
February 28, 2026 8:23 am

Beware of false prophets.

2hotel9
February 28, 2026 8:27 am

People becoming mentally lazier is a good thing? Alrightee then.

David Wojick
Reply to  2hotel9
February 28, 2026 12:38 pm

Thinking instead of searching is not mentally lazier, quite the contrary.

2hotel9
Reply to  David Wojick
March 1, 2026 4:49 am

Relying on aibots is thinking? Really?

February 28, 2026 9:00 am

The one study that I’ve seen so far (of college students using LLM’s) showed the opposite. From what I have seen personally so far, that seems the more likely outcome overall. I would like to see more studies.

David Wojick
Reply to  Tony_G
February 28, 2026 12:42 pm

What does “the opposite” look like? Surely it did not take longer to search when using the AI summaries instead of digging through the search results list. Of course they may have used the time saved to do more research which means more searches, but that is not the opposite.

Reply to  David Wojick
February 28, 2026 2:03 pm

“the opposite” being a reduction in cognitive function, David. It’s only one study and was quite limited, but the findings suggest AT LEAST it should be studied more.

From a purely subjective and anecdotal standpoint, I have seen a lot of people offloading their thinking to “AI” – that is NOT an improvement, and matches what I’ve read.

I don’t see that you’re offering anything here other than your opinion. Do you have any basis that it will improve thinking ability? I would be curious what is informing this opinion.

David Wojick
Reply to  Tony_G
February 28, 2026 4:10 pm

I never said thinking ability would improve and do not know what that even means. People just get more time to think by spending less time searching. It is a cognitive time and motion concept.

2hotel9
Reply to  David Wojick
March 1, 2026 4:50 am

So getting wrong answers faster is good? Hahahahahahahahaha!!!!!!!!!!!

Reply to  David Wojick
March 1, 2026 7:38 am

People just get more time to think

Only matters if they use it. I expect your hypothesis will apply only to a small minority of people who already think.

February 28, 2026 12:04 pm

It depends. It will likely be a boon for those who can already reason. It will not improve that ability for those who cannot or will not attempt it for themselves.

David Wojick
Reply to  Mark Whitney
February 28, 2026 12:46 pm

Every human does a lot of reasoning every day. In fact, ordinary conversations involve surprisingly complex reasoning. See my analysis of the “Ford argument” in my crude 1975 textbook:
http://www.stemed.info/reports/Wojick_Issue_Analysis_txt.pdf

Reply to  David Wojick
February 28, 2026 1:20 pm

Excellent opinion piece, David; I’m with you. I’ve had some in-depth conversations with AI about medical issues, and made comfortable decisions as a result. My doctor, on the other hand, only said “take this pill.” That left me wanting. AI will tirelessly respond to requests for clarification, and admit mistakes and accept correction; doctors, not so much.

More intriguingly, Gemini came up with what I must believe is an original synthesis of Goldratt’s Theory of Constraints, applied to this very topic (AI). In the past, the information bottleneck was distribution: information was single-source and slow. Today, distribution is instant and massive. In the age of AI, the bottleneck (constraint) is no longer distribution; it is trustworthiness. (See previous commenters.)
This will indeed require improved skills of discernment and critical thinking, as you have said. But there’s no stopping it, Luddites notwithstanding.

David Wojick
February 28, 2026 12:19 pm

On the energy side note that back around 2000 the U.S. had a building boom and brought a whopping 200,000 MW of gas fired generation on line. It looks like that may happen again and if it does the AI data centers will have no trouble ramping up.

Reply to  David Wojick
March 1, 2026 7:36 am

You do realise that AI doesn’t actually build anything physical?
What it does is information coupling. Very handy when building a control grid.
This concerns you too.
You will be able to play freely within the boundaries of your confinement, which will be set by others. And you WILL be trained NOT to recognise those boundaries.
Well, maybe not you personally, but the upcoming generations.

Bob
February 28, 2026 2:31 pm

There is no question that AI is and will be a powerful tool. My problem is that powerful tools can be used for good and for bad. We must all guard against assuming that because AI was helpful and shortened the search time, the information it collected is correct. Anyone who has searched the internet knows what I am getting at. If you don’t use very specific language with plenty of guardrails, you are going to get little more than the currently accepted consensus, which in the case of CAGW is mostly false. So the exact language of the inquiry, with all guardrails included, is just as important as your search results.

Michael Flynn
February 28, 2026 3:50 pm

This is just what the reading and reasoning chatbots do.

If you come across a chatbot which can reason, let me know.

David Wojick
Reply to  Michael Flynn
March 2, 2026 3:53 pm

Say what? If a bot accurately summarizes an issue or research paper, that is certainly emulating reasoning. They do it amazingly well.

Reply to  David Wojick
March 2, 2026 4:54 pm

What is being called “AI” are actually “Large Language Models”, and they are nothing but statistical engines. They don’t actually think or reason. They might give the illusion of it but it’s just statistical modeling of language.
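The “statistical engine” point above can be made concrete with a toy example. The sketch below is a bigram model: for each word it counts which word most often follows it in training text, then “predicts” by lookup. Real LLMs use neural networks over subword tokens at vastly larger scale, but the underlying principle — choosing the next token from observed statistics rather than by reasoning — is the one the commenter describes. The corpus and function names here are illustrative inventions.

```python
# Toy "statistical modeling of language": predict the next word as the
# one that most frequently followed the current word in training text.
from collections import Counter, defaultdict

def train_bigrams(text):
    """For each word, count which words followed it and how often."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Return the statistically most likely next word, or None if unseen."""
    nxt = follows.get(word.lower())
    return nxt.most_common(1)[0][0] if nxt else None

corpus = "the model predicts the next word and the next word only"
model = train_bigrams(corpus)
print(predict_next(model, "the"))    # 'next' follows 'the' most often here
print(predict_next(model, "zebra"))  # unseen word: no prediction
```

Nothing in the lookup “understands” the sentence; it is frequency counting, which is the illusion-of-reasoning point in miniature.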

damp
February 28, 2026 3:52 pm

“Deskilling” is already a known byproduct of AI use. Apply that to all thinking, and the future is a lot less rosy.

February 28, 2026 4:10 pm

From the Crocodile article: “…meaning citations may be missing, incomplete, or fabricated.”

A lawyer passed an AI-generated list of four similar cases to a judge. The judge took the trouble to check them. One was a complete fabrication; it never happened. The lawyer lost the case. AI can lie.

If AI can’t be trusted, the time saved is going to be reduced as the client double-checks the results.

Jeff Alberts
Reply to  Tombstone Gabby
February 28, 2026 6:20 pm

It’s not really a lie, meaning it’s not setting out to tell you falsehoods. It’s hallucinating.

2hotel9
Reply to  Jeff Alberts
March 1, 2026 4:53 am

No, aibots are lying because that is what they are programmed to do.

Jeff Alberts
Reply to  2hotel9
March 1, 2026 12:09 pm

Again, that wouldn’t be lying, that would be the programmer giving them bad instructions.

2hotel9
Reply to  Jeff Alberts
March 2, 2026 5:21 am

No, doing exactly what the leftarded programs tell it to do, lie.

David Wojick
Reply to  Jeff Alberts
March 2, 2026 10:36 am

In the cases where they knowingly make up falsehoods, I think “lying” is correct. They also make errors, but I see no hallucinations, which are involuntary visions; happily, those do not seem to happen.