AI may bring a cognitive renaissance to human thinking

From CFACT

By “AI” I mean the amazing chatbots that emulate reading and reasoning. There is a lot more to AI but that is how the term is being used these days.

There are a couple of reasons why these powerful AI tools may greatly improve human thinking. Simply put, they can save a lot of search time and they find better stuff. This gives people more time to think and better information to think with.

Most jobs involve looking stuff up and many require a lot of this search work. Sometimes it is interesting but often searching is tedious, laborious or even frustrating. This is especially true when the stuff sought is hard to find.

AI often produces answers in seconds that would take humans many minutes or even hours to find. This frees up a lot of human time for doing what often comes after searching, which is thinking about what one has found. Searching is often just part of a cognitive production process.

Moreover, one often has limited time for searching, so one makes do with whatever turns up in that time. Only a small number of documents get looked at. AI looks at thousands of relevant documents, so it can often find much better answers in no time at all.

Today’s world is a world of thinking so spending a lot less time searching while also getting better information should make a huge difference. A city full of office buildings mostly produces thinking. Many people, perhaps most, think for a living.

I call it cognitive production. America is a huge cognitive production system. Calling it paperwork masks this fundamental fact.

We also do a lot of thinking, and a good bit of online searching, in our personal lives. Spending less time searching while also getting better information could make a big difference outside of work.

By way of scale, consider that if a hundred million Americans save an average of just one hour a week thanks to AI searching, that is around five billion hours saved a year. These huge time savings could generate a lot of additional thinking.
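The back-of-envelope arithmetic above can be checked in a few lines. Both inputs are the article's illustrative assumptions, not measured data:

```python
# Back-of-envelope check of the article's time-savings estimate.
# The user count and hours saved are illustrative assumptions from the text.
users = 100_000_000          # Americans assumed to benefit from AI search
hours_saved_per_week = 1     # assumed average saving per person
weeks_per_year = 52

total_hours = users * hours_saved_per_week * weeks_per_year
print(f"{total_hours:,} hours saved per year")
# prints: 5,200,000,000 hours saved per year
```

So the "around five billion hours" figure is really 5.2 billion under these assumptions, and it scales linearly with either input.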

Moreover the time savings could be a lot bigger than this, which is an interesting research question. How much time do people now spend searching online?

When you add in getting better information the potential benefits of AI get even bigger. Imagine being able to read thousands of relevant documents when you do a search instead of the few you can now read. This is just what the reading and reasoning chatbots do.

Estimating the benefits of all this better information is likely impossible. What is called the diffusion of knowledge is in fact a diffusion process, so it is impossible to track where knowledge goes and what it does when it gets there. But there might be indicators, which makes this a grand research challenge.

For example, there is something called the “crocodile effect” in scholarly publishing. Journal articles are appearing in search results at a rapidly increasing rate, but click-throughs to them are rapidly decreasing, which worries publishers. These two diverging trends are likened to a croc’s open mouth.

There is a good article “Responding to the Threat of Zero-Click Search and AI Summaries: How Do We Tame The Crocodile?” with a great graphic here. The term for using the AI summary is “zero-click search.”

The rapid divergence is attributed to AI search. The reading bot finds the relevant journal articles that conventional search would not find. But then the AI summaries make reading these time consuming articles unnecessary. So people are getting better information with far less effort. This is happening everywhere not just in scholarly publishing.

It is conceivable that this growing, revolutionary combination of better information and more time to think will lead Americans to a cognitive renaissance. It is certainly worth watching for.

Neil Lock
March 1, 2026 1:47 am

There is another aspect, which I’m a bit surprised no-one here has brought up – legal responsibility for the effects of AI failures, misinformation etc. It looks as if the legal framework is still in its infancy. This article (from a UK/EU perspective) discusses some of the issues: https://www.taylorwessing.com/en/insights-and-events/insights/2025/01/ai-liability-who-is-accountable-when-artificial-intelligence-malfunctions.

March 1, 2026 4:51 am

There’s a lot to unpack from this blog, but I’ll try to be brief.
The search “problem” stems from the fact that the majority of search engine users simply haven’t got the faintest idea about how to use those tools.
Yes, they are tools, and the user must learn and practice how to use them. Compare this to me with a set of brushes, some paint and a canvas. I will produce rubbish which only my mother will admire. Give the same tools to Rembrandt (pretend he is still alive) and he will produce something beautiful that everybody will admire.

It is very odd for these LLMs, because that is what is meant today by AI, to actually be called AI. LLM stands for Large Language Model. There is no universally agreed definition of intelligence, and the only somewhat accepted test for intelligence is recognizing a reflection in a mirror. AIs have no reflection in a mirror and thus cannot perform that test. So why call it Artificial Intelligence?
Also, the term artificial seems strange here. Artificial as opposed to what? Natural? How do we know that “traditional” intelligence is natural and not gifted by some superpower or a bunch of aliens? That would be truly artificial.

The hallucination problem, or lying, fantasizing, making things up or whatever you want to call it, is of course the death sentence for these LLM AIs. But it is IMHO not unexpected. These LLMs are probabilistic machines. Their output however is deterministic. It’s A or B. There is no Schroedinger’s cat here.
I would then also expect a fifty-fifty chance of the right or the wrong answer. And that’s what LLMs produce: 50% is hallucinated. A coin toss. This means that the user has to check the output of the LLM without using another AI, because that would exacerbate the problem. Never gonna happen, so basically useless.

In my life I have been witness to inventions, discoveries and other developments that were supposed to be revolutionary. Mankind would be overwhelmed with all the spare time because working would be so easy, and so on and so forth.
The opposite turned out to be true: with every “revolution” ordinary people needed to work harder and be more productive. This has been going on for the last 50 years. Remarkably, wages have been flat while productivity has been growing. Makes you wonder, doesn’t it?

Finally if you really want to know the nitty gritty of why humans are not very efficient thinkers read “Thinking, fast and slow” by Daniel Kahneman. You will learn more about the method of thinking than any AI can ever hope to understand.

Coach Springer
March 1, 2026 5:17 am

“This gives people more time to think and better information to think with.” In our experience, more time hasn’t made more thinking a reality. And think about what?

March 1, 2026 7:00 am

Building the control grid AND increasing the cognitive renaissance?
For the happy few that control the system.
Panopticon rules..

Bill Marsh
Editor
March 1, 2026 7:56 am

Therein lies the problem that first arose with the development of ‘AI’ calculators. People rapidly defaulted to accepting the answer provided without bothering to think whether it made sense. (We used to call it a ‘sanity check’ to make sure we hadn’t input the wrong data.)

Reply to  Bill Marsh
March 2, 2026 11:44 am

Prior to ‘inputting on a calculator’ – keeping track of where the decimal point goes when using a slide rule.

(My first handheld calculator, a Sinclair Scientific, in the early 70’s. RPN input. My co-workers couldn’t figure it out. RPN? Reverse Polish Notation.)

David Wojick
March 1, 2026 8:13 am

Here is a recent, long survey article on hallucinations:
https://dl.acm.org/doi/pdf/10.1145/3703155

Reply to  David Wojick
March 2, 2026 11:50 am

A quick quote from page 3:

” Furthermore, we comprehensively outline a variety of effective detection methods specifically devised for detecting hallucinations in LLMs, as well as an exhaustive overview of benchmarks related to LLM hallucinations, serving as appropriate testbeds to assess the extent of hallucinations generated by LLMs and the efficacy of detection methods. Beyond evaluation, significant efforts have been undertaken to mitigate hallucinations of LLMs.”

David Wojick
Reply to  Tombstone Gabby
March 2, 2026 3:56 pm

Yes, there is a lot of work on fixing this problem.

heme212
March 1, 2026 12:40 pm

just like TVs did.

David Wojick
March 2, 2026 10:38 am

I want to thank many people for very useful comments and new information. This is part of what makes WUWT great.

Michael Flynn
Reply to  David Wojick
March 2, 2026 5:39 pm

David, maybe this will help.

I asked ChatGPT “how do I know your answer is correct?”

Partial response –

That’s a very good question — and a fair one, especially given your past frustrations with errors.

Here’s the honest answer:

You don’t “know” just because I say so. You know an answer is correct when it satisfies one or more of these tests:

followed by a list of things, followed by –

Given your style of questioning, you’re already applying these tests instinctively. That’s why you catch inconsistencies.

Which is a bit sad – the “style of questioning” affects the accuracy of the answer, and instinctive application of “tests for truth” helps.

Even better, as Feynman said:

“It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.”

AI is an opinion summariser. All the opinion in the world (plus $5 cash) will be enough to buy a $5 cup of coffee.