AI could take your computer from search to research

From CFACT

The news is full of predictions of explosive growth in electricity use because of AI. I wondered what all this AI was going to be used for, but then it hit me. AI could take our personal computers from search to research. I realize this is futuristic, so please read it that way.

The basic idea is very simple. Right now, when you want to know about something, you start by doing a search. You get a bunch of snippets and links to likely documents. So you go to the best-looking ones and read. If your question is fairly broad, you might read quite a bit by way of piecing together an answer. Your computer does the search; then you do the research.

Suppose AI does the research and reports back to you with your answer? This seems perfectly possible; in fact, it is what these new AI engines seem to do best.

When you think about it, we do a lot of research.

Shopping likely tops the list. A query might be to find me the cheapest something within a specified distance: a specific car within 300 miles, say, or a special pizza within 5 miles. We do that a lot. Or the best price across all the online outlets, which would take me endless time to find myself.

Then there is shopping over time. Most stores have sales off and on. For example, Amazon prices for a given item can change a lot over time, including dipping briefly to what look like erroneous lows.

Your AI shopper could effortlessly spend what would be all your time spotting sales. You could even place low-ball buy orders, just as automated stock-trading software does today, but with much more complex instructions from you.
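The standing "buy order" idea can be sketched in a few lines. Everything below (the `BuyOrder` fields, `check_order`, the sample listings) is a hypothetical illustration of the logic, not any real shopping API:

```python
# Minimal sketch of a standing "low-ball buy order" for an AI shopper.
# All names and data here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class BuyOrder:
    item: str
    max_price: float           # only buy at or below this price
    max_distance_miles: float  # ignore sellers farther away than this

def check_order(order, listings):
    """Return the cheapest qualifying listing, or None if nothing qualifies."""
    candidates = [
        l for l in listings
        if l["item"] == order.item
        and l["price"] <= order.max_price
        and l["distance_miles"] <= order.max_distance_miles
    ]
    return min(candidates, key=lambda l: l["price"]) if candidates else None

order = BuyOrder(item="pizza oven", max_price=120.0, max_distance_miles=5.0)
listings = [
    {"item": "pizza oven", "price": 150.0, "distance_miles": 2.0},
    {"item": "pizza oven", "price": 110.0, "distance_miles": 4.5},
]
print(check_order(order, listings))  # the $110 listing within 5 miles
```

An AI shopper would wrap something like this in natural-language instructions and run it against live price feeds; the filtering and "cheapest qualifying match" step is the whole trick.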

Then, too, there is reading product or service reviews and comments. Online reviews are a wonderful addition to shopping, but they are time-consuming to read and digest. AI could do this, though maybe not today’s systems. The problem is that people disagree. Some like it, some do not, and everybody has reasons.

Which brings us to a key issue, namely: how does AI handle disagreement? Much of our daily research involves assessing conflicting opinions, but I have yet to see a discourse AI product that can do this well, or even at all.

Much of what we want to know about is controversial. Products, services, politics and policy, sports, diets and health, child rearing and education, news, science, law and regulation, etc. Today’s AI systems seem to take positions based on their training rather than analyzing the issues for us. This is a serious weakness.

Maybe issue analysis will be the next big thing in AI. It may take new programming, or new training, or both, because issues have a distinct structure. See my “The structure of complex issues” here: https://www.cfact.org/2020/11/17/the-structure-of-complex-issues/. AI could also track issues as they evolve over time.

Getting back to the basic point, the World Wide Web just turned 30 years old. It has made search and research a key part of daily life. The computer does the search, and we do the research on those search results. There are two kinds of research: fact-finding and issue analysis.

If AI can do some of that research, it could be a major change in our lives. However, that change will be electricity-intensive. Some projections of AI power usage are hard to believe, as they range up to a whopping 20% of American electricity consumption.

I have not looked at these AI growth models, so I have no idea how realistic they are. Likely they are way too fast. A technology is more than stuff; it is people using stuff, and that takes a long time to develop. But if AI can do some of our research, it will definitely grow big.

If AI can do our basic research, that would be extremely useful. It would free us up to do other things, like thinking and more advanced research, or even relaxing a bit. Stay tuned.

Corrigenda
September 27, 2024 2:13 am

AI should indeed be used to ‘test’ preparatory aspects of any such studies, but all such AI systems need to be properly certified to ensure that the flawed science and the data manipulation we see in scientific papers today have been eliminated.

David Wojick
Reply to  Corrigenda
September 27, 2024 2:59 am

Good idea if possible, but way beyond the simple stuff I am talking about.

Bryan A
Reply to  David Wojick
September 27, 2024 9:46 pm

AI could definitely be “Trained or Tuned” to return results based on the Party Line or Party Preferred data sources

Reply to  Corrigenda
September 27, 2024 4:02 am

Totally agree. In other words, the information must be factual, credible, and devoid of any “woke” opinion, bias or religion.

David Wojick
Reply to  George T
September 27, 2024 6:13 am

But for now, stick to questions relatively free of bias. Shopping?

Reply to  David Wojick
September 27, 2024 8:12 am

Will businesses be able to influence or manipulate AI?

Google was supposed to be a simple search facility, but today no self-respecting climate sceptic would go near it, as we know the algorithms are manipulated.

David Wojick
Reply to  HotScot
September 27, 2024 9:15 am

I use Google all the time and find it very useful. It now often takes me to the exact sentence I need. Amazing.

Dave Fair
Reply to  David Wojick
September 27, 2024 10:56 am

Use Duck Duck Go.

Reply to  David Wojick
September 27, 2024 3:26 pm

That wasn’t my question. And we know the Google algorithm is manipulated.

Fran
Reply to  Corrigenda
September 27, 2024 10:24 am

When assessing scientific literature, the conclusions of the authors are the least important part. You have to home in on the Methods and Results sections in anything important. Moreover, peripheral things that come up while you are trying to figure out the search terms often turn out to be relevant.

I cannot believe that AI can do any of this. Jordan Peterson has said AI is useful for finding citations (some being fictitious) but academic search engines do this too, with the added advantage that you can go straight to the actual paper.

There is an additional problem with AI. It will inevitably concentrate on “consensus”. In the first half of my career, I was on the “outside”, publishing in second-line journals and ignored by the “big boys”. Now, 12 years after retirement, I still get 1-2 citations every week.

September 27, 2024 2:20 am

AI is an automated process, which unfortunately is influenced by the data set it’s been trained with, as well as the mind-set of the person doing the programming.
It has already been demonstrated that current AIs from Big Tech are all tainted with left-leaning bias. I would never let one get my information for me.

David Wojick
Reply to  Eric Vieira
September 27, 2024 3:03 am

A lot of simple questions are not political. Those will do for a start. Then as I describe it, issue analysis might reduce bias.

The Dark Lord
Reply to  David Wojick
September 27, 2024 8:03 am

I would suggest in this day and age ESPECIALLY regarding AGW everything has become political … especially supposed “scientists” … and the reasons are as old as mankind … the desire for power, control or money …

David Wojick
Reply to  The Dark Lord
September 27, 2024 9:32 am

AGW is just a tiny piece of all the things people are thinking about. Lots of topics people do searches on have no political aspect. That is why I chose shopping as an example. I spend a lot of time finding the best price on dog food because it changes at every source all the time and I buy a lot.

It seems like the people on this list are too into AGW to see what I am saying.

Reply to  Eric Vieira
September 27, 2024 4:08 am

Google sent an email encouraging me to try out Gemini AI. I said “no thanks.” I essentially told Google I cannot “trust” Google or Gemini AI to provide reliable, credible or factual information. The Gemini AI roll-out proved to me that Google cannot be trusted to present historical information properly.

David Wojick
Reply to  George T
September 27, 2024 4:53 am

Interestingly, Google’s regular search already uses a lot of AI: suggesting related questions, for example.

Sparta Nova 4
Reply to  David Wojick
September 27, 2024 8:20 am

Items I once found using Google no longer appear in searches.

There was a UN environmental official in the mid-1970s who said (in the global cooling context) that it was unclear whether CO2 was at the heart of it, but CO2 was something that could be quantified and taxed.

I deleted my bookmark (bad mistake) and cannot dredge it up again.
Maybe I will try different search engines. Maybe not. Only so many hours in the day.

Reply to  Sparta Nova 4
September 27, 2024 9:12 am

Have you tried perplexity.ai?

David Wojick
Reply to  michel
September 27, 2024 9:24 am

A friend sent me a telling Perplexity story. He asked Perplexity how to get an unbiased answer, and it gave him a good-looking list of specifications he should input with his question. But when he then included those specs with a question, Perplexity said it could not do that. It had to decide what was right.

David Wojick
Reply to  Sparta Nova 4
September 27, 2024 9:20 am

Yes, as I discuss in the article, in highly controversial areas bias is a big problem because the systems take sides when they should do issue analysis. I pointed this out for Google searches on climate change several years ago.

Fran
Reply to  David Wojick
September 27, 2024 10:27 am

Even for shopping AI is subject to manipulation and bias.

David Wojick
Reply to  Fran
September 27, 2024 1:52 pm

Example?

c1ue
Reply to  Eric Vieira
September 27, 2024 12:11 pm

AI is not an automated process. Automated processes have predictable outputs given known inputs. AI is completely opaque on inputs and unreliable on outputs.

Reply to  Eric Vieira
September 27, 2024 4:08 pm

I have to admit, the search-engine AI is far better than the Google/Bing search engine results that distort, propagandize, re-prioritize, and suppress results.

strativarius
September 27, 2024 2:39 am

It would free us up …

We’ve heard this song before

Get your ticket to that wheel in space
While there’s time
The fix is in
You’ll be a witness to that game of chance in the sky
You know we’ve got to win
Here at home we’ll play in the city
Powered by the sun
Perfect weather for a streamlined world
There’ll be spandex jackets one for everyone

IGY – Donald Fagen

David Wojick
Reply to  strativarius
September 27, 2024 9:26 am

Well I no longer have to carry water from the well or walk to the store. Sometimes technology works.

Bryan A
Reply to  David Wojick
September 27, 2024 11:09 pm

Never trust any intelligence if you can’t see where it keeps its brain

strativarius
September 27, 2024 2:40 am

Modded again

Has anyone considered a better way?

David Wojick
Reply to  strativarius
September 27, 2024 3:05 am

Sorry I do not understand the question.

strativarius
Reply to  David Wojick
September 27, 2024 3:33 am

[Comment] Moderated… again

I hope that’s cleared that up.

September 27, 2024 3:38 am

I am being presented with AI results at the top of my searches. I tried reading them the first few times, but the superficial non-answers put me right off. It’s still at the toddler stage.

Tom Johnson
Reply to  kommando828
September 27, 2024 4:21 am

AI searches have been useless for me. I’m writing this from the Edge browser since the latest update disabled my previous browser and I haven’t reloaded it yet. I switched browsers solely because the AI searches in Edge were useless. For the most part they gave results that contained incorrect information on topics where I knew the correct answer. I therefore didn’t trust the ones where I didn’t know the answer. Searches previous to AI would give multiple pages of sites, and very often the ones I trusted were a number of pages past the first page. They’re no longer found with AI. To me, AI is a total waste of time. I particularly hate the fake slow typing out of the results, which also takes time.

Sparta Nova 4
Reply to  Tom Johnson
September 27, 2024 8:21 am

It is exceptionally good at pattern recognition and therefore is exceptionally good at censorship.

UK-Weather Lass
September 27, 2024 3:38 am

The simple PC has been around as long as AI has, and yet AI has never really got off the ground for most of us. Even AI triage for medical purposes is probably faster and better done by a trained professional than by computer, and we knew that a very long time ago.

I have mentioned instant subtitling before, which is an AI task and has been around for decades but still fails far too easily, because spoken language is really tough to apply logic to. There are far too many traps to fall into, and once trapped, the whole task collapses into farce very quickly.

If nothing important has recently changed in computing, then AI is bound to disappoint once more, in spite of all the hype and nudges that a proper recent discovery might receive. The hype is just that, and people need to think very carefully about why it has suddenly become a must-have plaything. Been there, done it, and been disappointed many times.

We urgently need a new logic-machine design, and that means new boards, new chips, new instruction sets, and new programming languages, plus an attempt to crack the problem of randomness. We humans just can’t do randomness, and neither can our machines, except in ways we don’t want (e.g. exploding EV batteries, the weather, life and accidents generally). Mother Nature holds the secret to randomness, but she isn’t going to make it easy for us to find.

A new age is what the industry should be working to achieve, if it has any sense, because by 2050 PCs will be distraught and disgusted with us, the data storage issue will be beyond manageable, and the energy usage issues will be too huge to contemplate.

strativarius
Reply to  UK-Weather Lass
September 27, 2024 3:52 am

“”AI has never really got off the ground for most of us.””

I would posit that for a great many companies and their customer service departments, AI has been a godsend. It’s become almost like telephone button-pressing showjumping: going round the fences and other obstacles until you reach… the less-than-helpful human.

Rather than help people these systems seem designed to provide a labyrinth of options and choices that actually deters them from even bothering. Somewhat like complaining to the BBC.

In this case it’s Artificial Irritation.

Reply to  UK-Weather Lass
September 27, 2024 4:34 am

Computers got faster, that certainly helps. I agree though, there is a lot of unfounded hype.

There being no randomness in the Universe, we won’t get any in our machines.

strativarius
Reply to  MyUsername
September 27, 2024 5:09 am

“”There being no randomness in the Universe””

You seem rather certain.

The Importance of Randomness in the Universe: Superdeterminism and Free Will
https://link.springer.com/article/10.1007/s10516-020-09490-y

Mr.
Reply to  MyUsername
September 27, 2024 1:28 pm

No “randomness”?

Not even in
“coupled, non-linear, chaotic systems”

Bryan A
Reply to  Mr.
September 27, 2024 11:11 pm

Nah, that’s just weather

Sparta Nova 4
Reply to  UK-Weather Lass
September 27, 2024 8:23 am

The quality aspect of most AI systems is a very good human language interface.
Behind that, it is still a weighted decision tree, with the ability to alter the weighting scheme.

Reply to  Sparta Nova 4
September 27, 2024 8:47 am

Yes, a neural network like our brain, and the question is how big can you make it, and how fast can you run it.

David Wojick
Reply to  MyUsername
September 27, 2024 9:36 am

Fortunately the brain is not a neural network when it comes to thought. Neural nets just do statistical pattern analysis.

Reply to  David Wojick
September 27, 2024 9:57 am

Where else would they come from?

David Wojick
Reply to  MyUsername
September 27, 2024 1:56 pm

There is a whole lot more to the brain than synapses firing. We have no idea how thought occurs but it is nothing like neural net computers as they are extremely simple. Plus what they are doing is up to our interpretation. Math has no meaning in itself.

Bryan A
Reply to  MyUsername
September 27, 2024 11:12 pm

No, the real question is who gets to tell it what is and isn’t truth.

Reply to  UK-Weather Lass
September 27, 2024 12:31 pm

Many years ago, there was a joke that NASA went to the Moon because the computers here on Earth had run out of data to analyze.

September 27, 2024 4:52 am

Not related, apologies but I saw this and thought of WUWT. Has anyone considered this situation?

[image attachment: IMG_3876]
Sparta Nova 4
Reply to  JohnC
September 27, 2024 8:24 am

Very good that you discovered this.

Reply to  JohnC
September 27, 2024 12:34 pm

Probably why no one should try to use an EV to outrun a hurricane.

Bryan A
Reply to  Retired_Engineer_Jim
September 27, 2024 11:15 pm

With recharge times and presuming EVERYONE had an EV, you’d need to begin evacuating a week or two in advance. And by then the track could have changed.

Reply to  Retired_Engineer_Jim
September 28, 2024 8:01 am

I’m in the U.K., so hurricanes aren’t a problem, but substantial rainfall is. The real problem is if one recharging point has a fault (they’re run from a 440-volt three-phase supply; I imagine each group of chargers runs on one of the phases, from what I recall of the system at my local supermarket): could it be sufficient to electrify the water? Also, the water being in contact with the chassis, is that a problem? We’ve had flooding on roads, but there haven’t been any reports of any EV fires.
Of course, if the situation in the picture does cause serious problems, then the electrical chargers could be used to create hydrogen!

Bryan A
Reply to  JohnC
September 27, 2024 11:13 pm

Oooooo Shocking!

dk_
September 27, 2024 7:12 am

Hey Google, what is a Confidence Game?

Bryan A
Reply to  dk_
September 27, 2024 11:16 pm

Google:
Confidence Game – see Con Game aka Climate Science

mleskovarsocalrrcom
September 27, 2024 7:37 am

AI has been in use since phone-tree responses. The sophistication has increased, along with the response time, but it’s still the same concept. How many times have you tried different paths on a phone tree, never got your answer, and just hung up in disgust? Try different AI engines with different data vaults and you don’t always get the same answer. AI is valuable but overhyped and trusted too much.

David Wojick
Reply to  mleskovarsocalrrcom
September 27, 2024 9:39 am

Agreed on overhyped, but no data on how much it is trusted. Note that biased sources are the rule with humans too. See my https://www.cfact.org/2024/03/16/ai-chat-bots-are-automated-wikipedias-warts-and-all/

John Johns
September 27, 2024 7:40 am

My friend from HS paid his way through college working as a disc jockey. He now runs 3 internet radio stations featuring Top 40 tunes from the ’50s to the ’70s. A recurring voice-over introduction to a tune is, “Reached No. 2 but, for whatever reason, never made it to No. 1.” I had a few ideas, but tracking as many as 100 tunes over 52 weeks made me think of AI as a research tool.

I asked ChatGPT the following:

What songs were never higher than #2 on the Billboard Top 100 in 1958

Response (partial year only)

In 1958, several songs peaked at #2 on the Billboard Top 100 but never reached #1.
Here are some notable tracks that stayed at #2 without reaching the top spot:
1. **”Great Balls of Fire”** – Jerry Lee Lewis
– Peaked at #2 behind “At the Hop” by Danny & the Juniors.
2. **”Peggy Sue”** – Buddy Holly
– Reached #2 but was blocked from the top spot.
3. **”Sweet Little Sixteen”** – Chuck Berry
– Peaked at #2, unable to surpass Elvis Presley’s “Don’t.”
4. **”Lollipop”** – The Chordettes
– Also peaked at #2, blocked by “Tequila” by The Champs.
5. **”Witch Doctor”** – David Seville
– Held the #2 position behind “All I Have to Do Is Dream” by The Everly Brothers.
6. **”Rockin’ Robin”** – Bobby Day
– Reached #2, unable to dethrone other hits during its time.
These songs were immensely popular but didn’t manage to reach #1 due to the strong
competition from other big hits during 1958.

This query is an area where there are no disagreements. An area where I have some familiarity (being of the Class of ’58). BTW. “Witch Doctor” was “Our Song”, chosen for some unknowable reason, by my HS steady. She was “Witch Doctor” while I was “All I Have to Do Is Dream”.

Given a little work on my part, I hope to put that trope to rest.

David Wojick
Reply to  John Johns
September 27, 2024 10:19 am

A fine example that would take a long time by hand: you have to check every #2 to see if it ever made #1. A physicist sent me the example of a math procedure he cannot use because it is too laborious, so he asked a chat bot and got the answer as fast as he could type.
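The by-hand check described here (scan every week, track each song’s best position, keep the ones whose best was #2) is easy to mechanize once the chart data is in hand. The chart data below is made up purely for illustration; only the procedure is the point:

```python
# Sketch of the "check every #2 against #1" procedure.
# The chart data is hypothetical, not actual Billboard history.

def never_number_one(weekly_charts):
    """weekly_charts: one dict per week, mapping song title -> chart position.
    Returns the songs whose best-ever position was exactly #2."""
    best = {}  # song -> best (lowest) position ever reached
    for week in weekly_charts:
        for song, pos in week.items():
            best[song] = min(pos, best.get(song, pos))
    return {song for song, pos in best.items() if pos == 2}

charts = [
    {"Song A": 1, "Song B": 2, "Song C": 3},
    {"Song B": 2, "Song A": 1, "Song C": 2},
    {"Song C": 1, "Song B": 2},
]
print(never_number_one(charts))  # {'Song B'}: always stuck at #2
```

This is exactly the tedium a research-capable AI would absorb: the logic is trivial, and the work is all in gathering and scanning 52 weeks of charts.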

The Dark Lord
September 27, 2024 8:08 am

Artificial Intelligence is just that … an artifice of intelligence …

ar·ti·fice
/ˈärdəfəs/

noun

  1. clever or cunning devices or expedients, especially as used to trick or deceive others.

which is correct …

It’s FAKE intelligence … always has been, always will be … pattern recognition is not intelligence …

David Wojick
Reply to  The Dark Lord
September 27, 2024 10:26 am

It is a common fallacy to think artificial intelligence tries to create intelligent systems. The idea is simply to emulate some aspect of human behavior and the chat bots emulate basic reading very well. Amazingly well in fact, hence the hype.

But as I say at the beginning of the article, the question remains what is this new ability good for? These grand growth projections are being made without first answering that question. If anyone else has tried to answer it I would love to see that.

Bryan A
Reply to  The Dark Lord
September 27, 2024 11:21 pm

And yet, pattern recognition is the basis of most Intelligence Quotient tests

Sparta Nova 4
September 27, 2024 8:17 am

If the preponderance of reports AI views are duplications and repetitions of other reports, much the same as news media (and social media), can you get a balanced assessment that ignores the preponderance of the evidence?

David Wojick
Reply to  Sparta Nova 4
September 27, 2024 10:40 am

Not sure I understand your question but creating a truly balanced assessment of a controversy is a very difficult design problem. Should every nutty idea be given equal weight with what is widely accepted? If not then how is “widely” measured?

Plus, given that a complex issue is a tree structure, how do you present it as the linear string of sentences we call text? The number of statements and counter-statements increases exponentially with the level of detail, so you cannot include them all. Whose side do you stop on?

If the branching rate is 3 then the tenth level has almost 60,000 statements, the tree as a whole about 100,000. Climate change is easily that big. But at least a bot can read them all.
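The arithmetic behind these figures is just geometric growth; a quick check of the numbers quoted above:

```python
# Size of an issue tree with a fixed branching rate:
# level n has b**n statements, and the whole tree through level n
# has (b**(n+1) - 1) // (b - 1) statements (geometric series sum).

def level_size(branching, level):
    return branching ** level

def tree_size(branching, depth):
    return (branching ** (depth + 1) - 1) // (branching - 1)

print(level_size(3, 10))  # 59049: the "almost 60,000" at the tenth level
print(tree_size(3, 10))   # 88573: the "about 100,000" for the whole tree
```

The exact totals (59,049 at level ten, 88,573 overall) match the comment’s round figures, and they show why no human reads an entire large issue tree while a bot could.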

Reply to  David Wojick
September 27, 2024 12:39 pm

If the folks who assembled the material from which the AI “learns” have their thumbs on the scale, then there is no hope for the AI to give a balanced response.

David Wojick
Reply to  Retired_Engineer_Jim
September 27, 2024 2:01 pm

Indeed, that is the core problem. NYT is rated true and Breitbart false. But they are not designing for balance. Someday someone will.

Reply to  Sparta Nova 4
September 27, 2024 12:56 pm

You can get a reasonable idea of the answer by going to perplexity.ai and asking it a question about any currently controversial topic. When you get the initial answer, argue with it. It’s best to argue on logical grounds, not simply make contrary assertions. Find some contradiction in what it’s saying. Or you can cite some evidence to the contrary of what it’s saying and ask how it reconciles that with its position.

What you see is generally that it starts off with a completely received wisdom account, but then gradually starts admitting the validity of at least some counter arguments.

On some other topics, however, it’s much more reliable. Ask it about the Riemann Hypothesis and the zeta function, and follow up with its suggested questions or ask your own, and you will get a pretty decent tutorial. Same if you ask about (e.g.) the fundamental theorem of calculus.

Ask it about technical recipes, how to write a shell script that does something, and it’s very good.

You still have to be wary, and it makes really laughable errors from time to time. Ask it about G. H. Hardy (the mathematician) and you are liable to get weird stuff about other Hardys without it seeming to realize they are different people.

This is completely different from the other kind of AI, which you can see at a sort of peak in the AlphaZero games. There is a book, Game Changer: AlphaZero’s Groundbreaking Chess Strategies and the Promise of AI, and some games are available online. The book has the benefit of grandmaster commentary. Some of AlphaZero’s games are absolutely breathtaking. People here have argued that all we are seeing is advanced technology looking magical. It doesn’t feel that way to me (but it wouldn’t!). But we can be sure of what it is not: it is not simply rehashing existing material, because that is not how it was trained, and anyway, chess is too computationally large for that to work.
It feels to me like something indistinguishable from very high-level thought.

Mr.
Reply to  michel
September 27, 2024 1:36 pm

I’ve tried those argument tactics on my wife.

All I usually get in response is “the look”.

Maybe I should try using an AI barrister instead?

September 27, 2024 11:16 am

If AI can do our basic research, that would be extremely useful. It would free us up to do other things, like thinking and more advanced research, or even relaxing a bit. Stay tuned.

If AI gets to the point where it can do a “mix two known chemicals in a flask and test for pH using litmus paper” test then I for one will be very impressed. At this time though, I wouldn’t trust it to be able to tell the difference between apple juice and piss, so I’m not having it round for breakfast.

Mr.
Reply to  PariahDog
September 27, 2024 1:38 pm

It also might answer –

“Sorry, I’m not at the lab today.
Working from home.”

c1ue
September 27, 2024 11:54 am

Anyone who thinks AI is the least bit reliable is gullible.
Read Ed Zitron’s many articles at wheresyoured.at
AI is clearly garbage.
Traditional computing only exhibited garbage when garbage was input, i.e. GIGO.
AI is GO: garbage out, at any time, for any reason.
Do you really want to rely on AI for basic research when it literally makes up reference cases in a lawsuit (see Avianca), creates new chess pieces when playing chess, and invents new code when programming? These are just a few of the examples that have come to light.
https://www.law.com/newyorklawjournal/2023/12/21/legal-fictions-and-chatgpt-hallucinations-mata-v-avianca-and-generative-ai-in-the-courts/?slreturn=20240927144842
https://x.com/sashayanshin/status/1834361567640494186
https://x.com/meTheKarthik/status/1834561878820786579
Then there’s the elephant in the room: AI is completely opaque. How do you know if the garbage results are due to the AI or due to the AI company’s machinations? Just like Google manipulates search to sell advertising, it is guaranteed that AI companies are going to manipulate their results for their own profit, especially since the 2 “leading” companies in AI are going to lose on the order of $7.5 billion (with a B) just this year.
The Big Tech companies are excited because AI is so fricking expensive that only they can possibly afford to train models, and so they see a grand new vista of monopoly unfolding but the actual capabilities are garbage.

c1ue
Reply to  c1ue
September 27, 2024 12:16 pm

At the RSA conference earlier this year, I was sitting in a sponsored lunch where the head of one of the consulting companies was talking about their product but segued into AI. He gushed about how an AI algo he “created” was able to summarize a 50 page research paper accurately.
I asked him, afterwards and privately, if he had actually read the entire 50-page paper. He said no. So I then asked how he knew the paper was summarized accurately. Hmm.
This even ignores that all research papers have executive summaries. Why do you need an AI to read and create a summary when the authors have already done so?
Ultimately, people are fooled because LLMs (which is what the so-called AI really is) are able to output grammatically correct sentences.
Or in other words, AI/LLMs are well-spoken morons. So I guess the good news is: news reporters’ and spokesmodels’ jobs can be automated.

David Wojick
Reply to  c1ue
September 27, 2024 2:06 pm

I hang out over in the research publishing community and the accurate summarizing of science papers is well established.

David Wojick
Reply to  David Wojick
September 27, 2024 2:27 pm

They are even trying to figure out how to use LLM AI in peer review:
https://scholarlykitchen.sspnet.org/2024/09/24/guest-post-is-ai-the-answer-to-peer-review-problems-or-the-problem-itself/

I used to blog for the Scholarly Kitchen (Society for Scholarly Publishing) until they kicked me out for being a right-winger.

c1ue
Reply to  David Wojick
September 28, 2024 4:00 am

This should fill you with dread.
I hang out in Silicon Valley – which is filled with failed nonsense hyped as real. See: self driving cars, the ride share model, meta artificial reality, Google Glasses, etc etc.

c1ue
Reply to  David Wojick
September 28, 2024 3:59 am

Over and above the executive summary? Which is always present? LOL

David Wojick
Reply to  c1ue
September 27, 2024 2:04 pm

The flaws are well known but I think minor when doing simple text search research, which is what I am talking about.

David Wojick
Reply to  David Wojick
September 27, 2024 2:33 pm

There is a huge amount of research into LLM AI. In 2024 alone there have been over 6,000 journal articles just on ChatGPT:
https://scholar.google.com/scholar?as_vis=1&q=allintitle:+chatgpt&hl=en&as_sdt=0,49&as_ylo=2024

Far more than any human can read, so maybe ChatGPT can tell us what is going on with it. That is an obvious use of these systems: reading what we cannot.

c1ue
Reply to  David Wojick
September 28, 2024 3:57 am

I am amazed that you would even dare to try and brandish volume of research as any form of validity, given the volume of research on the danger of CO2 and climate change.

c1ue
Reply to  David Wojick
September 28, 2024 4:24 am

The obvious use of these systems is to burn money.
Again, you have clearly not read any of Zitron’s work.
OpenAI and Anthropic – the 2 leading AI companies – will burn over $7.5 billion in cash just in 2024.
This is despite being subsidized in the order of 60%+ by their respective Big Tech cloud sponsors.
Now consider how expensive AI actually is to operate – this is not the paradigm of old where information stored on a web site is fractions of a penny to disseminate to each interested party.
Training AI models costs hundreds of millions to billions; operating the model costs orders of magnitude more than just shooting web site data across the internet.
Capital and operating costs must be recouped, at some point, along with a profit made.
The reality is that summarizing papers is a low value activity; Doctorow has noted this in his analysis of AI used to replace customer service etc etc.
Would you pay $600 a year to have this done for you? $1200? Because Microsoft is selling CoPilot “AI” subscriptions for $30 to $50 a month right now – and that’s certainly the “get you hooked” price as opposed to the one which returns MSFT margins to where they want them to be.
So even above the well proven fact that LLMs are garbage, there are very serious questions about whether this technology is even economic to operate.

c1ue
Reply to  David Wojick
September 28, 2024 3:56 am

You clearly have not read any of Ed Zitron’s articles.
Like how AI could not correctly say how many states have the letter ‘a’ in their names, or how many letter ‘r’s are present in the word “strawberry”.
The supposed accuracy is clearly mostly a function of plagiarism of existing web content as opposed to any actual understanding. There is extensive research showing that only topics that are “new” in the browser/web-site age (i.e. 1998 to 2018) are reasonably well handled. Before that, results are bad; after, worse, because the internet is increasingly being populated by AI garbage, which in turn feeds into AI training sets and gets propagated on and on, coupled with “newness” leading to relatively little analysis outside of a few hot areas (like AI).
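For contrast, the letter-counting task that tripped up the chatbots is a one-liner in ordinary code, which is what makes the cited failures telling:

```python
# Counting occurrences of a letter in a word: trivial for ordinary code,
# yet a known stumbling block for LLMs (the "strawberry" test).

def count_letter(word, letter):
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3
```

The point of the example is the asymmetry: a deterministic string scan always gets this right, while a statistical text predictor can confidently get it wrong.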

September 27, 2024 4:01 pm

“If AI can do our basic research, that would be extremely useful. It would free us up to do other things, like thinking and more advanced research, or even relaxing a bit.”

This explains why most people no longer think and our youth aren’t learning to think. Leave it to the governments and Tech giants for their psychological warfare against 80% of the current non-thinking population to explain this kind of conundrum.