Essay by Eric Worrall
Soros leading the pivot from the fake climate crisis to the fake AI crisis.
AI is at the root of the world’s ‘polycrisis’
Climate change and the war on Ukraine are also threatening democracy, according to billionaire and philanthropist George Soros.
George Soros
We are living in troubled times. Too much is happening too fast. People are confused. Columbia University economic historian Adam Tooze has, indeed, popularised a word for it. He calls it a “polycrisis”.
The polycrisis has many sources. In my opinion the main source of the polycrisis afflicting the world today is artificial intelligence. Climate change comes second, and the Russian invasion of Ukraine qualifies as the third. The list is much longer but I’ll focus on these three. That should help reduce the confusion.
Artificial Intelligence
…
Shortly thereafter, Geoffrey Hinton, who is generally considered the godfather of AI, resigned from Google so he could speak openly about the risks posed by the new technology. Reversing his previous position, he took a very dim view of AI. He said it could destroy our civilisation.
…
What Hinton said made a big impression on me. Indeed, AI reminded me of Goethe’s poem The Sorcerer’s Apprentice. The apprentice is studying magic but doesn’t fully understand what the master is teaching him. When the master orders him to sweep the floor, he applies the magic words to a broom. The broom obeys him, but the apprentice can’t stop the broom from fetching buckets of water to sweep the floor and the house gets flooded.
…
Read more (paywalled): https://www.afr.com/technology/ai-is-at-the-root-of-the-world-s-polycrisis-20230612-p5dfui
I predicted this pivot back in 2017. The political utility of the fake climate crisis is all but spent; climate crisis rhetoric these days mostly only works on lefties who already planned to vote socialist. But AI scare stories have bipartisan appeal. And we’ve already been well primed for the fake AI crisis, by science fiction / horror stories like “The Terminator”.
I’m a software developer with over 30 years’ experience, who has personally written AIs. I’m not a “father of AI”, but I know a thing or two. I disagree with the AI apocalypse narrative.
The Sorcerer’s Apprentice is a powerful metaphor, but in my opinion it is the wrong metaphor for the rise of AI. A better metaphor is the arms race between computer virus creators and anti-virus companies.
Every so often a virus writer manages to slip in some blows, but our computers are mostly safe. Install the anti-virus, keep the subscription and software patches up to date, and you can go about your daily business.
In a similar way, whatever capabilities AI gives to miscreants, or whatever aberrant capabilities AI develops on its own, it will be treated like yet another computer virus, to be countered by AIs primed to monitor and respond to threats.
Is it possible someone could leap ahead and develop unique capabilities, or that an AI could develop a novel form of attack? Of course it is – look how Microsoft’s ChatGPT integration caught Google flat-footed, an example Soros mentioned elsewhere in his article.
But who thinks this setback will be the ruin of Google, and all the other AI companies? By the end of this year the world will be awash with ChatGPT lookalikes, created by tech companies spending tens of billions, whatever it takes, to stay in the race.
AI will create tremendous changes to society, some marvellous and others which many of us will find deeply disturbing. On the positive side, there will be glorious advances in medical science. Advances like medical immortality and cures for currently intractable diseases are almost within reach.
But AI will also bring challenging disruptions. Among other things, I foresee a future AI version of today’s schoolkid “trans” conflict, with progressive politicians demanding kids should be allowed to augment their brains and bodies with AI implants without parental consent, to avoid the trauma of feeling inferior to their augmented classmates.
But we’ll get through all that and more. Many of us alive today may live to see an age of marvels – an age of fulfilment and joy which today’s world can only barely glimpse.
In the meantime, we need to stand firm against this pivot to AI scare stories, just as we stood against climate scare stories and Covid lockdown scares. Because at the base of the AI fear campaign will be the very same people who are currently at the base of the climate crisis movement: meddlesome fools who believe they know how to run our lives better than we do.
Biggest threat is not AI or Climate Crisis, but anyone named Soros. If the good die young, this POS is immortal
“On the positive side, there will be glorious advances in medical science. Advances like medical immortality”
Glorious?
I would expect Mikeyj’s favorite billionaire to buy medical immortality first. Thankfully my generation might not make it.
Talk about a moral dilemma: How does a universal government healthcare administrator decide who gets the “live forever” pill and who doesn’t.
Worse, I suspect this medical “immortality” will depend on an endless supply of spare parts.
Guess who gets to be the recipient and who gets to be the donor.
https://amblin.com/movie/the-island/
“I have been dead for billions of years and not suffered the slightest inconvenience” (Paraphrasing Twain)
At the rate these lunatics are promoting the end of the world by one means or another, death will be a welcome release from them.
Afterthought: A WUWT meet up with wine, wimmun and song in the afterlife. I’m looking forward to that one.
Make sure the invitation cards define “what is a wimmun” 😱
Not a man.
What if the evil ‘geniuses’ who work in the AI sphere are no more intelligent than the consensus climate dolts with asterisked PhDs who never made a successful prediction in 40-year careers, and yet keep doubling down on numbness?
They’ll give it to the LGBT+ folks first, women next, white men last.
There are no men or women, only fluidity.
I self identify as a handicapped, black, lesbian, woman
Wot, no “minority” religion as well?
(Labels can be confusing – I was once called a bigot, but what I heard was “big-gut”.
Consequently, I was not offended, as I should have been 🙁)
“I self identify as a handicapped, black, lesbian, woman”
You do make things up Mark… but I’ll accept “handicapped.”
My own personal troll. Aren’t you proud of yourself.
The only time Simone posts is to snipe at me.
Then again, it’s not like he’s ever had anything intelligent to say.
“The only time Simone posts is to snipe at me.”
Nope, as always you flatter yourself…. I post on other stuff. It is just you are such an easy target, because you make a lot of stuff up.
Just because you don’t want something to be true, doesn’t make it not true.
I guess it is possible that maybe 1 out of 1000 of your posts aren’t in response to something I said. However I am flattered that you go so far out of your way to follow my career.
Too bad you have no successes of your own.
“Too bad you have no successes of your own.”
But of course you don’t know that, so true to form….. just making it up.
I use the clues that you provide.
Not to mention your long history of being unable to use even simple logic, or your inability to comprehend any reality known on this planet.
wow, a four-fer, with that qualification, I can guarantee that you’d get any job you want in the state government of Woke-achusetts and maybe the Biden administration- now, be sure to add vet and become a five-fer- with that, you could get a top job in any agency
The guberment administrator gives the pill to themselves, and whichever politicians fund the guberment administrator’s pension. … obviously.
social credit score
And Klaus and Bill and Jeff – billionaires with self-serving agendas are the world’s greatest threat
What is the world’s greatest threat then?
We are persistently assured it’s humans, so let’s boil it down to some candidates.
The Evil One on the left is the scary face… but AI requires energy to operate and man can turn off the power, no? HAL did foil Dave in 2001: A Space Odyssey when it refused to open the entrance hatch… “I’m sorry, Dave. I’m afraid I can’t do that.”
That was HAL (IBM geddit?) 9000.
We are way, way beyond that: https://www.rankred.com/quantum-processors/
A bigger problem is a tax code that allows oligarchs to preach chaos in place of paying taxes. Better suck up to Dems to keep the wealth safe.
Computers were ALWAYS “artificial intelligence” so why the new moniker? Just to scare the masses, and to grab power and control by leftists.
The new moniker means it’s a race between replacement and retirement for over-40s. I sympathize a little with Oppenheimer. Someone was going to build it anyway?
Even by ‘artificial’ standards, computers are not intelligent.
I worked as a computer engineer for decades; they are very different.
There is a very, -very-, large gap between a conventional computer from before 2010 and a current generation generative AI, or the forthcoming real human++ AI.
Someone wrote that the danger of AI is when huge systems integrate in a hierarchy.
That has already started:
Palantir AIP | Defense and Military
https://www.youtube.com/watch?v=XEM5qz__HOU
This AIP, integrated with OpenAI and Bard, from billionaire Thiel, is ready to both generate and conduct military operations.
Palantir CEO Karp told WaPo back in February that AIP was already in use in Ukraine.
Obvious question – what if a Chatbot gave firing orders and the Pentagon only found out later?
This is likely why Musk and others call for a moratorium! There is a Congress bill to block AI and nukes.
AI ‘godfather’ Geoffrey Hinton warns of dangers as he quits Google
https://www.bbc.com/news/world-us-canada-65452940
Curiously, this soon after Palantir CEO Karp said AIP was already in the Military.
Stephen Cohen: The Path to Palantir [Entire Talk]
https://www.youtube.com/watch?v=g_6_kGDdjwI
Thiel’s companies Palantir, Anduril, Mithril, Valar, … are from Tolkien, which he memorized.
Palantir CEO Karp is even more odd.
As everything about COVID points to defense bio-labs, it is odd that people miss Palantir’s open praise of its Pentagon platform, ready to generate and operate in the war theater.
Unknown whether Soros knows more about this from fellow billionaire Thiel?
Just in: It Can’t be Done… or Can It? | HCA Healthcare at Palantir AIPCon
https://www.youtube.com/watch?v=_P59r–r2uM
Just expect Palantir AI to run the next pandemic response!
What’s more dangerous, nukes or viruses? It only takes one decision from one person with super-user status.
You mean Biden?
Now I can relax!
“You mean Biden?”
Totally artificial, and of ZERO intelligence.
At least ChatGPT can string a coherent sentence together.
Who the hell cares? If nuclear war happens we are all toast within a millisecond and won’t know the first thing about it.
The elite will be left with a few attendant serfs in a world contaminated by fallout. Unless they want to spend the rest of their miserable lives ruling over some kind of underground bunker cave dwellers eating dried food, they will never push the button.
Tactical is the worst we can expect (bad enough) but if one is ever deployed the world will go apeshit and the perpetrators hung from the nearest lamp post.
I’ll bring the rope.
There’s a screenplay in this post HotScot.
🤣
There have already been quite a few nukes detonated, two in war.
We’re still here.
You are a fool if you believe that. If you are in the right part of a large city, you may get fried. More likely you die of radiation sickness or the murderous anarchy which emerges when the only food source is other human beings.
Any survivors will envy the fried.
Sight-seeing the fabulous countryside of Nth America above the 60th parallel.
Went for a walk through a forest trail, checked with the Park Ranger about the currently posted encounters with grizzlies.
He said there’s a few about but still safer than a subway trip in New York City.
Ain’t that the truth ☹️
There is a huge difference between generating and conducting war based on algorithms developed from chess competitions, and real warfare with the attendant illogical and irrational decisions made by an opponent.
Then there’s blind luck.
AI: “Fire the superweapon howitzer!!!!!!”
Commander: “Eh….Sorry Mr. AI, the armourer is off sick today.”
AI: “Call up ze next one!!!!!”
Commander: “Eh…..Sorry Mr. AI, your efficiency of personnel means his replacement is 300 miles away”.
AI: “Ve haf no armourer for ze superweapon howitzer?”.
Commander: “No sir, Mr. AI”
AI: “Aw sheet, I need to lie down for a while”.
Now hold on a second – you did not use the preferred AI pronoun!
SNAFU, Situation Normal All F*cked Up, Sir!, just will not do!
Gen. Milley could get really hot under the collar!
Geez, an AI commander with a speech impediment!
They’re really pushing the diversity hires in that army.
When the first atom bomb was created, there was some concern that it could ignite the whole atmosphere, extinguishing most life on Earth. Fortunately this did not occur. The Castle Bravo test, however, demonstrated that we could not rule out unanticipated effects. Again, we were fortunate that the only effect was an increased yield.
AI, almost by definition, will have unanticipated effects. Like nuclear technology, that genie is not going back into the bottle, but you have to be a fool to not have some concern for risks.
That being said, F George Soros and his spawn.
Absolutely, concern is rational. Paralysing panic which leads to irrational decisions, like trading freedom for the illusion of security, not so much.
Well stated, Eric.
Witness COVID “vaccines”.
Tsar Bomba was scaled down to 30% – as Sakharov said, there is no limit.
With 6,000 now primed, annihilation is guaranteed.
Add into the MAD mix, Palantir AIP giving firing orders. This guy is not scared.
Pathetic doom mongering we have endured since the end of WW2.
Now politicians and media are just making sh1t up to add to it – climate change, AI, LGBTQ, WMDs, covid, Russia, Russia, Russia, China, China, China. We’re even back to aliens FFS.
People need to effing grow up instead of living in perpetual infancy.
They are laughing at us folks……..
“They” being the majority laughing at the exceptional Golden Billion.
Poor Blinken ain’t thinkin’ any more.
Sensible approach.
Bar Code,
Let’s have a link to that claim of a burning atmosphere. Was it ever more than minor pop science fiction?
The scientific work for the atom bomb was done by some of the best intellects that the USA could identify. There was no recorded fear among them of burning air that I have seen.
There was, understandably, emotional stress within these scientists because the purpose of the bombs was mass killing of others. Most of the scientists did not have prior military training to harden their minds. The calculations of bomb yields were a large part of the effort and involved adaptation of early, rudimentary IBM business computers to do large math calculations of physics.
The taskforce method of assembling the best remains available if needed to sort large problems with AI (however defined). Geoff S
My (limited) understanding of things nuclear is that fission of suitable materials can be triggered and controlled, but fusion of candidate materials is a whole other undertaking – triggerable, but not yet controllable to any extent to make the process usable.
?
Really? You’ve never heard of this discussion?
https://blogs.scientificamerican.com/cross-check/bethe-teller-trinity-and-the-end-of-earth/
“So Teller said, “Well, how about the air? There’s nitrogen in the air, and you can have a nuclear reaction in which two nitrogen nuclei collide and become oxygen plus carbon, and in this process you set free a lot of energy. Couldn’t that happen?” And that caused great excitement.
Oppenheimer [soon to be appointed head of Los Alamos Laboratory] got quite excited and said, “That’s a terrible possibility,” and he went to his superior, who was Arthur Compton, the director of the Chicago Laboratory, and told him that.”
Searching on “manhattan project burning atmosphere” produces several results.
He knows how to make money off other people’s scaremongering
ChatGPT is heavily influenced by the choice of training material, but the current version has a nasty tendency to make up “sources” that are hallucinations.
Eugene Volokh, a law professor, has prompted ChatGPT to devise sources that libel him, with citations of court cases that never existed, and published sources that originated within the program itself. Volokh speculates as to who is liable for perpetuation of a libel.
There has already been one case of a lawyer who used ChatGPT to write a brief, and was sanctioned by the judge for the created references.
ChatGPT has much the same problems as using Wikipedia to do research. Unless one knows how to check when it is lying to you, it is treacherous.
Indeed. Software developers are also at risk. ChatGPT can generate plausible looking code, but it’s frequently wildly defective. Even funnier when you ask it to fix the mistakes, and it introduces new mistakes.
Useful but treacherous.
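To illustrate the kind of defect I mean (a made-up example, not actual ChatGPT output): a chatbot will happily hand you a moving-average function that reads fine at a glance but quietly averages the wrong window.

```python
def moving_average_buggy(xs, k):
    """Plausible-looking chatbot output: the slice takes only k - 1 values,
    but the divisor is still k, so every average is silently wrong."""
    return [sum(xs[i:i + k - 1]) / k for i in range(len(xs) - k + 1)]


def moving_average_fixed(xs, k):
    """Corrected version: each window must contain exactly k values."""
    return [sum(xs[i:i + k]) / k for i in range(len(xs) - k + 1)]


data = [1, 2, 3, 4, 5]
print(moving_average_buggy(data, 3))  # wrong answers, yet no error raised
print(moving_average_fixed(data, 3))  # [2.0, 3.0, 4.0]
```

The buggy version runs without complaint, which is exactly why this failure mode is treacherous: nothing crashes, the numbers are just wrong.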
Sounds like some human software developers I’ve met over the years
It is a wonder we survived!
They put these guys to use where Ed Snowden told us all!
Whereas exhaustive pre-release testing of new software versions used to be more than adequately resourced and completed, nowadays it seems that the subscribers & users are the live test dummies.
Government “services” websites are notoriously dysfunctional, I suggest as a result of token pre-release testing.
The issue in my experience is government programmers have probably worked the same job since graduation. Changing jobs a few times really opens your eyes to what works and what doesn’t, and helps you develop subject depth and expertise.
Back in my day (mainframes), every maintenance or enhancement version had to have a field-tested beta version before general release.
Our guiderail was the Hippocratic oath –
“first do no harm”
Microsoft must have had coding AI for decades.
And Red Hat with systemd 🙁
“a nasty tendency to make up “sources” that are hallucinations.”
The Volokh case is very interesting. Search my other comment here about its attempt to explain ENSO to me. Nothing but well phrased, plausible horse-crap which was 180 degrees wrong.
“Among other things, I foresee a future AI version of today’s schoolkid “trans” conflict, with progressive politicians demanding kids should be allowed to augment their brains and bodies with AI implants without parental consent, to avoid the trauma of feeling inferior to their augmented classmates.”
Those schoolkids will later have to decide on how much genetic engineering will be applied to their own kids. Do they want kids who rediscover Maxwell’s Equations at age 1 and run 100 yards faster than Usain Bolt at age 2? If so, why did we ban Steroids from pro sports? Yet, what if a large, philosophically divergent foreign culture goes 100-percent genius-baby? Those schoolkids might have no choice.
One of my favourite movies, Gattaca, deals with this issue.
It is entirely possible that Usain Bolt’s 2-year-old child outran him, especially when he had something in his mouth he shouldn’t have. 😉
LOL. WUWT needs emotes, not plusses and minuses.
As far as science is concerned, AI can only gather up information that already exists and output majority opinions. Actual discovery/progress usually involves someone making entirely novel connections and then pursuing them with real experiments.
No, AI is capable of discovering novel solutions. It’s a real experience playing Stockfish chess AI, among other things I learned knights are far more versatile and dangerous than I realised. Such learnings have really impacted my personal style of play.
AIs are useful for finding patterns.
It still takes a human to determine if any patterns found are real or spurious.
Back in 1971, I had a biology professor who did research in computer generated social patterns. He was finding all sorts of correlations between unrelated social phenomena. He was impressed with his own cleverness and with the profound truths his computer was uncovering. To me his computer-generated correlations looked like coincidence or like nonsense. For some strange reason I never heard that he was awarded the Nobel Prize.
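That professor’s “profound truths” are easy to manufacture. A minimal sketch of why (all numbers invented for illustration): compare enough completely unrelated random series and some pairs are virtually guaranteed to correlate strongly by pure chance.

```python
import random
from itertools import combinations


def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5


random.seed(42)
# 50 completely unrelated random "social indicators", 20 observations each.
series = [[random.random() for _ in range(20)] for _ in range(50)]

# Check every pair: 50 * 49 / 2 = 1225 comparisons.
best = max(abs(pearson(a, b)) for a, b in combinations(series, 2))
print(f"strongest 'relationship' found in pure noise: r = {best:.2f}")
```

With 1,225 pairwise comparisons, a “strong” correlation in pure noise is the expected outcome, not a discovery.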
That was the basis for the movie The Social Network.
Brains are useful for finding patterns.
It takes a human to decide whether a pattern is interesting…to them.
Most humans don’t understand the difference between a correlation pattern and a causation pattern, though. Especially AGW alarmists.
Not AI. Did the “Chess AI” suddenly decide to start playing checkers? Or Parcheesi? It’s not “intelligent” if it only does what it’s told to do.
The master chess player Kasparov set himself up as a military expert.
Talk about the Peter Principle – promoted to his level of incompetence.
Now promote AI to its level of incompetence and we get Peter Principle II.
ChatGPT wasn’t specifically trained to play chess in the same way Stockfish was, but it can play.
Artificial intelligence is a misnomer. It’s equivalent to climate models in that it can only produce solutions from data it is given and what to look for. It’s nothing more than a search engine (not to degrade a good utility). The danger in AI is who controls the data which in turn controls the solution.
AI is capable of discovering and testing novel solutions, if it is set up to do so.
That’s a very bold claim, Eric, since you haven’t qualified it with any limits. Are you suggesting that AI is capable of discovering and testing a novel solution to the problem of, say, humans being unable to travel faster than the speed of light, for example?
Unlikely; right now AIs are mostly inferior to humans. In the future, who knows. But given FTL = Time Machine, it may never be discovered, because timelines which discover FTL inevitably cancel themselves by disrupting the sequence of events which led to the discovery of FTL.
AI training creates connections between facts that may not have been considered by humans yet. Multiple seemingly unrelated papers might actually combine to imply something novel and AI knows it and all it takes is the right series of questions to produce it.
Modern neural networks are definitely not search engines. The original facts are no longer present. They’re closely related to how our brains store “facts”. Facts in context.
In the near future, I’d expect the AI to feedback on itself which (IMO) is going to be a form of thinking. It may be doing that now in, as yet, unreleased models. When that happens, the “questions” will be more like a two way conversation.
Artificial intelligence is a misnomer.
Completely agree. Simulated Intelligence would be an accurate term. Current AI enthusiasts seem totally innocent of the history of psychologists trying to pin down a general intelligence factor “g.” This effort devolved into factor analysis trying to nail down the components of intelligence. Again, the AI enthusiasts seem innocent of this history.
There’s only one big problem the world faces – billionaires with warped Bond villain agendas
“The whole aim of practical politics is to keep the populace alarmed (and hence clamorous to be led to safety) by menacing it with an endless series of hobgoblins, all of them imaginary.” –H.L. Mencken
Soros revealed.
He does look like a hob-goblin.
AGW revealed.
What a mess. Yet another hobgoblin. But the worst of hobgoblins are ones created to target the useful idiots while at the same time fostering their own credibility. The harsh reality? is that most people don’t think, they follow. They are easily herded. And that herd can trample us all.
A bigger threat
“”George Soros hands control of multibillion-dollar foundation to son””
https://www.theguardian.com/business/2023/jun/11/alexander-soros-given-control-of-father-george-soros-multi-billion-foundation
Yes, let’s check Palantir AI and Pentagon money trails.
Soros himself managed the British Royals Quantum Fund at least until the LTCM crash bailout.
Another money trail – King Charles III ?
Yeah, Alexander Soros says he would like to take money out of politics, but as long as the “other side” is spending money, he will have to spend money to counter them.
Who is this “other side”? Are there any rightwing billionaires spending big bucks on electing local district attorneys the way the Soros billionaires are doing?
You know all those lawless Democrat cities like New York City and Chicago and Los Angeles and San Francisco? Well, the reason they are lawless is Soros spent enough money there to elect soft-on-crime district attorneys who turn criminals loose as fast as they are arrested, so crime is rampant in the places this is done. You can thank George Soros and his money for ruining your communities.
Don’t forget Philadelphia.
One of the dot com billionaires was spending big bucks on get out the vote efforts in 2020, but only in highly Democrat districts.
AI is promoted today as “new” technology. In fact, AI has a 75+ year history of making fantastic claims about how it is going to transform society and replace humans. It’s a lot of zeros and ones. Based on what George Soros says, it’s 1-0 to history – and to the question about confusion.
The AI industry is not built on sturdy science, but on a metaphor: that “the human brain is like a computer”. Ebbinghaus’s forgetting curve from the 1880s speaks for itself. So, 1 to cognitive learning and 0 to the metaphor.
The reality is, as Robert Solow has said: “Technology as a source of very rich people who have silly ideas about society, it can be a threat”. Again, 1 to reality, 0 to George Soros.
The reality, unfortunately, is the “missing link” headline for all of those crises – the most simple headline, the one many won’t accept: “Code red – science is about power, not the truth”.
As a guide to people who do not understand how computers work, I would equate Artificial Intelligence with the formulae I could not carry around in my head and had to write down so it could be used when needed. AI has a whole lot of decades worth of data, indexed and ready to amaze you. Real intelligence was reserved for the formulae I truly understood and could easily remember the instant a suitable problem arose from my understanding of it and not from any other memory however young or old.
Computers are dumb, but humans writing about AI taking us over are even dumber. And just to be clear, the technology required to do some of the suggested things is a very long way away – so why hasn’t AI solved that already, ten times over, getting better every time…
Soros should be ashamed of himself if he has been paid for this rubbish.
What’s the best way to get started with ChatGPT? Not for serious reasons- just to check it out and maybe play around with it. I think some of the graphics at the top of essays here are done with it?? Is there a safe web site? I did find one recently but to use it I’d have to sign up and I didn’t really want to give my personal info to some potential monster. 🙂
Since I like the graphics so much, that’s really what I’m interested in. Also of course, asking it difficult climate-related questions and maybe questions on the topic of forestry, my profession, just to see if it’s already infected with typical forestry propaganda. Any suggestions about where to go to get a first exposure to ChatGPT will be appreciated.
https://graziamagazine.com/me/articles/love-letter-chatgpt/
That’s where the kids go.
P R E D I C T I O N S !!
who thinks this setback will be the ruin of Google
One could hope…
Personally, I DO think there are risks now that it’s gotten as advanced as it is, but I don’t think there is any way to quantify what those risks are – the unknown unknowns. The unintended consequences.
But whatever risks there may be, I don’t think it’s a Skynet or Matrix type of scenario. More of a societal disruption, which is pretty normal in general with advancing technology.
Even if the only signs in the Chatbot language are 1s and 0s, as of now it seems that they are being used to duplicate mostly English and what other languages? Are there French, Russian and Mandarin Chatbots? If we need to hide information from AI can we speak to each other in seldom used languages like Gaelic, Athabascan or Inupiat?
The giant AI systems are likely speaking to EACH OTHER in Python, so fast, under the radar.
Navajo would be a good choice.
How about Lakota?
Wind talkers
Yeah, I knew about them. Lakota seemed more apt for Lt. Colonel Custer, though 🙂
Well, I certainly respect Eric as a poster, and his AI qualifications are light years ahead of mine. However, I did have a research experience with AI, doing an effort merging processed satellite imagery, magnetics, gravity, and X-band radar data from the Space Shuttle. The application of AI was controlled by the training examples, or the instructions of where and what data to access to construct an outcome. I see ChatGPT being trained to focus on WOKE and Trendy databases, and therefore constructing output for the left/liberal side. By the way, I could choose better training pixels, for Supervised Classification of imagery, than any AI product then available, and I bet I still can.
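For readers who haven’t met it, the supervised classification mentioned above can be sketched as nearest-centroid labelling in spectral space: analyst-chosen training pixels define a mean signature per class, and each remaining pixel is assigned to the closest class. Band values and class names here are made up for illustration.

```python
def classify(pixels, training):
    """Nearest-centroid supervised classification.

    training: {class_name: [pixel, ...]} where each pixel is a tuple
    of spectral band values hand-picked by the analyst.
    """
    # Mean spectral signature per class, from the training pixels.
    centroids = {
        name: tuple(sum(band) / len(band) for band in zip(*samples))
        for name, samples in training.items()
    }

    def nearest(px):
        # Assign the class whose centroid is closest in spectral space.
        return min(
            centroids,
            key=lambda c: sum((a - b) ** 2 for a, b in zip(px, centroids[c])),
        )

    return [nearest(px) for px in pixels]


# Two classes with made-up 3-band signatures (e.g. red, green, NIR).
training = {
    "water":      [(10, 20, 5), (12, 22, 6)],
    "vegetation": [(30, 80, 90), (28, 76, 95)],
}
print(classify([(11, 21, 5), (29, 79, 92)], training))  # ['water', 'vegetation']
```

Real tools add per-class covariance and many more bands, but the analyst’s choice of training pixels dominates the result either way, which is the point made above.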
Have a look at AI imagery targeting fat Leopard 2s and Bradleys on the Steppe right now. And missile targeting. Add to that GPS jamming with coordinate replacement.
So ask, what is Palantir’s AI performance in battle?
There is a mad rush to quantum navigation – too late.
Agree.
You only have to look at responses from ChatGPT to see that it sources its info from “approved” publications & databases.
Only when challenged to correct its response does it admit to alternative sources.
(but then adds qualifications).
There was a group that was training a ChatGPT bot using conservative web sites. The group behind ChatGPT pulled their license for the sin of promoting misinformation.
The biggest threat to humanity is not artificial intelligence but natural idiocy.
Artificial Intelligence = Automated Idiocy
I guess there’s going to be more money in AI than climate change….
The DoD 2023 budget of $850 BILLION covers the Pentagon Palantir AIP AI platform with change.
Palantir, Pentagon AI is fully integrated with Bard and OpenAI. After all, where did Google actually spring from?
I’ve had fun playing around with ChatGPT, just asking it questions to see what it would answer. But I don’t trust its accuracy. I asked it to calculate my federal income tax for this year and it came up with a number very close to my own estimate. Then I asked it to calculate the property tax on my house, which was recently re-assessed by the Cook County, Illinois assessor. I gave it all the relevant numbers: the tax rates for my local taxing districts, my exemptions, and my new assessment. The estimated property tax bill it came up with was so far off it wasn’t even in the ball park. A person would immediately recognize that was an incorrect estimate. Like I said, you can’t quite trust it yet.
Trust that all that personal finance info did not turn up on an IRS desk?
“My house was recently re-assessed by the county assessor.”
ChatGPT was trained on data up to late 2021. If the calculation relies on something after that then it can’t succeed without help.
I think the biggest problem with AI is the unemployment it will cause. Society must have a means of distributing wealth throughout the spectrum. Employment, jobs and work have done this so far. England is about to trial guaranteed basic income. The problem with this approach is that it disincentivizes education, which will ultimately leave society even more vulnerable to mad panic scams like climate change and AI. Also the fruits of democracy are entirely dependent on the education or otherwise of the voters. This problem is already apparent. There is an elegant solution but that’s another story.
I used to think that; now I’m not so sure. My uncle, sadly now passed, grew up as a peasant in pre-WW2 Slovakia and noticed that most jobs today are invented. A rural village with near-medieval technology has no need for diversity consultants, or for any of the hundreds of make-work jobs in today’s society. He saw no reason that wouldn’t continue in the AI age.
You might like this, from a finance consultant from near there: Alex Krainer’s TrendCompass.
“Property rights: the reality vs. the ideology of it. Our idol worship is condemning future generations to indentured servitude. We must get past ideology and embrace clear thinking.”
https://alexkrainer.substack.com/p/property-rights-the-reality-vs-the
Speaking of invented jobs, wasn’t there a class action a while ago by some James Cook Uni students who found that the employment opportunities spruiked by JCU in the course prospectus were, well, bullshit?
A settlement was offered, IIRC.
Yes, but “Jobs In Name Only” (“JINOs”), to coin a phrase, contribute nothing useful to society (quite the reverse, if anything), and are therefore not “jobs” in the true, productive sense but effectively nothing more than an additional tax on the productive.
At one time, 99% of the population worked on farms.
Today something like 5% do.
Far from being poorer, we are all much wealthier.
Increases in productivity result in things getting cheaper and as a result people don’t have to work as much in order to maintain their desired standard of living.
The absolute worst thing that can happen is government stepping in to solve a problem that doesn’t exist.
The problem with a guaranteed income is not that people won’t want to get educated; it’s that there is absolutely no reason why anyone should go to work.
The main driving force of education is good employment. The industrial and technological revolutions, of which AI is an extension, mean that much more is done with much less labour, which is all good, except that to share the benefit equitably, new mechanisms must emerge. These new ways must lift humanity, not debase it.
Careful going down that road.
There is nothing “equitable” about handing people money for doing nothing.
Nothing debases humans faster than stripping their lives of any and all purpose.
There is still good employment out there, for those who want it.
As for those who want to sit on their butts and be rewarded for managing to breathe successfully, they can starve as far as I’m concerned.
As for equitable, as long as you have a free market, people will be paid based on what their labor is worth to their employer. Nothing is more equitable than that.
“The problem with the guaranteed income is not that people won’t want to get educated, it’s that there is absolutely no reason why anyone should go to work.”
You missed the word ‘basic’. Most people would not be satisfied with a ‘basic’ income, which would not allow them to purchase the luxuries of life.
However, some people would be satisfied with a basic income, especially people who enjoyed artistic painting, or practicing Buddhist meditations, and so on.
Those who chose to work for the additional money, would more likely enjoy their job, unlike so many people in France who were furious because the government increased the pension age by a mere couple of years.
Your fallacy is forgetting that only those who work are taxed.
Under any basic income plan, there will be enough people who don’t feel like working that the tax rates will be astronomical. Why bother working when the government takes 90% of your income in order to pay people who don’t want to work?
The other problem is that “basic” never stays basic.
The leeches outnumber the productive; they then whine to the politicians that it is unfair that they don’t have as much stuff as the people who work for a living. The politicians, as always, respond to the largest voting blocs, upping the “basic” income, as well as the taxes to pay for it.
The whole concept of a guaranteed income is one of the worst ideas communists ever came up with.
Finland trialled a basic income scheme for two years with 2,000 participants. While it did help those participants, the vast majority were still unable to find work, and the scheme was stopped.
My only “concern” about “AI” is that it makes it so much easier for propagandists to get their messages out to the public, in a conversational and convincing way.
And too many will believe the horse shit that AI “bots” trained on “approved” databases will spew, as if it is unbiased and authoritative, when it is anything but.
A classic recent headline: “Artificial Intelligence Cites Artificial Case Law.”
There is a law firm in serious hot water over that fiasco.
Never mind existential threat, AI is an existential opportunity. AI is perfect as a replacement for “climate change” as a focus of hysteria. Fight against the doomsayers.
https://maxmore.substack.com/p/against-ai-doomerism-for-ai-progress
There is a simple demonstration that “AI” has gone down the wrong track. Show kids a horse, one horse, and, of course, tell them it’s a horse. Then show them a donkey, and they’ll ask “Is that a small horse?” Then show them a zebra and they’ll ask “Is that a striped horse?”
People do not need to be trained with millions of examples.
Actually, we do: a human who never saw a zebra before might ask the same question. We receive the millions of examples via interaction with parents and teachers.
Where humans have the edge, and it’s a big one, is that we have instincts honed by a billion years of evolution. AIs do not have these instincts, which makes them act really dumb in ways humans would never consider.
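The child’s “is that a striped horse?” guess is, in machine-learning terms, one-shot generalisation by similarity. A toy sketch, with entirely invented feature vectors and labels, shows the mechanics: a nearest-neighbour classifier that has seen only one horse will label the donkey and the zebra as horse-like, just as the child does.

```python
# Toy one-shot classifier: animals as hand-crafted feature vectors.
# All features and values are invented for illustration.
# Feature order: (height_m, has_stripes, ear_length_cm)

known = {
    "horse": (1.6, 0.0, 16.0),
    "dog":   (0.5, 0.0, 10.0),
}

def nearest_label(features, examples):
    """Return the known label whose feature vector is closest (Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(examples, key=lambda name: dist(features, examples[name]))

donkey = (1.1, 0.0, 25.0)  # smaller, long ears
zebra  = (1.4, 1.0, 17.0)  # striped

# Both are nearer to "horse" than to "dog", echoing the child's
# "small horse?" and "striped horse?" questions.
print(nearest_label(donkey, known))  # horse
print(nearest_label(zebra, known))   # horse
```

The hard part, of course, is not the distance calculation but the features themselves; humans get useful ones from instinct and experience, while deep networks must learn them from those millions of examples.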
Here I disagree with you. Every tool is two-edged. Call them AI or call them something else, these types of software put great power into the hands of the surveillance state, to say nothing of the corporate overlords. While they may have all sorts of uses in many areas, they will be used against the population at large, and most especially against individuals who don’t accept the reigning groupthink. There is no need for the programs to have a mind of their own.
AI is already ubiquitous, your vehicle satnav contains an AI, as does your smartphone, with predictive text and all its other features.
You are absolutely right, the surveillance state will be enhanced by AI. But mostly they will be distracted by other surveillance states; 99% of their energy will go to preventing foreign interference. As for the remaining 1%, people are smart; we will figure out ways to deflect government intrusions into our lives.
No doubt just like the majority of China’s peoples are doing.
The Chinese Great Firewall has turned out to be quite porous.
Far more dangerous than AI are (in no particular order):
And because they are dumb, in the true sense of the word, they do not know when they are making a mistake. This could be truly disastrous when used in our military systems. One doesn’t need to paint the horrendous scenarios that may occur when dumb Defence personnel come to rely on dumb computers to run their responses to perceived threats for them.
A couple of days ago I decided to test ChatGPT. I expected it to have been trained on the mainstream “consensus” pseudo-science and all the media blurb flowing from it, which it would encounter on the internet. I was curious as to how it would respond to challenges. Maybe it would point out flaws in some of my ideas, or bouncing them back and forth might strengthen my arguments.
a) I first asked if ENSO could have an effect on long-term climate.
It said yes, but then produced a paragraph of the usual waffle about water “sloshing”, which did not have any bearing on whether there was a long-term effect or not.
b) I asked how La Niña affected ocean heat content (OHC).
It said colder SST caused more heat loss to the atmosphere and thus reduced OHC. So I asked it to explain how cooler SST caused more heat loss, not less. It apologised for the “confusion” and said that was wrong; it was in fact due to increased winds and increased upwelling of deep ocean waters mixing with surface waters, thus increasing OHC.
I pointed out that deeper water was colder, and that OHC could not be increased by simple mixing, only by heat exchange with another medium.
It was again sorry for the “confusion” ….
After three stupid mistakes, which were categorically wrong, not even controversial, I asked how I could have any confidence in the responses it was giving me.
It again apologised, pointing out that it was a language model and not “infallible”.
What is most scary about all this is not the capabilities of AI, but the fact that people are prepared to believe that anything produced by a computer is objective and beyond question.
Ask AI about climate science and it will explain to you it has six fingers on each hand.
‘glorious advances in medical science’???
Ha ha ha!
I agree with the writer’s assessment of AI.
I am also a software developer, with over 30 years’ experience. I haven’t been contracted to write AI software, but I’ve researched it enough. Never mind the question of what thinking or consciousness is. Scary AI stories are based on the idea that the machine could develop an ulterior agenda, a self-interest of its own. Anyone believing that doesn’t understand that a machine will always be just a machine. The malevolent entity is the human being programming the machine.
But there is a legitimate concern for the appropriate use of computers and software. Do you want a computer deciding your case in court? Do you want a computer deciding who will be your elected official? Do you want a computer driving your car or piloting your airplane? I think not.
The scare about AI will lead to government oversight of computer programming. Software engineers will eventually have government agencies looking over their shoulders.
Computers have been flying planes for decades.
The newest ones can both take off and land planes.
There is some potential for AI to propose medical diagnoses from diagnostic (lab, imaging, etc.) results after being trained on millions of patient charts. The weakness in this scheme is a part of the chart known as the “problem list”, which lists current and past diagnosed problems without metadata (currently under treatment? onset and duration of problem? etc.). Problem lists are also incomplete — I’m 70, but it could be relevant that I had mononucleosis as an 18-year-old.
This scheme would require a universal patient identifier, something many civil libertarians fear. AI complicates this further — if someone suffered childhood sexual abuse, might not an AI disqualify them from employment that involves access to children?
The danger in AI depends on what actions we allow it to take in real time. I’m not opposed to AI synchronizing traffic lights or managing air traffic control. An AI that makes value judgements (like a Chinese-style social credit scheme), however, is scary.
One difference I’ve noticed between veteran programmers and those fresh out of school is that when interviewing the client, the veterans have learned how to ask questions in order to find out what it is that the client really wants.
I certainly agree that the climate crisis is a scheme to give the elite more control over the rest of us. But I think AI really has the potential to get away from us. The things computers can do, they do much better than we do, e.g. doing time-consuming calculations in a fraction of a second, or playing chess. And certainly there is a good chance AI will bring great benefits, like finding cures for incurable diseases. Some folks have referred to AI as having godlike intelligence. Would we be able to control a god? I doubt it.
There will never be medical immortality, unless it means something very limited and nothing at all to do with immortality.
How many people in our Western societies are paid to be confident even if clueless and often wrong?
AI can express confidence (and be far cheaper).
Talking heads will be needed for TV… for now.