Essay by Eric Worrall
The real reason Silicon Valley quiet quit on the Democrats, and quietly backed President Trump’s America First energy policies.
Let’s dive straight in. What you are seeing below is (hopefully) a real neural network running in your web browser. The neural network wants to learn how to drive.
An explanation. Training of this kind usually starts by generating a population of random neural networks.
As you can imagine, a randomly generated neural network is unlikely to be a good solution to the problem. But by chance, some of the randomly generated networks will be slightly less awful than their siblings.
Those slightly less awful neural networks are used as a guide to creating the next generation of neural networks. The actual method of extracting the goodness from the previous generation differs, but it can involve backpropagation (gradient-based mathematical optimisation which nudges a network towards expected results), genetic algorithms (“breeding” the best networks like you would breed cattle, in the hope the child networks will inherit the best from both parents), and mutation – photocopying a parent neural network but randomly changing a few parameters to see if those random changes result in a better neural network.
Eventually if you continue long enough, some real skill should begin to emerge.
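For the technically curious, here is roughly what that loop looks like in code. This is a minimal Python sketch of the idea only; the network representation, the fitness function and all the parameter values are made-up stand-ins, not the code behind the demo.

```python
import random

POP_SIZE = 10        # networks per generation, matching the demo's population
MUTATION_RATE = 0.1  # chance that any given weight gets randomly nudged

def random_network():
    # A "network" here is just a flat list of weights; the real demo
    # would carry a wiring topology as well.
    return [random.uniform(-1.0, 1.0) for _ in range(20)]

def fitness(net):
    # Made-up stand-in for "how far did the car drive before crashing?"
    return -sum((w - 0.5) ** 2 for w in net)

def crossover(mum, dad):
    # "Breed" two parents: each weight is inherited from one or the other.
    return [random.choice(pair) for pair in zip(mum, dad)]

def mutate(net):
    # Photocopy a parent, but randomly change a few parameters.
    return [w + random.gauss(0, 0.3) if random.random() < MUTATION_RATE else w
            for w in net]

population = [random_network() for _ in range(POP_SIZE)]
for generation in range(100):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:4]   # the "slightly less awful" networks survive
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE - len(parents))
    ]

print("best fitness:", max(fitness(net) for net in population))
```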
Evaluating the performance of neural networks is itself a hot topic of research. The following article discusses some of the implications of different approaches to evaluating performance.
Why do neural networks require so much power?
The reason is that the software artificial neural networks run on is trash. AI researchers are nowhere near unravelling all the shortcuts and hacks our human brains use to discover solutions.
But this lack of quality can be compensated for with quantity – acres of computers burning hundreds of megawatts of power to do what our human brains can do with less power than it takes to run your TV.
If you press “high powered computer” on the demo above, you can see a glimpse of how lots of computing power can help. Instead of testing one neural net at a time, in “high powered computer” mode, the neural network above evaluates the driving ability of all 10 neural networks in a single generation simultaneously.
In a similar way, evaluating millions of neural networks containing millions of neurones simultaneously on gigantic data centre computers can compensate for the poor quality of the primitive software which drives the individual neural networks.
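A sketch of the difference, reusing the same made-up fitness function as the sketch above: “high powered computer” mode is essentially a parallel map over the generation, one worker per network.

```python
from multiprocessing import Pool

def fitness(net):
    # Same made-up stand-in: how well does this network drive?
    return -sum((w - 0.5) ** 2 for w in net)

def evaluate_generation(population, workers=10):
    # Score the whole generation at once, one worker per network,
    # instead of testing one neural net at a time.
    with Pool(processes=workers) as pool:
        return pool.map(fitness, population)

if __name__ == "__main__":
    population = [[0.1 * i] * 20 for i in range(10)]
    print(evaluate_generation(population))
```

The data-centre version is the same map step, just spread across thousands of GPUs instead of ten worker processes.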
What is the point of spending all this cash and pouring all these resources into neural networks?
Have you ever tried playing the chess app on your computer? Unless you are a very good chess player, that simple chess app will kick your butt every time, because the chess AI always knows the right move.
My belief is what that chess app does to you when you play is what AI research could do to the entire planet. Advanced neural networks will help tech giants to always make the right move. They will always know exactly what to put on that teleprompter to persuade their audience.
Tech giants may also be spending big on medical life extension. Google’s hiring of Ray Kurzweil into a senior position in 2013 raised eyebrows in the industry. In addition to being a tech genius, Kurzweil is one of the world’s leading proponents of using AI technology to research life extension and medical immortality.
All that a man hath he will give for his life.
Tech giant top executives want to win, and keep on winning forever.
But to win, tech giants have to ditch green energy. To compete with Asia, which is also going big on AI technology, they need rock solid reliable energy supplies at a price comparable to what Asian tech giants pay. Which is why tech giants now want their own in-house nuclear reactors.
I like to think I’m making a difference writing for WUWT, but when the dust settles it won’t be anything I ever wrote or said which kills the green energy movement.
The people who deliver the final death blow to the green energy movement will be those who were once its strongest proponents.
But there is a dark lining to this silver cloud. If tech giants succeed, if they gain the ability to always know the right move to advance their goals, personal, political, social, and of course financial, perhaps in the future we shall come to miss the good old days when all of our opponents were human beings like ourselves.
If one looks at wind and solar as supplying a grid, they do not work. Intermittency is the killer, and barring some storage device with science fiction performance, no way will it work.
But I think the hard greens do not care. Industrial society is their enemy.
We can never convince the neo-Luddite lunatics who want to return to the Stone Age (but use their automobiles to drive to protests).
But soon hardcore greens will be irrelevant. Their most important supporters now have other concerns.
This unit has suffered a philosophical dichotomy and has shut down.
Contact customer support…
LOL 🙂
Have you tried turning it off and back on again?
The IT Crowd
Well….
If all the tech giants have AI that tells them the “right” answer at the “right” time, they will soon find out that they didn’t account for a little-known disrupter that turns them all on their heads. This is the way it has always been in tech. DEC never saw Sun coming, Sun never saw Microsoft coming, and Microsoft never saw Google coming. DEC and Sun are gone; Microsoft survived only because it had so many businesses other than the browser. But its vision of being the gateway to the internet by controlling the browser was shattered when Google showed that the search engine was the real gateway to the internet.
How will AI change that? I don’t think it will. AI can only proceed on information available to it. Some start-up with a brand-new idea is not in the data set it has available.
That said, AI is already battling AI every day. Pretty much every IT security product is built on AI. Darktrace, CrowdStrike, SentinelOne, Proofpoint… I could go on. But they bubble up their results to real humans who make the final call on the highest-risk events. On the other side of the equation are the bad guys, who in some cases have nation-state capabilities. They ALSO have AI tools probing for weak points and presenting human beings with the highest-value targets to pursue.
So IT security is about humans fighting humans. Both sides have AI tools to assist them, but at the end of the day it’s humans fighting humans. I don’t think any company is going to control the market by having the “right” answers from AI. At day’s end, the market is people pursuing business from people. Sometimes when I am buying a new product, I will ask AI to summarize the available options and their pros and cons. ChatGPT does this with ease, but it misses lesser-known options, and the specifics of the pros and cons may vary widely from my personal use case. So all ChatGPT does is give me a starting point in seconds that would have taken me hours to assemble on my own. But I still make the decision, and it is an option from ChatGPT as often as it is not.
The point you are missing is the process is unstable. Each advance cements the dominance of those who make the breakthrough. Once an AI Henry Ford appears, they will dominate and put down the competition, at least for a time.
There is no second prize for ultimate tech dominance.
“Pretty much every IT security product is built on AI.”
I don’t get it.
If you ask an AI to draw a street, you can look at it and see that it looks like a street.
If the AI tells you that a piece of software is dangerous and hostile, how the hell are you going to know if it’s really the case?
we shall come to miss the good old days when all of our opponents were human beings like ourselves
“Wonderful” thought to lead into the weekend.
/sarc
Truth, but sheesh, could this have not waited until Monday?
That’s what my wife says 🙂
Some actual numbers for perspective. I just researched them.
The US’s newest and fastest supercomputer is ‘Aurora’ at Argonne National Lab. It has 9000 nodes, each node comprising 2 Intel Xeons plus support chips, so 18000 of Intel’s best and newest multi-core microprocessors. It draws 38.7 MW in operation.
The ChatGPT LLM currently runs on 10000 Nvidia H100 GPUs. In training mode, it draws 51.8 MW.
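Some quick arithmetic on what a draw like that means for renewables; the 25% solar capacity factor below is my assumption for illustration, not a published figure.

```python
# Back-of-envelope only; the 25% capacity factor is an assumption.
training_draw_mw = 51.8                      # ChatGPT training draw quoted above
daily_energy_mwh = training_draw_mw * 24     # ~1243 MWh, every day, around the clock
solar_capacity_factor = 0.25                 # assumed; varies a lot by site
solar_nameplate_mw = training_draw_mw / solar_capacity_factor
print(f"{daily_energy_mwh:.0f} MWh/day, ~{solar_nameplate_mw:.0f} MW of panels")
# ...plus storage to cover every night and cloudy stretch, which is the killer.
```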
Neither is possible with green energy. So big tech has abandoned green renewable electricity despite all the lofty promises, and is trying to go nuclear.
Thanks Rud. It’s pretty clear their intentions go well beyond Aurora; so far there appears to be no limit to the benefit tech companies can achieve by throwing more computer capability at the problems they want to solve.
The really interesting thing is that nobody will say what Aurora is for. I looked for an hour out of curiosity. $500 million for something ‘secret’.
Argonne itself says it works mainly in physics, chemistry, materials science, and computer science.
It sure isn’t nuclear weapons simulation. For that, Los Alamos also just got a new Intel-based supercomputer—‘only’ about 2500 Intel Xeons.
Maybe drug research? Using AI to sift approved drugs and novel chemicals for new uses is big business these days.
https://solutions.cas.org/drug-repurposing
Yes Argonne is not a NNSA lab so likely no bomb stuff. Could be climate modeling! There was an exascale model in development.
“so far there seems to be no limit to the benefit tech companies THINK they can achieve…”
There, fixed it.
“. . . by throwing more computer capability at the problems they want to solve.”
I’m a software engineer. The idea that a high-speed adding machine can be intelligent is a bridge too far. And I have a bridge for sale, if you need one. Take your choice.
Had Turing lived in today’s environment, we might have advanced our computer technology far beyond what it is now. Maybe Turing’s AI would actually be intelligent.
I think you’re right, Eric. The other part of this is that the subject of nuclear is now being cautiously portrayed in a positive light in MSM outlets including NPR. Here’s a story in today’s NY Times.
Nuclear Power Was Once Shunned at Climate Talks. Now, It’s a Rising Star.
I wonder if wealthy big tech players who now want the public to accept nuclear everywhere influenced this shift in position?
That’s exactly what’s happening. Everything related to propaganda in the MSM is scripted and coordinated. When I first heard an almost-pro-nuclear story on NPR a few weeks ago, I knew that I’d be hearing more. If they stick to their formula, it will start slow. In a couple of years I expect there will be several stories a day.
The other part of this relates to Elon. He will be encouraged by his friends to help clear some of the regulatory hurdles that are designed to keep nuclear from being deployed in less than 20 years. I expect to see some streamlining, especially for small modular reactors. I think the administration will go along with this.
A possible way to bypass the regulatory bottleneck would be to “nuclearize” all of our federal military bases/facilities. Going nuke would be a National Security project, and the purposely built-in overcapacity would be sold to the grid. Today’s TVA.
RC, you are on to something. When the big tech Greenies want to go nuclear, it means reality is setting in to the likes of Google and Microsoft—whose core staff are in deep green Silicon Valley and Seattleland. Staff heads exploding. Times are changing.
I completely disagree with you on this. There is no set of words, even delivered with all the polish and poise of a trained speechifier, that will persuade an entire audience of anything. AI can figure out the best solution to the travelling salesman problem, and plenty of others where the parameters are limited and the end goal is either well-defined or easily recognisable. Human beings as a group are far too messy to parametrise except in the broadest strokes.
Even if they could, there’s one failsafe method to confound the machine – disconnect.
+++CARRIER LOST+++
I took a large number of zoology courses and even taught a few, and none dealt with the brain much. Nevertheless, I agree with you: it seems to me that you can have exponential numbers of neural networks well above 9000 and still not know about the chemistry of the brain. If we can control the earth’s temperature, save the ocean, produce the geological Anthropocene and all the other things currently claimed by ‘scientists’ pushing policy, it sounds too much like cold fusion and its numerous current relatives. No offense to the real brain researchers with difficult tasks.
I forget who said this, but it’s blindingly true.
The human brain is the most complex thing in the universe that we know of, and possibly the most complex thing in the universe at all. Nature doesn’t ‘need’ such complexity for life to continue, therefore it may well be a complete aberration.
You don’t have to convince everyone, convincing the majority is enough.
ChatGPT might be at the stupid person stage of conversational ability, but before ChatGPT became a thing, how many of us predicted we would be able to have a conversation with a computer in the near future? Every upgrade to ChatGPT and rival large language AIs brings us closer to computers which can actually persuade people.
In a few cases we may already be there.
https://edition.cnn.com/2024/10/30/tech/teen-suicide-character-ai-lawsuit/index.html
Anyone who had access to a computer since 1966, when Eliza was written. https://en.wikipedia.org/wiki/ELIZA
https://eliza.botlibre.com/
Yes, but Eliza does not have ChatGPT’s ability to attempt to provide meaningful answers.
Eliza was a joke. It just repeated the input as a question. There was no intelligence involved. Sometimes it would change the subject–but it wasn’t due to actual thought. And it pretended to be a psychiatrist–like most psychiatrists.
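For anyone who never met Eliza, the whole trick fits in a few lines. This is a toy reconstruction of the pattern-match-and-reflect idea, not Weizenbaum’s original script:

```python
import random
import re

# A few ELIZA-style rules: match a pattern, reflect it back as a question.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would {0} really help you?"]),
    (r"i am (.*)",   ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"my (.*)",     ["Tell me more about your {0}."]),
    (r"(.*)",        ["Please go on.", "How does that make you feel?"]),
]

def respond(text):
    # First matching rule wins; the catch-all guarantees an answer.
    for pattern, answers in RULES:
        match = re.match(pattern, text.lower().strip())
        if match:
            return random.choice(answers).format(*match.groups())

print(respond("I need a faster computer"))
# e.g. "Why do you need a faster computer?"
```

No model of the world, no memory, no thought: just regular expressions turning your own words back into questions.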
HAL: “I’m afraid I can’t do that Dave.”
Hard not to see AI driving centralized authoritarian systems
They certainly represent a large concentration of capability. But whether that concentration of capability is enough to topple freedom is yet to be determined.
Whether or not that concentration of capability is enough to pry power out of the hands of politicians who want what they want is yet another matter.
For many years I’ve believed that you can predict human behavior pretty accurately if you simply assume that people will act in their own self-interest almost all of the time. Don’t look at their fancy words. Don’t look to fancy psychology theories. Look at what they actually do.
Case in point. The tech giants were all for green energy and saving the planet when it was at the expense of other people, and it didn’t really affect them. Very easy to virtue signal and good public relations. Put up a few roof top solar collectors and brag about being green. But as soon as it affects their ability to compete with Asian companies, suddenly green energy isn’t so important.
It was like the immigration crisis. New York, Chicago, Los Angeles didn’t really care that illegals were flooding into Texas. They virtue signaled that they were sanctuary cities. But then the governor of Texas began bussing the illegals to their cities and suddenly they became outraged.
Assume that people will always act in their own best self-interest and you won’t be wrong very often.
Adam Smith figured that out long ago.
Some relevant recent examples:
There is but one plausibly relevant counterexample: Trump and MAGA. I say plausibly because when you are a multibillionaire with a big ego (Trump Tower, Trumpforce 1), legacy can also be in your self-interest.
Altruism can also be self-interest.
I want my kid to inherit a world with the same or better opportunities than those I enjoyed, and I most certainly want to enjoy comfort in my old age rather than having to pick through trash for scraps of food like they do in Venezuela. Which is why I support politicians like Trump.
First two points make me think about how technologically-extended lifespans will affect lifetime supreme court appointments.
SciFi writers have beat on this semi-immortal theme a bit. I think it is called going “emeritus”, alive but not mentally relevant.
Who decides?
“Nancy Pelosi just submitted paperwork at age 84 to run again in 2026.” I don’t want to be in charge of telling her.
“I don’t want to be in charge of telling her (Pelosi).”
I do. Don’t!!
Which is why socialism always devolves into a brutish failure.
An infinite number of monkeys, typing on an infinite number of typewriters, will eventually reproduce the Bible, or Shakespeare — pick your favorite — so long as “eventually” includes an infinite amount of time, and an infinite availability of bananas (or electricity, if it’s AI). Asking “Okay, but exactly when will I benefit from this thing?” is considered troublesome.
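As it happens, “exactly when” can be estimated, which is why the monkeys never collect their bananas. A rough calculation assuming a 27-key typewriter, and ignoring the small correction for overlapping matches:

```python
# Expected keystrokes for one monkey to type one phrase by pure chance.
phrase = "to be or not to be"   # 18 characters
keys = 27                       # 26 letters plus the space bar
attempts = keys ** len(phrase)
print(f"about {attempts:.2e} keystrokes")      # ~5.81e+25
seconds = attempts / 10                        # at a brisk 10 keys per second
print(f"about {seconds / 3.15e7:.1e} years")   # ~1.8e+17 years
# The universe is ~1.4e10 years old, so "eventually" means roughly ten
# million universe-lifetimes for one short line, let alone all of Shakespeare.
```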
The driving example, like most other tasks that might be proposed for AI, is best taught empirically by humans. Carnegie Mellon was experimenting with driving computers in the 1990s, and the basic work from CM underlies a lot of the self-driving car development going on right now. Expert systems, around since the 50s, are “taught” (programmed) by human experts.
If big tech AI gets everyone access to cheap and readily available electricity, I’m all in. But I don’t have to believe in the hype, and I won’t believe it until my bill goes down.
The accumulation of experience is what distinguishes AI from the infinite monkey typing pool. The first generation of neural networks in the demo above is like monkeys typing random characters, but each generation of improved neural networks accumulates lessons learned by the previous generation, until it finally converges on a solution.
Ok, agreed. Still, the question is when? How do I know when it is done? Is “done” enough? When does it stop learning, and how will that be any better than just programming for the task? What is the advantage of computer self learning over an empirical programming model?
Carnegie Mellon had self-driving systems two or three decades ago (and they were almost open source), yet today’s commercial self-learning efforts are not very much better at completing the job, and still manage to run over the occasional pedestrian.
It is pretty, but is it any good?
Is it just a new Theranos? How can I tell?
The AI industry is full of hype just like any tech bubble, but there are some kernels of real progress mixed in with the nonsense.
As to when it ends, the answer is never. The technological singularity is the promise of unlimited capability. The tech giants driving the current AI push want to realise that promise of limitless power to the full extent of their abilities.
Unlimited capacity, perpetual motion, free energy, singularity, deus ex machina — nevertheless starved for electrical power, jealously limiting competition, requiring funding, police powers, tax breaks and regulating legislation.
Maybe this adds up to you, or maybe this is really a religious discussion best avoided, but I get nervous around attribution of universal benevolence to tech giants — were those Greek or Norse giants?
dk_ not all tasks lend themselves to programming. Image recognition is a good example. How would you write a program to distinguish between a dog and a cat? How would you deal with variations in perspective, lighting, color, size? How does a child learn the difference? Can you explain the difference?
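In code, the contrast looks something like this: nobody writes the dog/cat rule, you fit a model to labelled examples and the “rule” ends up in the learned weights. A sketch assuming scikit-learn, with random arrays standing in for a real labelled photo collection:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Made-up data: rows are flattened 64x64 images, labels are 0=cat, 1=dog.
# Random noise stands in for a real labelled photo collection.
rng = np.random.default_rng(0)
X_train = rng.random((200, 64 * 64))
y_train = rng.integers(0, 2, size=200)

# Nobody writes "if pointy ears then cat"; the distinction is learned
# from examples, and the "rule" ends up spread across the weights.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)

new_photo = rng.random((1, 64 * 64))
print("dog" if clf.predict(new_photo)[0] == 1 else "cat")
```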
I don’t think LLMs ever stop learning, and this I suspect will lead to a problem. Today most of the training data is human generated. What happens when AI-generated content becomes a significant portion of the on-line information? How do you prevent different AI models from training on each other’s output and diverging away from reality, or humanity?
Well, those image and pattern recognition programs exist, and (with your spec) to my knowledge have existed since at least the 80s; they require human programming and image database development, and don’t require large-language models.
I’ve actually worked peripherally on a couple of those. They didn’t use large language models, but extensive human developed pattern databases, and delivered low-confidence results until corrected and tuned by human operators.
The difference is that one has to program a computer to imperfectly simulate the behavior, but programming children is frowned upon – at least in my social circle.
AI is a name given to a group of programming and software design paradigms and techniques, not a specific thing except for a few recent product names. Many of them have been implemented since the 50s in many different ways. Behavioral evolution as a programming design for machine learning was demonstrated in the early 70s in Conway’s Game Of Life, but probably before that.
Yes, the fields of image recognition and computer language didn’t start with machine learning, but the performance of those models was limited.
I knew that we were headed for some major changes in machine learning when I first started training neural nets and decision trees on NVIDIA graphics cards a little more than a decade ago. What concerns me today, is not the technological progress, but the investments. What’s the prize? What’s the profit model that justifies restarting or building new nuclear plants? Is profit even the primary motive? While I suspect many want to replace Google search, I think there’s a lot more at stake.
yes, neural networks are well evidenced to be capable and useful in many ways.
What if that infinite pool of monkeys only manages a part of a sentence each, and someone or something is necessary to collate all of the pertinent fragments into the actual works of Shakespeare?
…young man, it is monkeys all the way down..
As soon as I saw the first “political” rumblings about accepting more nuclear (the next previous COP?) I expected it to be “nuclear for us, blowing in the wind for all you peasants”.
“An infinite number of monkeys, typing on an infinite number of typewriters, will eventually reproduce . . . .”
So you have a few good texts, with essentially an infinite number of nonsense texts. You won’t be able to separate the chaff from the seeds.
There’s an example of a printing press that prints every possible line. It will print every line of Shakespeare, every line of the Bible—various versions—and every line that has been printed, is being printed, and will be printed. Unfortunately, there are an infinite number of nonsense lines that mean nothing. You may have one billionth of the lines meaning something, while the rest (infinite) is nonsense. There is no way to separate the good from the bad.
AI is a joke. If it takes over, we are done.
Chess has an extremely limited number of possible events. The computer can analyse many, many more than a human brain can. Humans are good at filtering unimportant or useless moves, so the best chess players are better than most computers.
As for making decisions in the Real World, most especially where people are involved, computers are almost completely useless. The number of parameters is way too large, and most of the time we don’t even know what parameters to use. Indeed, when we let neural networks essentially define their own parameters (through learning), they almost always stuff it up, unless those parameters (input data) are carefully managed.
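The chess point in miniature: the computer “always knowing the right move” is just exhaustive tree search (minimax). A toy version on Nim rather than chess, since chess needs far more scaffolding, but the algorithm is the same shape:

```python
from functools import lru_cache

# Minimax on a toy game (Nim: take 1-3 stones, taking the last stone wins).
# Chess works the same way, just with an astronomically bigger tree.
@lru_cache(maxsize=None)
def best_score(stones, my_turn):
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if my_turn else +1
    scores = [best_score(stones - take, not my_turn)
              for take in (1, 2, 3) if take <= stones]
    # Assume both sides always pick their best option.
    return max(scores) if my_turn else min(scores)

def best_move(stones):
    # The computer "always knows the right move" by exhausting the tree.
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: best_score(stones - t, False))

print(best_move(21))   # with 21 stones the winning move is to take 1
```

Real chess engines add pruning and hand-tuned evaluation, but the skeleton is this same exhaustive search, which is why raw compute goes such a long way.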
I have written a few. The technology hasn’t improved much over the last 3 decades. Only the sheer amount of calculations per second.
The science has advanced to the point where they are starting to be useful. For example, one of my favourite advances was Ken Stanley’s NEAT system: he found a way of overcoming the competing conventions problem and combining neural networks with genetic algorithms.
https://en.wikipedia.org/wiki/Neuroevolution_of_augmenting_topologies
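For the curious, the structural mutation at the heart of NEAT is easy to sketch. This is a toy illustration of the add-node operator only, not the real neat-python library or Stanley’s innovation-number bookkeeping:

```python
import random
from dataclasses import dataclass, field

@dataclass
class Connection:
    src: int
    dst: int
    weight: float
    enabled: bool = True

@dataclass
class Genome:
    n_nodes: int
    connections: list = field(default_factory=list)

def add_node_mutation(genome):
    # NEAT grows topology by splitting an existing connection in two:
    # disable it, insert a new node, and bridge it with two new links.
    conn = random.choice([c for c in genome.connections if c.enabled])
    conn.enabled = False
    new_node = genome.n_nodes
    genome.n_nodes += 1
    # Weight 1.0 into the new node and the old weight out of it keeps the
    # network's behaviour (nearly) unchanged, so selection isn't disrupted.
    genome.connections.append(Connection(conn.src, new_node, 1.0))
    genome.connections.append(Connection(new_node, conn.dst, conn.weight))

g = Genome(n_nodes=3, connections=[Connection(0, 2, 0.7), Connection(1, 2, -0.4)])
add_node_mutation(g)
print(g.n_nodes, [(c.src, c.dst, c.enabled) for c in g.connections])
```

The 1.0-in, old-weight-out trick is what lets new structure survive: the mutant behaves almost exactly like its parent until later mutations tune the new node.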
You don’t drop hundreds of millions of dollars on a system which does not deliver value.
Aren’t you forgetting something? Hint: W&S on the grid.
“extremely limited“?
The game “go” has even fewer rules but is much harder. I think Zig Zag Wanderer is misunderstanding what makes a problem hard.
I think Chess Computers are not a good example of the sort of AI that could displace humans.
Chess is a hard logic problem. A finite, though large, set of choices leads to utterly predictable positions which can be evaluated against the single goal: checkmate.
Daily life human, business, military, science problems are not at all the same. They are in the realm of fuzzy logic, with incomplete and partially unreliable data and contain the character of the three-body-problem on steroids.
The decisions within any given brain are unknowable to those outside the brain until action follows—there is a kind of Heisenberg Principle in human behavior.
And the behaviors emitted are complex, with many variables. How many different sorts of smiles are there? How different is each smile’s information content? Someone says “really”. How many intonations can there be in “really” that completely change its meaning?
Until AI can use fuzzy logic, simultaneously solve for numerous variables each more complex than orbital mechanics, and do it all using multiple senses simultaneously—AI will never be more than a powerful tool for specified applications.
Indeed, the ultimate AI robot may just end up being a flesh and blood humanoid—only “better” than real humans in limited ways.
If silicon digital approaches fail to deliver, scientists will put living human or animal brain tissue into a jar and call it an AI. The breakthrough will be made, the only bit we cannot know for sure right now is how.
https://research.ufl.edu/publications/explore/v10n1/extract2.html
Eric W read science fiction as a lad.
I posted a link to scientists actually working on brains in bottles technology, and achieving some successes.
Have you ever tried playing the chess app on your computer?
__________________________________________________
https://www.chessgames.com/ has a daily puzzle:
Very easy on Monday, insanely difficult on Sunday.
Most Chess apps have a level of difficulty selector.
Chess apps are programmed to give humans a chance to win.
My chess app is not so kind.
https://chess-3d.com
“Chess apps are programmed to give humans a chance to win.”
That was not my intention. I wanted the computer to win. I had some interesting if not profound ideas about playing chess on the computer. One was–with every move–I increased the number of options for my side and decreased the number of options for the other side.
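That heuristic has a name in computer chess: mobility. Here is a sketch using the python-chess package; the bare move-count difference is a crude version of what real engines combine with many other evaluation terms:

```python
import chess  # pip install python-chess

def mobility_score(board):
    # The heuristic above: my legal moves minus my opponent's.
    my_moves = board.legal_moves.count()
    board.push(chess.Move.null())   # pass, so the opponent is to move
    their_moves = board.legal_moves.count()
    board.pop()
    return my_moves - their_moves

board = chess.Board()
print(mobility_score(board))   # 0 in the symmetric starting position
```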
While you describe a real problem, as always there is an opposing major benefit. AI that is “always right” on some good objective scale could, by definition, come up with some very good solutions to some real world problems.
You point out a giant problem for AI – it has to be programmed in a way that allows it to approve of humane, not always logical concepts that the programmers and the funders support. DEI? ESG? How do you give a computer real data about windmills and have it agree that the best solution is more windmills?
The same basic problem emerged when 1960s scientific thinkers decided to abandon religious thoughts. What, then, is good? How do you define what is best in a way that gets 51% of humans to agree?
It doesn’t have to be a valid solution to win agreement. Look at communism – 100 million+ dead in one century, misery and starvation whenever it is attempted, yet we still have to deal with people who think it is worth a try.
AI will never develop a sense of self-awareness. AI will also never develop a sense of ethics or morality beyond the “bias” of its programmers.
AI can only produce “answers”.
It can’t judge whether that “answer” is morally or ethically right or wrong.
A significant number of humans have deficits in their ability to tell right from wrong, but are otherwise intelligent.
“A significant number of humans have deficits in their ability to tell right from wrong . . . ”
“The first principle is that you must not fool yourself and you are the easiest person to fool.”
― Richard P. Feynman
I wonder what the “Lateral Thinker” Edward de Bono would make of AI.
https://en.wikipedia.org/wiki/Edward_de_Bono
If I recall his presentations in the early 1970s, he pushed the idea that in problem solving, first-pass ideas / scenarios / solutions based on existing knowledge must all be in contention, no matter how implausible they appear at first look.
Then the basket of solutions should be considered that proved successful in similar problem situations and conditions (‘lateral’ thinking).
Finally, the entire basket of putative solutions should be winnowed as many times as it takes to be left with implementable solution options.
Sounds to me very much like today’s AI methodology.
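It does. The keep-a-wide-basket-then-winnow pattern is recognisably what beam-search-style systems do. A toy sketch, with made-up stand-ins for the generation and scoring steps:

```python
import random

def expand(idea):
    # Stand-in for generating lateral variations of a candidate solution.
    return [idea + delta for delta in (-2, -1, 1, 2)]

def plausibility(idea):
    # Stand-in scoring function; no candidate is ruled out early,
    # it just has to out-score the others at the final winnowing.
    return -abs(idea - 42)

def winnow(candidates, keep=5, rounds=10):
    # de Bono style: keep a whole basket in contention, widen it with
    # lateral variants, then repeatedly winnow down to the best few.
    basket = list(candidates)
    for _ in range(rounds):
        basket += [v for idea in basket for v in expand(idea)]
        basket = sorted(set(basket), key=plausibility, reverse=True)[:keep]
    return basket

print(winnow(random.sample(range(100), 8)))  # converges on ideas near 42
```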
… then you throw away the technically correct answer and go with the one you wanted to see.
The innovation in science link I posted covers this. Most goal-seeking algorithms prematurely discard possible solutions. AI researcher Ken Stanley argues you cannot know what is important until you have all the pieces of the puzzle, and advocates a different approach to goal seeking.
https://wattsupwiththat.com/2015/04/02/the-search-for-novelty-in-science/
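The flavour of Stanley’s alternative, novelty search, is easy to sketch: candidates are rewarded for behaving unlike anything seen before, not for scoring closer to the goal. Everything below is a made-up stand-in for the real thing:

```python
import random

def behaviour(candidate):
    # Stand-in: in a real system this might be where the car ended up.
    return (candidate % 17, candidate % 23)

def novelty(candidate, archive, k=3):
    # How unlike everything already seen is this candidate's behaviour?
    b = behaviour(candidate)
    dists = sorted(abs(b[0] - a[0]) + abs(b[1] - a[1]) for a in archive)
    return sum(dists[:k]) / k if dists else float("inf")

archive = []
population = random.sample(range(10_000), 20)
for _ in range(50):
    # Novelty search: reward candidates for being DIFFERENT, not for
    # scoring well; potential stepping stones get kept, not culled.
    population.sort(key=lambda c: novelty(c, archive), reverse=True)
    archive.extend(behaviour(c) for c in population[:5])
    population = population[:10] + random.sample(range(10_000), 10)
print(len(archive), "behaviours archived")
```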
If you ask Perplexity for the lowest cost option to power a 10MW data centre in central NSW it will offer solar and battery. I doubt that is the best option. But it does suggest getting all the government handouts possible with such systems.
AI is being programmed by woke people for woke people. These people underestimate the capacity of a lunatic to disrupt their world.
Of current interest. This is what Perplexity has to offer on Tulsi Gabbard for DNI:
Yes Minister AIrhead-
Jobs minister’s humiliating gaffe in front of Australia’s tech bosses
With AI, as with any idea, you have to ask why do we need it? And who benefits from it?
Life is a continuous process of trying to be better than others. Those who succeed well can become the elites or the ruling class or the more wealthy. If you drop out of this ever-present race, you are not well off. To show your betterness, one option is to invent a new mousetrap.
AI is nothing more than a big new mousetrap that will come and go. As I type this, my wife is listening to talk back radio. For decades now, this mousetrap has grown, but signs of decline are there. Radio has let large numbers of people share ideas of Life, good and bad, that have helped influence whether we have a civil society or conflict. Radio is a crude view of future AI. Did we need it? Yes, it helped many folk find their levels in the grand Life competition. Who benefited? We all did, is one answer, if you accept that an absence of ways to share Life experiences is bad. Is harmonious cooperation better than individual greed? I do not know the answer, but hordes of people are religious because they think so.
This article speculating about future AI is no more than the modern version of gossip around the parish pump in earlier times. People will always want to share Life for tips that will give them competitive advantage. Universities are an outgrowth of this.
Finally, we get to the 2 questions for AI. Is it needed? Answer, immaterial, because it will happen. Who benefits? Those who take actions, by luck or by design, that return them more money and/or influence than the next person on the Ladder of Life. Who cares if it uses a lot of electricity? The upset individual always has the choice to adopt a lifestyle from 10 or 100 or 1000 years earlier. No need to get knickers in a knot about it. The most important act in Life is avoidance of death.
Geoff S
Well, the bottom line is that increasingly we can’t trust online dealings, particularly given AI’s ability to produce undetectable counterfeit sites and people and goods-
Tractor scammers are fleecing farmers out of tens of thousands of dollars — and there’s nothing the banks can do about it
which ironically may drive us back to dealing only over the counter with real bricks-and-mortar establishments, goods and people. Well, at least for non-trivial transactions and interactions, and even then only leaving deposits you’re prepared to lose.
At which everyone ultimately fails.
And AI won’t ever stop this.
(maybe that’s a good thing? Kevin Rudd around forever? Shoot me now!)
Trump is going to have fun with Rudd, use Rudd’s stupidity to force concessions from Australia, make Rudd a salesman for American interests. Rudd has already demonstrated he has no shame, but he is desperate to cling on to his post.
“Why Big Tech Ditched Big Green”
EW
No evidence is presented to prove that Big Tech ditched Big Green.
Reports that AI data centers will be investing in small nuclear reactors do not mean they reject green energy. SMRs ARE green energy. I have not yet heard climate alarmists objecting to SMRs for data centers.
If they do object in the future, that could turn big tech against big green.
Media reports claim tech firms were strong Harris supporters.
One rare exception is Elon Musk, who may be a Trump supporter because he expected Trump to win and wanted government favors for Tesla, SpaceX and X. Supporting the winner usually has financial benefits.
Workers at Tesla (TSLA.O) have contributed $42,824 to Harris’ presidential campaign versus $24,840 to Trump’s campaign, according to OpenSecrets, a nonpartisan nonprofit that tracks U.S. campaign contributions and lobbying data.
Employees at Musk’s rocket company SpaceX have donated $34,526 to Harris versus $7,652 to Trump.
Employees at the social media platform X, formerly known as Twitter, have donated $13,213 to Harris versus less than $500 to Trump.
SOURCE:
Workers at Musk’s Tesla, SpaceX and X donate to Harris while he backs Trump | Reuters
“SMRs ARE green energy”. If greens had embraced and focussed on nuclear from the start we wouldn’t be having an energy debate.
As for the evidence, I presented the article “The energy transition won’t happen”, which in turn references articles discussing Microsoft and other tech giants abandoning their green energy commitments.
Those green corporate “commitments” were just greenwashing and leftist virtue signaling.
They were as meaningless as most pre-election promises by politicians.
No, The People’s Republic of Silicon Valley was totally serious, they really did want to help the green energy revolution. Then along comes AI with its gargantuan energy demands, and everyone serious in the tech industry woke up and realised they needed real energy rather than green energy.
https://www.bleepingcomputer.com/news/security/fake-ai-video-generators-infect-windows-macos-with-infostealers/
Just something else to consider while running complex external code segments on one’s own computer.
I wrote this one, used reputable components for the bits I didn’t write.