AI Companies And Advocates Are Becoming More Cult-Like: How one writer’s trip to the annual tech conference CES left him with a sinking feeling about the future.
Tech VCs did the same with blockchain, and the cloud before that. It’s an industry that loves its fads and fashions.
Becoming? 🤯
Accelerationism is “F U I’ve got AI” combined with “you’ve got to burn the world down to rebuild it, so let’s start that fire”
Singularitarianism is basically the Christian Rapture but with super intelligent AI.
These ideas have been around for some time in tech circles.
I regularly see people on Lemmy talk about AGI run countries and governments as though it’s only a couple years away. Bruh it still struggles with fingers. You really think that’s where it will be in a couple of years?
I’m convinced that ChatGPT, or even some open source autocorrect, or a guy with a 24-sided die, could run quite a few countries better than the people in charge now, to be fair to the loonies.
Yeah bro but eXpOnEnTiAl ImProVeMeNt bro!
And haven’t you heard of Roko’s basilisk? Better be careful what you say on the cybernets, lest our AGI/ASI overlords of 2026 take a disliking to your commentary regarding their eventual supremacy!
Excuse me while I go back to mining Dogecoin until I can buy enough NFTs to make Elon or Sam Altman notice me.
/s
Better be careful what you say
I know it’s not the point, but that always strikes me as so dumb. Wouldn’t a superintelligent being know that you were simply hiding your true feelings?
What bugs me about it is the same problem Pascal’s wager has. What if there were a later AI that punished you for helping the first one? And a still later one that punished you for not helping the first one? Since the number of invented gods is infinite and their commands are contradictory, no action or inaction promises salvation.
Agreed, and it could definitely make such an assumption. The other aspect that I don’t really get is… if a superintelligent entity were to eventuate, why would it care?
We’re going to be nothing but bugs to it. It’s not likely to be of any consequence to that entity whether or not I expected/want it to exist.
The anthropomorphising going on with the AI hype is just crazy.
According to everything I read, AI is either going to be godlike soon or utterly useless forever. If people could just sit down and not repeat the endless trope of “the enemy is all-powerful and all-weak at the same time,” I would appreciate it.
Maybe we can just try to rationally evaluate what is going on and where it is going?
Fashion with passion
This is the best summary I could come up with:
I was watching a video of a keynote speech at the Consumer Electronics Show for the Rabbit R1, an AI gadget that promises to act as a sort of personal assistant, when a feeling of doom took hold of me.
Specifically, about a term first defined by psychologist Robert Lifton in his early writing on cult dynamics: “voluntary self-surrender.” This is what happens when people hand over their agency and the power to make decisions about their own lives to a guru.
At Davos, just days ago, he was much more subdued, saying, “I don’t think anybody agrees anymore what AGI means.” A consummate businessman, Altman is happy to lean into that old-time religion when he wants to gin up buzz in the media, but among his fellow plutocrats, he treats AI like any other profitable technology.
As I listened to PR people try to sell me on an AI-powered fake vagina, I thought back to Andreessen’s claims that AI will fix car crashes and pandemics and myriad other terrors.
In an article published by Frontiers in Ecology and Evolution, a research journal, Dr. Andreas Roli and colleagues argue that “AGI is not achievable in the current algorithmic frame of AI research.” One point they make is that intelligent organisms can both want things and improvise, capabilities no model yet extant has generated.
What we call AI lacks agency, the ability to make dynamic decisions of its own accord, choices that are “not purely reactive, not entirely determined by environmental conditions.” Midjourney can read a prompt and return with art it calculates will fit the criteria.
The original article contains 3,929 words, the summary contains 266 words. Saved 93%. I’m a bot and I’m open source!
Rabbit could order pizza for you, telling it “the most-ordered option is fine,” leaving his choice of dinner up to the Pizza Hut website.
I feel like we wouldn’t need the language model as a translation layer between 2 machines, if there were proper APIs everywhere…
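The point being: with a proper machine-readable API, ordering is just a structured request, no natural-language layer needed. A minimal sketch of what that could look like (the endpoint and payload fields here are invented for illustration; no real Pizza Hut API is assumed):

```python
import json

# Hypothetical pizza-ordering endpoint -- invented for illustration only.
ORDER_ENDPOINT = "https://api.example-pizza.com/v1/orders"

def build_order(menu_item_id: str, quantity: int = 1) -> str:
    """Build the JSON body a client would POST directly to the API.

    Machine-to-machine: structured data in, structured data out,
    with no language model translating in between.
    """
    payload = {
        "items": [{"id": menu_item_id, "quantity": quantity}],
        "fulfillment": "delivery",
    }
    return json.dumps(payload)

# "The most-ordered option is fine" becomes one field in a request body:
body = build_order("most-ordered", quantity=1)
print(body)
```

The LLM only earns its keep when the other side exposes no API and you’re forced to drive a human-oriented website instead.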
Oh I see we’ve been listening to Behind the Bastards and Robert’s rants.
The episode was great but as usual Robert tends to just make up a lot of shit outside of the factual events.
They joke and speculate a lot, for sure. But what did he make up that has any bearing on his argument?
A few areas, but the one that sticks in my mind is the use cases for something like the Rabbit.
Yes, the current iteration is garbage, but then he (and I forget the guest’s name at the moment) goes off into a rant about how nobody would want a device to plan a vacation for them.
I find his comments often lack a wide perspective. I like him, but he gets crap wrong often.
How to tell when someone hasn’t opened the article
👌