Hello, recent Reddit convert here and I’m loving it. You even inspired me to figure out how to fully dump Windows and install LineageOS.
One thing I can’t understand is the level of acrimony toward LLMs. I see things like “stochastic parrot”, “glorified autocomplete”, etc. If you need an example, the comments section for the post on Apple saying LLMs don’t reason is a doozy, full of angry people: https://infosec.pub/post/29574988
While I didn’t expect a community of vibecoders, I am genuinely curious about why LLMs provoke such an emotional response from this crowd. It’s a tool that has gone from interesting (GPT-3) to terrifying (Veo 3) in a few years, and I am personally concerned about many of the safety/control issues in the future.
So I ask: what is the real reason this is such an emotional topic for you in particular? My personal guess is that the claim that they’ll replace software engineers is the biggest issue, but help me understand.
It’s a hugely disruptive technology that is harmful to the environment and is being taken up and given center stage by a host of folk who don’t understand it.
Like the industrial revolution, it has the chance to change the world in a massive way, but in doing so, it’s going to fuck over a lot of people, and notch up greenhouse gas output. In a decade or two, we probably won’t remember what life was like without them, but lots of people are going to be out of jobs, have their income streams cut off and have no alternatives available to them whilst that happens.
And whilst all of that is going on, we’re being told that it’s the best, most amazing thing that we all need, and it’s being stuck into everything, including things that don’t benefit from the presence of an LLM and, sometimes, where the presence of an LLM can be actively harmful.
I’m mixed about LLMs and stuff, but I agree with this.
I am not an AI hater, it helps me automate many of the more mundane tasks of my job or the things I don’t ever have time for.
I also feel that change management is a big factor with any paradigm-shifting technology, as it is with LLMs. I recall when some people said that both the PC and the internet were going to be just fads.
Nonetheless, all the reasons you’ve mentioned are the same ones that give me concern about AI.
We’re outsourcing thinking to a bullshit generator controlled by mostly American mega-corporations who have repeatedly demonstrated that they want to do us harm, burning through scarce resources and leaving creative humans robbed and unemployed in the process.
What’s not to hate?
I know there are people who could articulate it better than I can, but my logic goes like this:
- Loss of critical thinking skills: This doesn’t just apply to someone working on a software project that they don’t really care about. Lots of coders start in their bedroom with Notepad and some curiosity. If Copilot interrupts you with mediocre but working code, you never get the chance to learn ways of solving a problem for yourself.
- Style: code spat out by AI has a very specific style, and no amount of prompt modifiers will come up with the kind of code someone designing for speed or low memory usage would produce: nearly impossible to read, but solving a very specific case.
- If everyone is a coder, no one is a coder: If everyone can claim to be a coder on paper, it will be harder to find good coders. Sure, you can make every applicant do FizzBuzz (see the sketch after this list) or a basic sort, but that doesn’t give them a real opportunity to show they can actually solve a problem. It will also discourage people from becoming coders in the first place. A lot of companies can actually get by with vibe coders (at least for a while), and that dries up the supply of the junior positions people need in order to improve and get promoted to better positions.
- When the code breaks, it takes a lot longer to understand and rectify if you don’t know how any of it works. Worse still if you never bothered designing or completing a test plan because Cursor developed a plan, everything came back green, it pushed during a convenient downtime, and it has archived all the old versions in its own internal logical structure that can’t easily be undone.
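For reference, here is roughly the whole of the FizzBuzz screening exercise mentioned above (a sketch in TypeScript, my language choice for illustration). The point is how little it takes to pass:

```typescript
// FizzBuzz, the entire exercise: print 1..100, replacing multiples of 3 with
// "Fizz", multiples of 5 with "Buzz", and multiples of both with "FizzBuzz".
for (let i = 1; i <= 100; i++) {
  let out = "";
  if (i % 3 === 0) out += "Fizz";
  if (i % 5 === 0) out += "Buzz";
  console.log(out || String(i));
}
```

Anyone who has memorized a tutorial can reproduce this, which is exactly why it can’t show whether an applicant can genuinely solve problems.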
Edits: Minor clarification and grammar.
I’m an empirical researcher in software engineering, and all of the points you’re making are supported by recent papers in SE and/or education. We are also seeing a strong shift in our students’ behavior and a lack of ability to explain or justify their “own” work.
My main issue is that LLMs are being used to flood the internet with AI slop. Almost every time I search for something, I have to go through a lot of results to find one with any usable information. The SEO spam before AI was bad enough, now it’s significantly worse.
I work in software as a software engineer, but being replaced by an LLM any time soon is the least of my concerns.
-
I don’t hate LLMs. They are just a tool, and it makes no more sense to hate an LLM than it does to hate a rock.
-
I hate the marketing and the hype for several reasons:
- You use the term AI/LLM in the post’s title: there is nothing intelligent about LLMs if you understand how they work
- The craziness about LLMs in the media, press and business brainwashes non-technical people into thinking that there is intelligence involved and that LLMs will get better and better and solve the world’s problems (possible, but if you make an informed guess, the chances are quite low within the next decade)
- All the LLM shit happening: websites auto-translating without even asking me whether stuff should be translated, job loss for translators, companies hoping to get rid of experienced technical people because of LLMs (and we will have to pick up the slack after the hype)
- The lack of education among the population (and even among tech people) about how LLMs work, their limits and their uses…
LLMs are at the same time impressive (think of the jump to GPT-4), a showcase of the ugliest forms of capitalism (CEOs learning that every time they say AI, the stock price goes up 5%), helpful (generating short pieces of code, translating other languages), annoying (generated content) and even dangerous (companies with money can now literally and automatically flood the internet/news/media with more bullshit, faster).
Everything you said is great except for the rock metaphor. It’s more akin to a gun in that it’s a tool made by man that has the capacity to do incredible damage and already has on a social level.
Guns ain’t just laying around on the ground, nor are LLMs. Rocks, however, are; like, it’s practically their job.
LLMs and generative AI will do what social media did to us, but a thousand times worse. All that plus the nightmarish capacity for pattern matching at an industrial scale. Inequality, repression, oppression, disinformation, propaganda and corruption will skyrocket because of it. It’s genuinely terrifying.
-
I recently had an online event about using “AI” in my industry, construction.
The presenter finished with “Now is not the time to wait, but to get doing, lest you want to be left behind”.
She gave examples of some companies she had found that promised to help with “AI” in the process of designing constructions. When I asked her whether any of these companies were willing to take on the legal risk that the designs are up to code and actually sound from an engineering perspective, she had to say no.
This sums it up for me. You get sold hype by people who don’t understand (or don’t tell) what it is and isn’t, to managers who don’t understand what it is and isn’t, over the heads of the people who actually understand what it is, or at least what it needs to be to be relevant. And those last people then get laid off or f*ed over in other ways, since they end up with twice the work they had before: first they have to show management why the “AI” result is criminal, and then do all the regular design work anyway.
It is the same toxic dynamic as with every tech-bro hype before. Only now it looks good at first, and it is more difficult to show why it is not.
This is especially dangerous when it comes to engineering.
I personally just find it annoying how it’s shoehorned into everything, regardless of whether it makes sense to be there or not, without the option to turn it off.
I also don’t find it helpful for most things I do.
Emotional? No. Rational.
Use of AI is proving to be a bad idea for so many reasons, reasons that have been raised by people who study this kind of thing. There’s nothing I can tell you that has any more validity than the experts’ opinions. Go see.
I’m not opposed to AI research in general and LLMs and whatever in principle. This stuff has plenty of legitimate use-cases.
My criticism comes in three parts:
- Society is not equipped to deal with this stuff. Generative AI was really nice when everyone could immediately tell what was generated and what was not. But when it got better, it turned out people’s critical thinking skills go right out of the window. We as a society started using generative AI for utter bullshit. It’s making normal life weirder in ways we could hardly imagine. It would do us all a great deal of good if we took a short break from this and asked what the hell we are even doing here, and whether some new laws would do any good.
- A lot of AI stuff purports to be openly accessible research software released as open source, and the work gets published in scientific journals. But these projects often have weird restrictions that fly in the face of the open source definition (like how some AI models are “open source” but have a cap on users, which makes them non-open by definition). Most importantly, this research is not easily replicable. It’s done by companies with ridiculous amounts of hardware, shifting petabytes of data they refuse to reveal because it’s a trade secret. If it’s not replicable, its scientific value is a little bit in question.
- The AI business is rotten to the core. AI businesses like to pretend they’re altruistic innovators who will take us to the Future. They’re a bunch of hypemen, slapping barely functioning components together to come up with Solutions to problems that aren’t even problems. Usually to replace human workers, in a way that everyone hates. Nothing must stand in their way: not copyright, not rules of user conduct, not the social or environmental impact they’re creating. If you try to apply even a little bit of reasonable regulation to this (“hey, maybe you should stop downloading our entire site every 5 minutes, we only update it, like, monthly, and, by the way, we never gave you permission to use this for AI training”), they immediately whinge about how you’re impeding the great march of human progress or some shit.
And I’m not worried about AI replacing software engineers. That is ultimately an ancient problem - software engineers come up with something that helps them, biz bros say “this is so easy to use that I can just make my programs myself, looks like I don’t need you any more, you’re fired, bye”, and a year later, the biz bros come back and say “this software that I built is a pile of hellish garbage, please come back and fix this, I’ll pay triple”. This is just Visual Basic for Applications all over again.
-
I feel like it’s more the sudden overnight hype about it than the technology itself. CEOs all around the world suddenly went “you all must use AI and shoehorn it into our product!”. People are fatigued from constantly hearing about it.
But I think people, especially devs, don’t like big changes (me included), which causes anxiety and then backlash. LLMs have caused quite a big change with the way we go about our day jobs. It’s been such a big change that people are likely worried about what their career will look like in 5 or 10 years.
Personally I find it useful as a pairing buddy, it can generate some of the boilerplate bullshit and help you through problems, which might have taken longer to understand by trawling through various sites.
It is really not a big change to the way we work unless you work in a language that has very low expressiveness like Java or Go, and we have been able to generate the boilerplate in those automatically for decades.
The main problem is that it rarely produces genuinely useful or beneficial results, yet everyone keeps telling us it does, without being able to point to a single GitHub PR or similar source as an example of a good piece of AI-created code that didn’t need heavy manual post-processing. It also completely ignores that reading and fixing other people’s (or worse, AI’s) code is orders of magnitude harder than writing the same code yourself.
It is really not a big change to the way we work unless you work in a language that has very low expressiveness like Java or Go
If we include languages like C#, JavaScript/TypeScript, Python etc., then that’s a huge portion of the landscape.
Personally I wouldn’t use it to generate entire features, as it will generally produce working but garbage code; it’s useful, though, for getting boilerplate done or asking why something isn’t working as expected. For example, asking it to write tests for a React component, it’ll get about 80-90% of it right, with all the imports, mocks etc.; you just need to write the actual assertions yourself (which we should be doing anyway).
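To make that concrete, here’s a minimal sketch of the kind of test scaffold I mean, using Jest and React Testing Library; the `Greeting` component and `useUser` hook are made-up names for illustration. The scaffold is the 80-90% an LLM tends to get right; the final assertion is the part you still write yourself:

```typescript
// Greeting.test.tsx - hypothetical component test (Jest + React Testing Library).
// Greeting and useUser are made-up names; assumes @testing-library/jest-dom is set up.
import React from "react";
import { render, screen } from "@testing-library/react";
import { Greeting } from "./Greeting";

// Mock the data-fetching hook the component depends on.
jest.mock("./useUser", () => ({
  useUser: () => ({ name: "Ada" }),
}));

describe("Greeting", () => {
  it("greets the current user", () => {
    render(<Greeting />);
    // Everything above is the scaffold; this assertion is the part
    // you still write (and verify) yourself:
    expect(screen.getByText("Hello, Ada")).toBeInTheDocument();
  });
});
```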
I gave Claude a try last week at building some AWS infrastructure in Terraform based off a prompt for a feature set and it was pretty bang on. Obviously it required some tweaks but it saved a tonne of time vs writing it all out manually.
I think a lot of it is anxiety: being replaced by AI, the continued enshittification of the services I loved, and the ever-present notion that AI is “the answer.” After a while, it gets old, and that anxiety mixes with annoyance: a perfect cocktail of animosity.
And AI stole em dashes from me, but that’s a me-problem.
Yeah, fuck this thing with em dashes… I used them constantly, but now, it’s a sign something was written by an LLM!??!?
Bullshit.
Fraking toaster…
To me, it’s not the tech itself, it’s the fact that it’s being pushed as something it most definitely isn’t. They’re grifting hard to stuff an incomplete feature down everyone’s throats, while using it to datamine the everloving spit out of us.
Truth be told, I’m genuinely excited about the concept of AGI, about the potential of what we’re seeing now. I’m also one who believes AGI will ultimately be a kind of progeny and should be treated as such, as a being in itself; and while we aren’t capable of creating that yet, we should still keep it in mind and mould our R&D around that principle. So, in addition to being disgusted by the current-day grift, I’m also deeply disappointed to see these people behaving this way - like madmen and cultists. And as a further note, looking at our species’ approach toward anything it sees as Other doesn’t really make me think humanity, as we are now, would make adequate parents for any type of AGI either.
The people who own/drive the development of AI/LLM/what-have-you (the main ones, at least) are the kind of people who would cause the AI apocalypse. That’s my problem.
Agree: the last people in the world who should be making AGI are the ones making it. Rabid techbro nazi capitalist fucktards who feel slighted they missed out on (absolute, not wage) slaves and want to make some. Do you want Terminators? Because that’s how you get Terminators. Something with so much positive potential that is also an existential threat needs to be treated with far more respect.
Said it better than I did, this is exactly it!
Right now, it’s like watching everyone cheer on as the obvious Villain is developing nuclear weapons.
I’ll just say I won’t grant any machine even the most basic human rights until every last person on the planet has access to enough clean water, food, shelter, adequate education, state-of-the-art health care, peace, democracy and enough freedom not to limit the freedom of others. That’s the lowest bar, and if I can think of other essential things every person on the planet needs, I’ll add them.
I don’t want to live in a world where we treat machines like celebrities while we don’t look after our own. That would be an express ticket toward disaster, like we’ve seen in many science fiction novels before.
Research toward AGI for AGI’s sake should be strictly prohibited until tech bros figure out how to feed the planet, so to speak. Let’s give them an incentive to use their disruptive powers for something good before they play god.
While I disagree with your hardline stance on prioritisation of rights (I believe any conscious/sentient being should be treated as such at all times, which implies full rights and freedoms), I do agree that we should learn to take care of ourselves before we take on the incomprehensible responsibility of developing AGI, yes.
To me, it is the loss of meaningful work.
A lot of people have complained “why take artists’ and coders’ jobs - make AI take the drudgery-filled work first and leave us the art and writing!” The problem is: automation already came for those jobs. In 90% of jobs today, the job CAN be automated with no AI needed. It just costs more to automate it than to pay a minimum-wage worker. That means anyone who works those jobs isn’t ACTUALLY doing those jobs. They are instead saving their employer the difference between their pay and the amount needed to automate it.
Before genAI came, there were a few jobs that couldn’t be automated. The people in them thought they not only had job security, but were the only people actually producing things of value. They were the ones who weren’t just saving a boss a buck. Then genAI came. Why write a book, code a program, or paint a painting if some program can do the same? Oh, yours is better? More authentic? It is surprising how much of the population doesn’t care. And AI is getting better, poisoned training data and the loss of its users’ critical thinking skills notwithstanding.
Soon, the only thing a worker can be proud of in their work is how much money they saved their employer; and for most people, that isn’t meaning enough. Something’s got to change.
Hello, recent Reddit convert here and I’m loving it. You even inspired me to figure out how to fully dump Windows and install LineageOS.
I am truly impressed that you managed to replace a desktop operating system with a mobile OS that doesn’t even come in an x86 variant (Lineage, that is; I’m aware Android has been ported).
I smell bovine faeces. Or are you, in fact, an LLM?
He dumped Windows (for Linux) and installed LineageOS (on his phone).
OP likely has two devices.
Calm down. They never said anything about the two things happening on the same device.
Lineage sounds a lot like “Linux.” Take it easy on the lad.
Could also be two separate things? I have a) dumped Windows and b) installed Lineage.
Why won’t they tell us what they replaced Windows with, or on what they installed Lineage, though? The more people speculate, the more questions I have.
Very well could be!
The main reason they invoke an emotional response: they stole everything from us (humans) illegally and then used it to make a technology that aims to replace us. I don’t like that.
The second part is that I think they are shit at what people are using them for. They seem to provide great answers, but they are far too often completely wrong and the user doesn’t know. It’s also annoying that they are being shoved into everything.
Google AI recently told me that capybaras and caimans have a symbiotic relationship where the caimans protect them so they can eat their feces.