• 0 Posts
  • 265 Comments
Joined 2 years ago
Cake day: May 29th, 2024


  • It’s pretty well known that “lines of code” is a horrible metric for judging programmers. It seems “number of new projects” is pretty similar, though at a higher level of abstraction.

    Unfortunately that metric is applied to a lot more than just programmers, and I think getting rid of it would involve completely restructuring the type of activity our society is oriented around, and would run up against the life philosophy of the people in charge.

    Of course I’m not against progress, but I’m talking about executives that don’t plan beyond the next quarter, politicians that don’t plan beyond the next election cycle, the endless pursuit of growth, and the inability of market economies to cope with the fact that sometimes inaction is more advantageous than action. All of this encourages endlessly churning out ‘new’ things, without designing those things to last or putting in the effort to maintain them.




  • The thing about this perspective is that I think it’s actually overly positive about LLMs, as it frames them as just the latest in a long line of automations.

    Not all automations are created equal. For example, compare using a typewriter to using a text editor. Besides a few details about the ink ribbon and movement mechanisms, you really haven’t lost much in the transition. This is despite the fact that the text editor can be highly automated with scripts and hot keys, allowing you to manipulate even thousands of pages of text at once in certain ways. Using a text editor certainly won’t make you forget how to write the way using ChatGPT will.

    I think the difference lies in the relationship between the person and the machine. To paraphrase Cathode Ray Dude, people who are good at using computers deduce the internal state of the machine, mirror (a subset of) that state as a mental model, and use that model to plan out their actions to get the desired result. People who aren’t good at using computers generally don’t do this, and might not even know how you would start trying to.

    For years, ‘user friendly’ software design has catered to that second group, as they are both the largest contingent of users and the ones that need the most help. To do this, software vendors have generally done two things: try to move the necessary mental processes from the user’s brain into the computer, and hide the computer’s internal state (so that it’s not implied that the user has to understand it, so that a user who doesn’t know what they’re doing won’t do something they’ll regret, etc.). Unfortunately this drives that first group of people up the wall. Not only does hiding the internal state of the computer make it harder to deduce, but every “smart” feature they add to try to move this mental process into the computer itself only makes the internal state more complex and harder to model.

    Many people assume that if this is the way you think about software you are just an elitist gatekeeper, and you only want your group to be able to use computers. Or you might even be accused of ableism. But the real reason is what I described above, even if it’s not usually articulated in that way.

    Now, I am of the opinion that the ‘mirroring the internal state’ method of thinking is the superior way to interact with machines, and the approach to user friendliness I described has actually done a lot of harm to our relationship with computers at a societal level. (This is an opinion I suspect many people here would agree with.) And yet that does not mean that I think computers should be difficult to use. Quite the opposite, I think that modern computers are too complicated, and that in an ideal world their internal states and abstractions would be much simpler and more elegant, but no less powerful. (Elaborating on that would make this comment even longer though.) Nor do I think that computers shouldn’t be accessible to people with different levels of ability. But just as a random person in a store shouldn’t grab a wheelchair user’s chair handles and start pushing them around, neither should Windows (for example) start changing your settings on updates without asking.

    Anyway, all of this is to say that I think LLMs are basically the ultimate expression of that approach to ‘user friendliness’. They try to move more of your thought process into the machine than ever before, their internal state is more complex than ever before, and it is also more opaque than ever before. They also reflect certain values endemic to the corporate system that produced them: that the appearance of activity is more important than the correctness or efficacy of that activity. (That is, again, a whole other comment though.) The result is that they are extremely mind-numbing, in the literal sense of the phrase.


  • The absolute epitome of non-AI slop has got to be these creepy videos that were on YouTube back in ~2017:

    https://en.wikipedia.org/wiki/Elsagate

    It’s exactly the kind of thing you’d expect to be the product of AI, but it actually came before generative AI. I think a lot of it was procedurally generated though, using scripts to control 3D software and editing software, so that different character models could be used in the same scenes and different scenes could be strung together to make each video.

    I think a similar thing happens with those shovelware Android games. There are so many that are just the same game with (incredibly poorly done) asset swaps that I think they must make a game once and then automatically generate a thousand-plus variations on it, something like the sketch below.
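    To be clear, this is just my guess at how those pipelines work, but the basic combinatorial trick is easy to sketch. Everything below is made up for illustration; the real thing would presumably be driving 3D software and video editors from scripts rather than printing dictionaries:

        import itertools

        # Hypothetical asset pools. A real pipeline would be swapping character
        # models and pre-animated scenes inside 3D and video editing software.
        CHARACTERS = ["character_a", "character_b", "character_c"]
        SCENES = ["scene_playground", "scene_dentist", "scene_kitchen"]
        SONGS = ["song_1", "song_2"]

        def make_video(cast, timeline, song):
            """Pretend to assemble one video from pre-built pieces."""
            return {
                "cast": cast,
                "timeline": timeline,
                "audio": song,
                "title": f"{cast[0]} {timeline[0]} compilation",
            }

        # Every combination of cast, scene order, and song becomes a "new" video.
        videos = [
            make_video(list(cast), list(order), song)
            for cast in itertools.permutations(CHARACTERS, 2)
            for order in itertools.permutations(SCENES, 2)
            for song in SONGS
        ]
        print(len(videos), "videos from a handful of hand-made assets")

    The marginal cost of each extra video (or game) is close to zero, which is exactly what makes flooding a platform with them attractive.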




  • Hallucinations are an intrinsic part of how LLMs work. OpenAI, literally the people with the most to lose if LLMs aren’t useful, has admitted that hallucinations are a mathematical inevitability, not something that can be engineered around. On top of that, it’s been shown that for things like mathematical proof finding, switching to more sophisticated models doesn’t make them more accurate; it just makes their arguments more convincing.

    Now, you might say “oh, but you can have a human in the loop to check the AI’s work”, but for programming tasks it’s already been found that using LLMs makes programmers less productive. If a human needs to go over everything an AI generates, and reason about it anyway, that’s not really saving time or effort. Now consider that as you make the LLM more complex, having it generate longer and more complicated blocks of text, its errors also become harder to detect. Is that not just shuffling around the necessary human brainpower for a task instead of reducing it?

    So, in what field is this sort of thing useful? At one point I was hopeful that LLMs could be used in text summarization, but if I have to read the original text anyway to make sure that I haven’t been fed some highly convincing falsehood then what is the point?

    Currently I’m of the opinion that we might be able to use specialized LLMs as a heuristic to narrow the search tree for things like SAT solvers and answer set generators, but I don’t have much optimism for other use cases.
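    To make that concrete, here’s a minimal sketch of the kind of thing I mean: a toy backtracking SAT solver (DPLL, minus unit propagation and all the other real-solver machinery) where the branching heuristic is just a pluggable scoring function. The dumb_score stub stands in for the hypothetical specialized model; all the names here are mine, not from any real solver:

        from typing import Callable, Dict, List, Optional

        Clause = List[int]          # e.g. [1, -2] means (x1 OR NOT x2)
        Assignment = Dict[int, bool]

        def simplify(clauses: List[Clause], var: int, value: bool) -> Optional[List[Clause]]:
            """Apply one assignment; return None on a conflict (empty clause)."""
            out: List[Clause] = []
            for clause in clauses:
                if (var if value else -var) in clause:
                    continue                              # clause satisfied, drop it
                reduced = [lit for lit in clause if abs(lit) != var]
                if not reduced:
                    return None                           # clause falsified
                out.append(reduced)
            return out

        def dpll(clauses: List[Clause], assignment: Assignment,
                 score: Callable[[int, List[Clause]], float]) -> Optional[Assignment]:
            if not clauses:
                return assignment                         # everything satisfied
            # The heuristic only picks which variable to branch on first.
            # Correctness still comes from exhaustive backtracking, so a bad
            # guess costs time, not soundness; that's why a fallible model
            # is tolerable in this slot.
            variables = {abs(lit) for clause in clauses for lit in clause}
            var = max(variables, key=lambda v: score(v, clauses))
            for value in (True, False):
                reduced = simplify(clauses, var, value)
                if reduced is not None:
                    result = dpll(reduced, {**assignment, var: value}, score)
                    if result is not None:
                        return result
            return None

        # Stand-in scorer: just counts occurrences. The idea would be to swap
        # this for a model trained to guess which branch is most promising.
        def dumb_score(v: int, clauses: List[Clause]) -> float:
            return sum(abs(lit) == v for clause in clauses for lit in clause)

        print(dpll([[1, -2], [-1, 2], [2, 3]], {}, dumb_score))

    The important property is that the model’s output is only ever advisory; the solver still verifies everything itself, so a hallucinated suggestion can waste time but can’t produce a wrong answer.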







  • Isn’t this an interesting property of market economies?

    Software and silicon chip manufacturing have literally nothing to do with food production, and yet a ‘disaster’ (i.e. going back to the status quo as of a few years ago) in that industry will affect your ability to eat. Nothing has happened to the farmers or their fields, or to the logistics system that moves food from one place to another, and yet somehow things suddenly can’t find their way from where they are produced to where they are needed.

    Remember, this is supposed to be the most efficient way to allocate resources.



  • The Neverhood literally consists of photographs, it is as photorealistic as it is possible to be, and yet it has a very strong art direction. More modern titles like The Midnight Walk, Keeper, and Felt That Boxing are similar, though they are actually rendered rather than consisting of photographs and video. On the other side of the coin there are some visual effects that are quite abstracted from reality but are also very GPU intensive, showing that just because an image doesn’t look like a photo doesn’t mean it’s necessarily easy to render (note: that video is a human-authored algorithm, not AI, though they do compare it to AI video generation).

    I used to hold the same opinion you express, but I think it was only ever really true in practice during the brown era, not before or after. In fact, games like Thief 1&2, Half-Life 1&2, and The Chronicles of Riddick were trying to be as photorealistic as possible at the time of their release, but are pretty commonly praised for their “stylization” today. For example, compare the deep blacks and stark contrast of stencil shadows with what you get from more modern lighting. I am reminded of a Brian Eno quote:

    Whatever you now find weird, ugly, uncomfortable and nasty about a new medium will surely become its signature. CD distortion, the jitteriness of digital video, the crap sound of 8-bit - all of these will be cherished and emulated as soon as they can be avoided.

    We are even seeing some nostalgia now for the pissfilter era, though that’s not an enthusiasm I share. I suspect that TAA ghosting and ray tracing artifacts, which are currently much hated, will eventually be recreated in a controlled way as a stylistic choice. In particular, I think Control will eventually be praised for the way it basically incorporated ray tracing artifacts into its art style, with its sparkly mineral walls and dreamlike atmosphere.