• 0 Posts
  • 47 Comments
Joined 1 year ago
Cake day: July 6th, 2023

  • Which end? The main story is just a narrative device; in fact, you shouldn’t really obey the narrator at all. Calling any ending “The End” doesn’t make sense in the context of the game, really. Unless you just broke out of the mind control facility three times and then called it quits? That ending is supposed to be unenticing so that you try literally anything else before putting the game down. I think the going-insane ending sticks with me the most, although the dev commentary in the recent release is fun too.




  • A lot of drugs cause permanent problems when abused, and are still prescribed. Testing is needed to figure out whether there’s safe dosing and whatnot. Worse, a safe dosage for one person may be incredibly unsafe for another, just like with depression meds, which can cause permanent mental issues (in addition to depression) at normally prescribed, “safe” dosages. This is why honest discussion and ongoing check-ins with your doctor are vital during any prescription change. Hell, penicillin almost killed my mom, and that’s relatively safe unless you have an allergic reaction.

    Testing is definitely hard with drugs that have non-medical and very obvious side effects. Hopefully there’s an interesting breakthrough in understanding the mechanisms so we can make safe PTSD-treating meds, but something so drastically painful to the person having it may not have a safe cure, because the systems that go haywire are so ingrained in the preservation systems of our brains.

    Brains are weird. Any tampering is possibly dangerous.



  • I love Discord, for what it’s for: quick synchronous talks you will never refer back to again. So not software development, where indexable logs of information are necessary. I know Discord has search, and now some form of forum. But every Discord I’ve been to for development (especially modding communities) has a huge corpus of chat logs, and people get annoyed if you ask a question that was answered once, a long time ago, in extremely common language. That makes it nearly impossible to search for, because the keywords have been used out of context of your question hundreds of times since the question was asked.

    If dev communities used Discord’s forum mode more, it wouldn’t always solve the problem, but it’d be much better. There are better places than Discord for these things, but I have been trying to meet people where they’re established.



  • And I wouldn’t call a human intelligent if TV were anything to go by. Unfortunately, humans do things they don’t understand, constantly and confidently. It’s commonplace; you could call it “fake it until you make it,” but a lot of the time it’s more that people think they understand something.

    LLMs do things confident that they will satisfy their fitness function, but they do not have the ability to see farther than that at this time. Just sounds like politics to me.

    I’m being a touch facetious, of course, but the idea that the line has to be drawn at that term, intelligence, is a bit too narrow for me. I prefer the terms Artificial Narrow Intelligence and Artificial General Intelligence, as they are better defined. Narrow refers to a system designed for one task and one task only, such as LLMs, which are designed to minimize a loss function of people accepting the output as “acceptable” language, a highly volatile target. AGI, or Strong AI, is AI that can continuously generalize outside of its targeted fitness function. I don’t mean a computer vision neural network that can classify anomalies as something the car should stop for. That’s out-of-distribution reasoning, sure, but if the network can reasonably determine what falls within the bounds of its loss function, then anything that falls significantly outside can be easily flagged. That’s not true generalization, more domain recognition, but it is important in a lot of safety-critical applications.

    This is an important conversation to have, though. The way we use language is highly personal, shaped by our experiences, and that makes coming to an understanding in natural languages hard. Constructed languages aren’t the answer, because any language in use undergoes change. If the term AI is to change, people will have to understand that the scientific term will not, and pop-sci magazines WILL get harder to understand. That’s why I propose splitting the ideas in a way that allows for more nuanced discussion, instead of redefining terms present in thousands of groundbreaking research papers over a century, which would make research a matter of historical linguistics as well as one of mathematical understanding. Jargon is already hard enough as it is.
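    The domain-recognition idea above can be sketched in a few lines. This is a hypothetical illustration, assuming Python; the scores and threshold are made up for the example and don’t come from any real model:

    ```python
    # Sketch of "domain recognition": flag an input as out-of-distribution
    # when even its best class score falls well below what the model
    # produces for in-distribution inputs. Threshold is illustrative only.

    def flag_out_of_distribution(class_scores, threshold=0.5):
        """Return True when no class fits well, i.e. the input likely
        falls outside the training distribution and should be escalated."""
        return max(class_scores) < threshold

    # In-distribution input: one class is confidently matched, no flag.
    assert not flag_out_of_distribution([0.05, 0.9, 0.05])
    # Anomaly: no class fits, so a safety-critical system should stop.
    assert flag_out_of_distribution([0.3, 0.35, 0.35])
    ```

    That’s all a threshold check is: it doesn’t generalize to the new input, it just recognizes that the input isn’t in its domain.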




  • … Alexa literally is AI? You mean to say that Alexa isn’t AGI. AI is the taking of inputs and outputting something rational. The first AIs were just large if-else constructions built on First Order Logic. Later AI used approximate or brute-force state calculations, such as probabilistic trees or minimax search. AI controls how people’s lines are drawn in popular art programs such as Clip Studio when they use the helping functions. But none of these AIs could tell me something new, only what they’re designed to compute.

    The term AI is a lot more broad than you think.
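    The minimax search mentioned above can be sketched on a toy game. This is a minimal illustration, assuming Python; the tiny Nim variant here is my own example, not from any particular program:

    ```python
    # Minimax sketch: exhaustively evaluate a two-player zero-sum game tree.
    # Toy game (a Nim variant): players alternately take 1 or 2 stones;
    # whoever takes the last stone wins. Score is +1 if the maximizing
    # player wins, -1 if the minimizing player wins.
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def minimax(stones, maximizing):
        """Best achievable score from this state for the maximizing player."""
        if stones == 0:
            # The player who just moved took the last stone and won.
            return -1 if maximizing else 1
        scores = [minimax(stones - take, not maximizing)
                  for take in (1, 2) if take <= stones]
        return max(scores) if maximizing else min(scores)
    ```

    This is exactly the “brute-force state calculation” sense of AI: the program computes the designed objective perfectly but can’t tell you anything outside it.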