I hear people saying things like “chatgpt is basically just a fancy predictive text”. I’m certainly not in the “it’s sentient!” camp, but it seems pretty obvious that a lot more is going on than just predicting the most likely next word.

Even if it’s predicting word by word within a bunch of constraints and structures inferred from the question / prompt, that’s pretty interesting. Tbh, I’m more impressed by ChatGPT’s ability to appear to “understand” my prompts than I am by the quality of the output. Even though its writing is generally a mix of bland, obvious and inaccurate, it mostly does provide a plausible response to whatever I’ve asked / said.
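To make the “fancy predictive text” idea concrete, here’s a minimal toy sketch of next-word prediction. This is just a bigram frequency model (count which word follows which, then greedily pick the most frequent successor); real LLMs replace the counting with a neural network over tokens and a huge training corpus, but the generation loop has the same shape. The corpus and function names here are made up for illustration.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus for the sake of the example.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which (a bigram model).
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

def generate(start, n=4):
    """Greedily extend `start` one predicted word at a time."""
    out = [start]
    for _ in range(n):
        out.append(predict_next(out[-1]))
    return " ".join(out)
```

The interesting part of the debate is everything this sketch leaves out: a model like ChatGPT conditions each prediction on the entire prompt and everything generated so far, which is where the apparent “understanding” of constraints and structure comes from.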

Anyone feel like providing an ELI5 explanation of how it works? Or any good links to articles / videos?

  • Azzu@lemm.ee · 10 months ago

    To be fair / to elaborate on that point, your dog is much, much closer to human than ChatGPT is; we share something like 84% of our DNA. Most of the same basic emotions (hunger, fear, desire, etc.) are present, as well as the ability to learn and communicate.

    Your dog may not be “scheming”, because it lacks the ability to plan very far in the future, but it definitely has the intention of getting your attention and tries to figure out in the moment how to do it. Same as a human kid might do.

    It is incredibly valuable to act like a dog is human, because dogs actually do share a lot of characteristics. Not all, of course; it’s still wrong to fully assume a dog is human, but as a quick heuristic it’s still valuable a lot (84%? :D) of the time.

    • huginn@feddit.it · 10 months ago

      Sure: I get that they’re not exactly the same. ChatGPT is orders of magnitude further removed from humanity than a dog is, but it’s a daily example of anthropomorphic bias that’s relatable and easy to understand. I was just using it as an example.