• irmoz@reddthat.com · 7 months ago

    Philosophical masturbation, based on a poor understanding of an already-solved issue.

    We know for a fact that a machine learning model does not even know what a rosebush is. It only knows the colours of pixels that usually go into a photo of one. And even then, it doesn’t know the colours themselves - only the bit values that correspond to them.
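The point about bit values can be sketched concretely. A minimal illustration (hypothetical data, not any specific model’s pipeline): what a human calls “red” reaches a network only as normalised numbers.

```python
# Illustrative sketch: to a model, an image is only numbers.
# Hypothetical 2-pixel image, each pixel as RGB byte values.
image = [
    [255, 0, 0],   # what a human would call "red"
    [0, 128, 0],   # what a human would call "green"
]

# Typical preprocessing: scale the raw bit values into [0, 1] floats.
# The model receives these floats - it is never handed the concept "red".
model_input = [[channel / 255 for channel in pixel] for pixel in image]
print(model_input[0])  # [1.0, 0.0, 0.0]
```

Everything downstream operates on those floats; any association with the word “rosebush” is just a statistical pairing learned from captioned training data.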

    That is it.

    Opinions and beauty are not vague, nor are free will and trying, especially in this context. You only wish them to be for the sake of your argument.

    An opinion is a value judgment. AIs don’t have values, and we have to deliberately restrict them to stop actual chaos from happening.

    Beauty is, for our purposes, something that the individual finds worthy of viewing and creating. Only people can find things beautiful. Machine learning algorithms are only databases with complex retrieval systems.

    Free will is also quite obvious in context: being able to do something of your own volition. AIs need exact instructions to get anything done. They can’t make decisions beyond what you tell them to do.

    Trying? I didn’t even define that as human-specific.

  • agamemnonymous@sh.itjust.works · 7 months ago

      Philosophical masturbation

      I couldn’t have put it better myself. You’ve said lots of philosophical words without actually addressing any of my questions:

      How do you distinguish between a person who really understands beauty, and someone who has enough experience with things they’ve been told are beautiful to approximate?

      How do you distinguish between someone with no concept of beauty, and someone who sees beauty in drastically different things than you?

      How do you distinguish between the deviations from photorealism due to imprecise technique, and deviations due to intentional stylistic impressionism?

    • irmoz@reddthat.com · 7 months ago

        I couldn’t have put it better myself. You’ve said lots of philosophical words without actually addressing any of my questions:

        Did you really just pull an “I know you are, but what am I?”

        I’m not gonna entertain your attempt to pretend very concrete concepts are woollier and more complex than they are.

        If you truly believe machine learning has even begun to approach human cognition, there is no speaking to you about this subject.

        https://www.youtube.com/watch?v=EUrOxh_0leE&pp=ygUQYWkgZG9lc24ndCBleGlzdA%3D%3D

        Every step of the way, a machine learning model is only making guesses based on previous training data - and not guesses about what the data actually is, but about pieces of it. Do green pixels normally go here? Does the letter “k” go here?
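That guessing process can be sketched with a toy bigram model - a deliberately crude stand-in for a real network, which learns far richer statistics but is still, at bottom, answering “what usually comes next?”:

```python
# Illustrative toy example, not a real language model:
# count which word follows which in the training text, then
# "generate" by emitting the most frequent continuation seen.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat because the cat was tired"

# Build a bigram table: for each word, count what followed it.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def guess_next(word):
    """Return the word that most often followed `word` in training."""
    return follows[word].most_common(1)[0][0]

print(guess_next("the"))  # "cat" - the most frequent continuation seen
```

A real model replaces the lookup table with a learned function over much longer contexts, but the output is still a guess conditioned on training data, not a claim the system understands.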

      • agamemnonymous@sh.itjust.works · 7 months ago

          What evidence do you have that human cognition is functionally different? I won’t dispute that humans are more sophisticated, for sure. But what justification do you have for claiming that humans aren’t just very, very good at making guesses based on previous training data?

            • agamemnonymous@sh.itjust.works · 7 months ago

              I’m sorry that you’re struggling. Perhaps if you answered any of the questions I posed (twice) in order to frame the topic in a concrete way, we could have a more productive conversation that might provide elucidation for one, or both, of us. I fail to see how continuing to ignore those core questions, and instead focusing on questions that weren’t asked, will help either one of us.