Philosophical masturbation, based on a poor understanding of an already-solved issue.
We know for a fact that a machine learning model does not even know what a rosebush is. It only knows the colours of pixels that usually go into a photo of one. And even then, it doesn’t even know the colours - only the bit values that correspond to them.
That is it.
Opinions and beauty are not vague, nor are free will and trying, especially in this context. You only wish them to be for the sake of your argument.
An opinion is a value judgment. AIs don’t have values, and we have to deliberately restrict them to stop actual chaos happening.
Beauty is, for our purposes, something that the individual finds worthy of viewing and creating. Only people can find things beautiful. Machine learning algorithms are only databases with complex retrieval systems.
Free will is also quite obvious in context: being able to do something of your own volition. AIs need exact instructions to get anything done. They can’t make decisions beyond what you tell them to do.
Trying? I didn’t even define that as human-specific.
I couldn’t have put it better myself. You’ve said lots of philosophical words without actually addressing any of my questions:
How do you distinguish between a person who really understands beauty, and someone who has enough experience with things they’ve been told are beautiful to approximate?
How do you distinguish between someone with no concept of beauty, and someone who sees beauty in drastically different things than you?
How do you distinguish between the deviations from photorealism due to imprecise technique, and deviations due to intentional stylistic impressionism?
Every step of the way, a machine learning model is only making guesses based on previous training data. And not what the data actually is, but the pieces of it. Do green pixels normally go here? Does the letter “k” go here?
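The "guessing based on previous training data" described above can be reduced to a toy sketch. This is a hypothetical illustration of the counting-and-lookup idea (a character-bigram model), not any specific system's implementation:

```python
# Toy character-bigram model: "does the letter 'k' go here?" reduced to
# counting which characters followed which in the training text.
# A hypothetical illustration of guessing-from-training-data, not a real model.
from collections import Counter, defaultdict


def train_bigrams(corpus: str) -> dict:
    """For each character, count which characters followed it in the corpus."""
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1
    return follows


def p_next(follows: dict, prev: str, candidate: str) -> float:
    """Estimated probability that `candidate` follows `prev`, from counts alone."""
    counts = follows[prev]
    total = sum(counts.values())
    return counts[candidate] / total if total else 0.0


model = train_bigrams("the quick brown fox kicks back")
# "Does the letter 'k' go here?" is nothing more than a frequency lookup:
print(p_next(model, "c", "k"))  # how often 'k' followed 'c' in training
```

The model has no concept of words or meaning; it only knows how often one symbol followed another in its training data, which is exactly the kind of piecewise guessing described above.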
What evidence do you have that human cognition is functionally different? I won’t dispute that humans are more sophisticated, for sure. But what justification do you have to claim that humans aren’t just very, very good at making guesses based on previous training data?
I’m sorry that you’re struggling. Perhaps if you answered any of the questions I posed (twice) in order to frame the topic in a concrete way, we could have a more productive conversation that might provide elucidation for one, or both, of us. I fail to see how continuing to ignore those core questions, and instead focusing on questions that weren’t asked, will help either one of us.
Did you really just pull an “I know you are, but what am I?”
I’m not gonna entertain your attempt to pretend very concrete concepts are woollier and more complex than they are.
If you truly believe machine learning has even begun to approach being compared to human cognition, there is no speaking to you about this subject.
https://www.youtube.com/watch?v=EUrOxh_0leE&pp=ygUQYWkgZG9lc24ndCBleGlzdA%3D%3D
I’m really struggling to believe that you actually think this.
I don’t make a habit of answering irrelevant red herrings.