Today’s most advanced AI models have many flaws, but decades from now, they will be recognized as the first true examples of artificial general intelligence.
I kind of regret learning ML sometimes. Being one of the 10 people per km² who understand how it works is so annoying. It’s just a fancy mirror ffs, stop making weird faces at it, you baboons!
The best part is it’s not even that complicated of a thing conceptually. Like you don’t need to study it to kind of understand the idea and some of its limitations.
Do you really understand how it works? What would you call a neural network with mirror neurons primed to react to certain stimulus patterns as the network gets trained… a mirror, or a baboon?
What do you call a neuron “that reacts both when a particular action is performed and when it is only observed”? Current LLMs are made exclusively of mirror neurons, since their output (what they perform) is the same action as their input (what they observe).
I can’t even parse what you mean when you say their input is the same as their output; that would imply they don’t transform their input, which would defeat their purpose. This is nonsense.
ANNs don’t have “mirror neurons” lol
Lol nice way to say you don’t understand shit about it :D
I’m afraid that’s not correct, but clearly this discussion is over.
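For what it’s worth, the crux of the exchange above is whether “their output is the same action as their input” implies that no transformation happens. The sketch below is a hypothetical toy bigram model (nothing like a production LLM, and not something either commenter wrote); it only illustrates that an autoregressive model’s input and output can live in the same token vocabulary while the mapping is still a genuine transformation: a token sequence goes in, a probability distribution over next tokens comes out.

```python
# Hypothetical toy bigram "language model", for illustration only.
# Point under dispute: inputs and outputs share the same token vocabulary,
# yet the model transforms its input rather than echoing it back.

from collections import Counter, defaultdict

VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

def train_bigram(corpus):
    """Count token-to-next-token transitions (a stand-in for real training)."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        for prev, nxt in zip(sentence, sentence[1:]):
            counts[prev][nxt] += 1
    return counts

def next_token_distribution(counts, context):
    """Map a token sequence (input) to a distribution over VOCAB (output)."""
    last = context[-1]
    total = sum(counts[last].values()) or 1
    return {tok: counts[last][tok] / total for tok in VOCAB}

corpus = [["the", "cat", "sat", "on", "the", "mat", "<eos>"]]
counts = train_bigram(corpus)

# Input and output live in the same token space, but the output is not the
# input: it is a probability distribution over what comes next.
print(next_token_distribution(counts, ["the", "cat"]))
# {'the': 0.0, 'cat': 0.0, 'sat': 1.0, 'on': 0.0, 'mat': 0.0, '<eos>': 0.0}
```

Read that way, the two commenters may simply mean different things by “the same”: the token space is shared (true), but the mapping is not the identity (so “no transformation” does not follow).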