ChatGPT has meltdown and starts sending alarming messages to users::AI system has started speaking nonsense, talking Spanglish without prompting, and worrying users by suggesting it is in the room with them
True. Maybe they just need more error correction. Like spending more energy questioning whether what you say is true. Right now LLMs seem to just vomit out whatever they thought up, with no consideration of whether it makes sense.
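Roughly what I mean by "error correction" is a second pass where the model checks its own draft before it gets shown to anyone. A minimal sketch of that idea, where `ask_model` is a hypothetical stand-in for whatever API you'd actually call, not a real library function:

```python
def ask_model(prompt: str) -> str:
    """Placeholder for a real LLM call; swap in your provider's client here."""
    raise NotImplementedError("wire this up to an actual model API")


def answer_with_self_check(question: str, max_retries: int = 2) -> str:
    """Draft an answer, then ask the model to verify it before returning."""
    draft = ask_model(question)
    for _ in range(max_retries):
        # Second pass: question the draft instead of returning the first
        # thing generated.
        verdict = ask_model(
            f"Question: {question}\nDraft answer: {draft}\n"
            "Is this answer factually correct and coherent? Reply YES or NO, "
            "and if NO, give a corrected answer."
        )
        if verdict.strip().upper().startswith("YES"):
            return draft
        # Treat whatever follows the NO as a corrected draft and re-check it.
        draft = verdict.partition("NO")[2].strip() or draft
    return draft
```

Obviously it costs extra compute per answer, which is probably why it isn't the default.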
They’re like an annoying friend who just can’t shut up.
They aren’t thinking though. They’re making connections within the training data they’ve processed.
This is really clear when they’re asked to write code with too vague a prompt.
Maybe feeding them through a primary school curriculum (including essays and tests) would be helpful, but I don’t think the language models really sort knowledge yet.