I noticed those language models don’t work well for articles with dense information and complex sentence structure. Sometimes they forget the most important point.
They are useful as a TLDR but shouldn’t be taken as fact, at least not yet and for the foreseeable future.
A bit off topic, but I’ve read a comment in another community where someone asked ChatGPT something and confidently posted the answer. Problem: the answer was wrong. That’s why it’s so important to mark AI/LLM-generated texts (which the TLDR bots do). Not calling ML and LLMs “AI” would also help. (I went even more off topic.)
I think the Internet would benefit a lot if people would back up their information with sources!
Yeah, that’s right. Having to post sources rules out usage of LLMs for the most part, since most of them do a terrible job of providing them - even when the information is correct for once.