yes, I am very sure; I work directly with this tech. It’s very good at producing something that looks impressive but falls apart under any level of scrutiny.
My favourite part right now is that AI doesn’t actually translate; it just constantly dreams up text that looks like what you might expect, and hopefully the model it was trained on steers that text towards being valid.
but it often isn’t valid, so it will hallucinate something totally untrue, or just absolutely made up, and then make all the following text about that thing. You might have some text about the fall of the Soviet Union, but the AI hallucinates the existence of a clown at some point, maybe because of some bias in the model, and now suddenly the fall of the Soviet Union was caused by a vast clown plot.
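To make the “dreaming up plausible text” point concrete, here’s a toy sketch (a bigram chain, vastly simpler than a real LLM, with a made-up two-sentence corpus). The mechanism is the same in spirit: the model only ever asks “what plausibly comes next?”, so it happily stitches fluent fragments together with no regard for whether the result is true.

```python
import random
from collections import defaultdict

# Toy illustration (NOT a real LLM): a bigram model that only predicts
# "what word plausibly follows the last one". It has no notion of truth,
# which is why fluent-but-wrong output is a natural failure mode.
corpus = (
    "the fall of the soviet union was caused by economic stagnation "
    "the fall of the wall was caused by popular protest"
).split()

# Count which word follows which in the (made-up) training text.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=10, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))
```

Depending on the seed, this can emit mashups like “the fall of the wall was caused by economic stagnation”: locally plausible at every step, globally false, which is roughly what hallucination looks like.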
Often it just gets totally screwed over by its own biases, like counting. god forbid your input text has something to do with counting; the AI will get stuck counting things that don’t exist, it trips over that kind of thing so easily
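One commonly cited reason for the counting failures above: LLMs operate on subword tokens, not characters. The split below is purely hypothetical (real tokenizers differ), but it shows the mismatch: the model is asked about letters it never directly sees, while plain code counts them exactly.

```python
# Hypothetical subword split of "strawberry" (real tokenizers differ) --
# the model sees opaque chunks, not individual letters, so "how many r's?"
# asks about units it never directly observes.
tokens = ["str", "aw", "berry"]
word = "".join(tokens)

print(word)             # the reassembled word
print(word.count("r"))  # exact and trivial in code
```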
and all of this misses the fact that the nuance is lost, and the institutional knowledge with it.
To be absolutely clear: the current state of AI is very good at fooling middle managers and decision makers into thinking it is good, because it’s built to look good. but it’s not even 5% of the quality we get when real people do the work, and there is a mountain to climb to get it there.
My guess is that over the next few years content quality online is going to go to shit and that will negatively impact those companies and sectors that utilize AI foolishly.
Hopefully we enter Gartner’s trough of disillusionment and companies back off from wholesale replacing humans with LLMs and recognize that in most cases they aren’t fit for purpose.
I think AI will have to go (far?) beyond LLMs to have any chance of replacing human artists and writers with output of somewhat reasonable quality. (I may be wrong; maybe it is simply a matter of training very topic-specific LLMs)
Meanwhile, if the impact isn’t sufficient, the greedy and moronic will plow forward to the detriment of us all. Writers will struggle to support themselves and the Internet will become a lot less useful. It may be a very rough decade or two.
Are you sure about that? With the advances in AI in the last year, something like that seems trivial
I’m hoping this’ll lead to a renewed appreciation of expertise
This could come only from someone who hasn’t played around with any AI
Not at all, localization is very different from simple translation