Scraping is legal
Have you been following any of the court battles involving LLMs lately?
The New York Times suing OpenAI. Getty Images suing Stability AI. Sarah Silverman and George R.R. Martin suing OpenAI.
All of those cases involve data that has been scraped. (In the latter two cases, the memoir/novels were scraped from excerpts and archives found online).
It’s too early to say with complete certainty that it’s all legal (the appeal processes haven’t all finished yet), but at this point it looks like using scraped and copyrighted data to train LLMs is legal. Even if it turns out not to be legal, it’s very clear that nobody is shying away from doing it; the court records themselves establish, as a matter of fact, that it’s been happening for years.
Everything you’ve written is just fantasy. We have a lot of reality that contradicts it. Every LLM company has relied primarily on scraped data (which we know to be completely legal) and has incorporated copyrighted and scraped data into its data sets (which is still a legal grey area, but is happening anyway).
The NYT hasn’t actually won that case yet, so it’s pointless to bring up. OpenAI has publicly stated that the NYT has heavily misrepresented its findings.
OpenAI’s value would plummet if it gained a reputation for training its AI on illegal material; investors would drop it in a heartbeat.
This is just a simple fact: LLM providers’ reputations are heavily staked on the legality of their data.
So far the courts have ruled in these companies’ favor.
But it’s extremely likely that illegally scraped data from Reddit would not pass the sniff test, and it would devastate an offending company’s reputation.
If you don’t understand why, you need to brush up on why these LLM services are worth so much, who is using them, and for what. Once you understand that, it becomes extremely apparent why legally owning the entire history of every Reddit post ever would be extremely valuable, and why a $5 billion price tag is actually not that crazy.