Thousands of authors demand payment from AI companies for use of copyrighted works

Thousands of published authors are requesting payment from tech companies for the use of their copyrighted works in training artificial intelligence tools, marking the latest intellectual property critique to target AI development.
How can they prove that not some abstract public data has been used to train algorithms, but their particular intellectual property?
Well, if you ask e.g. ChatGPT for the lyrics to a song or page after page of a book, and it spits them out 1:1 correct, you could assume that it must have had access to the original.
Or at least excerpts from it. But even then, it’s one thing for a person to put up a quote from their favourite book on their blog, and a completely different thing for a private company to use that data to train a model, and then sell it.
deleted by creator
Yeah which I still feel is utterly ridiculous. I love the idea of AI tools to assist with things, but as a complete replacement? No thank you.
I enjoy using things like SynthesizerV and VOCALOID because my own voice is pretty meh and my singing skills aren’t there. It’s fun to explore the voices, and learn how to use the tools. That doesn’t mean I’d like to see all singers replaced with synthesized versions. I view SynthV and the like as instruments, not much more.
I’ve used LLMs to proofread stuff and help me rephrase letters and such, but I’d never hire an editor for such small tasks anyway. The result has always required editing, because LLMs have a tendency to make stuff up.
Cases like that I don’t see a huge problem with. At my workplace, though, they’re talking about generating entire application layouts and codebases with AI, and, as the person in charge of the AI evaluation project, I can say the tech just isn’t there yet. You can in a sense use AI to make entire projects, but it’ll generate gnarly, unmaintainable rubbish. You need a human hand in there to guide it.
Otherwise you end up with garbage websites with endlessly generated AI content, that can easily be manipulated by third party actors.
Can it recreate anything 1:1? When both my wife and I tried to get them to do that they would refuse, and if pushed they would fail horribly.
This is what I got. Looks pretty 1:1 for me.
Hilarious that it started with just “Buddy”, like you’d be happy with only the first word.
Yeah, for some reason it does that a lot when I ask it for copyrighted stuff.
As if it knew it wasn’t supposed to output that.
To be fair, you’d get the same result more easily by just googling “we will rock you lyrics”.
How is chatgpt knowing the lyrics to that song different from a website that just tells you the lyrics of the song?
you could assume that it must have had access to the original.
I don’t know if that’s true. Say Google grabs that book from a pirate site, then publishes the work in its search results, and ChatGPT grabs the work from the Google results and cobbles it back together as the original.
Who’s at fault?
I don’t think it’s as straightforward as “ChatGPT can reproduce the work, therefore it stole it.”
deleted by creator
Copyright doesn’t work like that. Say I sell you the rights to Thriller by Michael Jackson. You might not know that I don’t have the rights. But even if you bought the rights from me, whoever actually holds them is well within their legal rights to sue you, because you never actually purchased any rights.
So if ChatGPT rips it off Google, who ripped it off a pirate site, then everyone in that chain who reproduced copyrighted works without permission from the copyright owners is liable for the damages caused by their unpermitted reproduction.
It’s the same principle: downloading something from a pirate site doesn’t become legal just because someone else ripped it before you.
That’s a terrible example because under copyright law downloading a pirated thing isn’t actually illegal. It’s the distribution that is illegal (uploading).
Yes, downloading is illegal, and the media is still an illegally obtained copy. It’s just never prosecuted, because the damages are minuscule if you only download. They can only fine you for the amount of damages you caused by violating the copyright.
If you upload to 10k people, they can claim that every one of them would have paid for it, so the damages are (if one copy is worth €30) ~€300k. That’s a lot of money and totally worth the lawsuit.
On the other hand, if you just download, the damages are just the value of one copy (in this case €30). That’s so minuscule that even having a lawyer write a letter costs more.
But that’s totally beside the point. OpenAI didn’t just download; they replicate. That causes massive damages, especially to the original artists, who in many cases are no longer hired, since ChatGPT replaces them.
there are a lot of possible ways to audit an AI for copyrighted works, several of which have been proposed in the comments here, but what this could lead to is laws requiring an accounting log of all material that has been used to train an AI as well as all copyrights and compensation, etc.
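An accounting-log law like that would essentially mandate a per-work provenance ledger alongside the training pipeline. A rough sketch in Python of what one ledger entry might look like (every field name here is a hypothetical illustration, not any real or proposed standard):

```python
from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass
class TrainingRecord:
    """One entry in a hypothetical training-data accounting log."""
    title: str
    rights_holder: str
    licence: str            # e.g. "purchased", "public domain", "CC-BY"
    compensation_eur: float
    content_sha256: str     # fingerprint of the exact text that was ingested

def log_work(title: str, rights_holder: str, licence: str,
             compensation_eur: float, text: str) -> TrainingRecord:
    # Hash the ingested text so an auditor can later verify exactly
    # which version of the work went into the training set.
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    return TrainingRecord(title, rights_holder, licence,
                          compensation_eur, digest)

record = log_work("Moby-Dick", "Herman Melville (estate)",
                  "public domain", 0.0, "Call me Ishmael.")
print(json.dumps(asdict(record), indent=2))
```

The content hash is the part doing the real work: it ties the compensation claim to one specific ingested text, which is exactly the kind of audit trail the proposed laws would require.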
Not without some seriously invasive warrants! Ones that will never be granted for an intellectual property case.
Intellectual property is an outdated concept. It used to exist so wealthier outfits couldn’t copy your work at scale and muscle you out of an industry you were championing.
It simply does not work the way it was intended. As technology spreads, the barrier for entry into most industries wherein intellectual property is important has been all but demolished.
E.g., 50 years ago: your song that your band performed is great. I have a recording studio and am gonna steal it, muahahaha.
Today: “anyone have an audio interface I can borrow so my band can record, mix, master, and release this track?”
Intellectual property ignores the fact that, idk, Isaac Newton and Gottfried Wilhelm Leibniz both independently invented calculus at the same time on opposite ends of a disconnected globe. That is to say, intellectual property doesn’t exist.
Ever opened a post to make a witty comment to find someone else already made the same witty comment? Yeah. It’s like that.
Spoken by someone who has never had something they worked on for years stolen.
What was “stolen” from you and how?
deleted by creator
I think you said this facetiously… but it literally is.
https://www.howtogeek.com/310158/are-other-people-allowed-to-use-my-tweets/
deleted by creator
Copyright isn’t Twitter rules…
deleted by creator
Personally speaking, I’ve generated some stupid images, like different cities covered in baked beans, and have had crude watermarks generated with them, decipherable enough that I could find some of the source images used to train the AI. When it comes to photorealistic image generation, if all the AI does is mildly tweak the watermark, then it’s not too hard to trace back.
All but a very small few generative AI programs use completely destructive methods to create their models. There is no way to recover the training images outside of an infinitesimally small random chance.
What you are seeing is the AI recognising that images of the sort you are asking for generally include watermarks, and creating one of its own.
Do you have examples? That should only happen in cases of overfitting, i.e. too many identical images of the same subject.
I’d think that, given the nature of language models and how the whole AI thing tends to work, an author could pluck a unique sentence from one of their works, ask the AI to write something about it, and if the AI somehow “magically” writes out an entire paragraph or even chapter of the author’s original work, well, tada: the AI ripped them off.
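That probe can be sketched mechanically: feed the model a sentence unique to the work, capture its continuation, and measure how close the continuation is to the real passage. A minimal, model-agnostic sketch in Python (the word-level comparison and the `memorization_score` name are my own illustrative choices, not anything from an actual lawsuit or tool):

```python
import difflib

def memorization_score(model_output: str, original_passage: str) -> float:
    """Word-level similarity between a model's continuation and the
    original passage: 1.0 means verbatim reproduction, 0.0 no overlap."""
    return difflib.SequenceMatcher(
        None,
        model_output.lower().split(),
        original_passage.lower().split(),
    ).ratio()

original = "it was the best of times it was the worst of times"
verbatim = "it was the best of times it was the worst of times"
paraphrase = "things were simultaneously great and terrible back then"

print(memorization_score(verbatim, original))    # 1.0: looks memorized
print(memorization_score(paraphrase, original))  # near 0.0: looks original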
I think that to protect creators they either need to be transparent about all content used to train the AI (highly unlikely) or have a disclaimer of liability, wherein if original content has been used in training the AI, then the original content creator would have standing for legal action.
The only other alternative would be to ensure that the AI specifically avoids copyrighted or trademarked content going back to a certain date.
Why a certain date? That feels arbitrary
At a certain age some media becomes public domain
Then it is no longer copyrighted
They can’t. All they could prove is that their work is part of a dataset that still exists.
There is already a business model for compensating authors: it is called buying the book. If the AI trainers are pirating books, then yeah - sue them.
There are plagiarism and copyright laws to protect the output of these tools: if the output is infringing, then sue them. However, if the output of an AI would not be considered infringing for a human, then it isn’t infringement.
When you sell a book, you don’t get to control how that book is used. You can’t tell me that I can’t quote your book (within fair use restrictions). You can’t tell me that I can’t refer to your book in a blog post. You can’t dictate who may and may not read a book. You can’t tell me that I can’t give a book to a friend. Or an enemy. Or an anarchist.
Folks, this isn’t a new problem, and it doesn’t need new laws.
It’s 100% a new problem. There’s established precedent for things costing different amounts depending on their intended use.
For example, buying a consumer copy of song doesn’t give you the right to play that song in a stadium or a restaurant.
Training an entire AI to make a potentially infinite number of derived works from your work is 100% worthy of requiring a special agreement. This even goes beyond simple payment, to consent: a climate expert might not want their work in an AI which might severely mischaracterize its conclusions, or might want to require that certain queries are regularly checked by a human, etc.
Well, fine, and I can’t fault new published material having a “no AI” clause in its term of service. But that doesn’t mean we get to dream this clause into being retroactively for all the works ChatGPT was trained on. Even the most reasonable law in the world can’t be enforced on someone who broke it 6 months before it was legislated.
Fortunately the “horses out the barn” effect here is maybe not so bad. Imagine the FOMO and user frustration when ToS & legislation catch up and now ChatGPT has no access to the latest books, music, news, research, everything. Just stuff from before authors knew to include the “hands off” clause - basically like the knowledge cutoff, but forever. It’s untenable, OpenAI will be forced to cave and pay up.
OpenAI and such being forced to pay a share seems far from the worst scenario I can imagine. I think it would be much worse if artists, writers, scientists, open source developers and so on were forced to stop making their works freely available because they don’t want their creations to be used by others for commercial purposes. That could really mean that large parts of humanity would be cut off from knowledge.
I can well imagine copyleft gaining importance in this context. But this form of licencing seems pretty worthless to me if you don’t have the time or resources to sue for your rights - or even to deal with the various forms of licencing you need to know about to do so.
I think it would be much worse if artists, writers, scientists, open source developers and so on were forced to stop making their works freely available because they don’t want their creations to be used by others for commercial purposes.
None of them are forced to stop making their works freely available. If they want to voluntarily stop making their works freely available to prevent commercial interests from using them, that’s on them.
Besides, that’s not so bad to me. The rest of us who want to share with humanity will keep sharing with humanity. The worst case imo is that artists, writers, scientists, and open source developers cannot take full advantage of the latest advancements in tech to make more and better art, writing, science, and software. We cannot let humanity’s creative potential be held hostage by anyone.
That could really mean that large parts of humanity would be cut off from knowledge.
On the contrary, AI is making knowledge more accessible than ever before to large parts of humanity. The only comparable technologies that have done this in recent times are the internet and search engines. Thank goodness the internet enables piracy that allows anyone to download troves of ebooks for free. I look forward to AI doing the same on an even greater scale.
Shouldn’t there be a way to freely share your works without having to expect an AI to train on them and then be able to spit them back out elsewhere without attribution?
The rest of us who want to share with humanity will keep sharing with humanity. The worst case imo is that artists, writers, scientists, and open source developers cannot take full advantage of the latest advancements in tech to make more and better art, writing, science, and software. We cannot let humanity’s creative potential be held hostage by anyone.
You’re not talking about sharing it with humanity, you’re talking about feeding it into an AI. How is this holding back the creative potential of humanity? Again, you’re talking about feeding and training a computer with this material.
Even the most reasonable law in the world can’t be enforced on someone who broke it 6 months before it was legislated.
Sure it can. Just because it is a new law doesn’t mean they get to continue benefiting from IP ‘theft’ forever into the future.
Imagine the FOMO and user frustration when ToS & legislation catch up and now ChatGPT has no access to the latest books, music, news, research, everything. Just stuff from before authors knew to include the “hands off” clause
How is this an issue for the IP holders? Just because you build something cool or useful doesn’t mean you get a pass to do what you want.
basically like the knowledge cutoff, but forever. It’s untenable,
Untenable for ChatGPT maybe, but it’s not as if it’s the end of ‘knowledge’ or the end of AI. It’s just a single company product.
The thing is, copyright isn’t really well-suited to the task, because copyright concerns itself with who gets to, well, make copies. Training an AI model isn’t really making a copy of that work. It’s transformative.
Should there be some kind of new model of remuneration for creators? Probably. But it should be a compulsory licensing model.
Copyright also deals with derivative works.
Derivative and transformative are quite different though.
The slippery slope here is that we are currently considering humans and computers to be different because (something someone needs to actually define). If you say “AI read my book and output a similar story, you owe me money,” then how is that different from “Joe read my book and wrote a similar story, you owe me money”? We already have laws that deal with this, but honestly, how many books and movies aren’t just remakes of Romeo and Juliet or The Taming of the Shrew?!?
Well, Shakespeare has been dead for a few years now; there’s no copyright to speak of.
And if you make a book based on an existing one, then you totally need permission from the author. You can’t just e.g. make a Harry Potter 8.
But AIs are more than happy to do exactly that. Or even to reproduce copyrighted works 1:1, or with only a few mistakes.
If you say “AI read my book and output a similar story, you owe me money” then how is that different from “Joe read my book and wrote a similar story, you owe me money.”
You’re bounded by the limits of your flesh. AI is not. The $12 you spent buying a book at Barnes & Noble was based on the economy of scarcity that your human abilities constrain you to.
It’s hard to say that the value proposition is the same for human vs AI.
Challenge level impossible: try uploading something long to amazon written by chatgpt without triggering the plagiarism detector.
My point is that the restrictions can’t go on the input; they have to go on the output. And we already have laws that govern such derivative works (or reuse/rebroadcast).
When you sell a book, you don’t get to control how that book is used.
This is demonstrably wrong. You cannot buy a book, and then go use it to print your own copies for sale. You cannot use it as a script for a commercial movie. You cannot go publish a sequel to it.
Now please just try to tell me that AI training is specifically covered by fair use and satire case law. Spoiler: you can’t.
This is a novel (pun intended) problem space and deserves to be discussed and decided, like everything else. So yeah, your cavalier dismissal is cavalierly dismissed.
I completely fail to see how it wouldn’t be considered transformative work
It fails the transcendence criterion. Transformative works go beyond the original purpose of their source material to produce a whole new category of thing or benefit that would otherwise not be available.
Taking 1000 fan paintings of Sauron and using them in combination to create 1 new painting of Sauron in no way transcends the original purpose of the source material. The AI painting of Sauron isn’t some new and different thing. It’s an entirely mechanical iteration on its input material. In fact the derived work competes directly with the source material which should show that it’s not transcendent.
We can disagree on this and still agree that it’s debatable and should be decided in court. The person above that I’m responding to just wants to say “bah!” and dismiss the whole thing. If we can litigate the issue right here, a bar I believe this thread has already met, then judges and lawmakers should litigate it in our institutions. After all the potential scale of this far reaching issue is enormous. I think it’s incredibly irresponsible to say feh nothing new here move on.
I do think you have a point here, but I don’t agree with the example. If a fan creates the 1001st fan painting after looking at the others, it might be quite similar if they lack the artistic skill to express their own unique view. And it also competes with its sources, yet it’s generally accepted.
Transformativeness is only one of the four fair use factors; being transformative alone can’t make something fair use.
Even if AI is transformative, it would likely fail on the third factor. Fair use requires you to take the minimum amount of the copyrighted work, and AI companies scrape as much data as possible to train their models. Very unlikely to support a finding of fair use.
The final factor is market impact. Generative AIs are built to mimic the creative output of human authorship. By design, AI acts as a market replacement for human authorship, so it would likely fail on this factor as well.
Regardless, trained AI models are unlikely to be copyrightable. Copyrights require human authorship which is why AI and animal generated art are not copyrightable.
A trained AI model is a piece of software, so you might expect it to be protectable by patents, since it is functional rather than expressive. But a patent requires you to describe how the invention works, and you can’t really do that with a trained model. And a trained AI model is self-generated from training data, so there’s no human authorship even if trained AI models were copyrightable.
Exactly which laws apply to AI models is unclear, and it will likely be determined by court cases.
Typically the argument has been “a robot can’t make transformative works because it’s a robot.” People think our brains are special when in reality they are just really lossy.
Even if you buy that premise, the output of the robot is only superficially similar to the work it was trained on, so there’s no copyright infringement there. And the training process itself is run by humans; it takes some tortured logic to deny the technology’s transformative nature.
Go ask ChatGPT for the lyrics of a song and then tell me that’s transformative work when it outputs the exact lyrics.
This is a little off, when you quote a book you put the name of the book you’re quoting. When you refer to a book, you, um, refer to the book?
I think the gist of these authors complaints is that a sort of “technology laundered plagiarism” is occurring.
However, if the output of an AI would not be considered infringing for a human, then it isn’t infringement.
It’s an algorithm that’s been trained on numerous pieces of media by a company looking to make money off of it. I see no reason to give them a pass on fairly paying for that media.
You can see this if you reverse the comparison, and consider what a human would do to accomplish the task in a professional setting. That’s all an algorithm is. An execution of programmed tasks.
If I gave a worker a pirated link to several books and scientific papers in the field, and asked them to synthesize an overview/summary of what they read and publish it, I’d get my ass sued. I have to buy the books and the scientific papers. STEM companies regularly pay for access to papers and codes and standards. Why shouldn’t an AI have to do the same?
If I gave a worker a pirated link to several books and scientific papers in the field, and asked them to synthesize an overview/summary of what they read and publish it, I’d get my ass sued. I have to buy the books and the scientific papers.
Well, if OpenAI knowingly used pirated work, that’s one thing. It seems pretty unlikely and certainly hasn’t been proven anywhere.
Of course, they could have done so unknowingly. For example, if John C Pirate published the transcripts of every movie since 1980 on his website, and OpenAI merely crawled his website (in the same way Google does), it’s hard to make the case that they’re really at fault any more than Google would be.
well no, because the summary is its own copyrighted work
The published summary is open to fair use by web crawlers. That was settled in Perfect 10 v Amazon.
Right, but not one the author of the book could go after. The article publisher would have the closest rights to a claim. But if I read the crib notes and a few reviews of a movie… Then go to summarize the movie myself… That’s derivative content and is protected under copyright.
Haven’t people asked it to reproduce specific chapters or pages of specific books and it’s gotten it right?
I haven’t been able to reproduce that, and at least so far, I haven’t seen any very compelling screenshots of it that actually match. Usually it just generates text, but that text doesn’t actually match.
It’s an algorithm that’s been trained on numerous pieces of media by a company looking to make money off of it.
If I read your book… and get an amazing idea… Turn it into a business and make billions off of it. You still have no right to anything. This is no different.
If I gave a worker a pirated link to several books and scientific papers in the field
There’s been no proof or evidence provided that ANY content was ever pirated. Have any of the companies even provided the datasets they’ve used yet?
Why is the presumption that they did it the illegal way?
If I read your book… and get an amazing idea… Turn it into a business and make billions off of it. You still have no right to anything. This is no different
I don’t see how this is even remotely the same? These companies are using this material to create their commercial product. They’re not consuming it personally and developing a random idea later, far removed from the book itself.
I can’t just buy (or pirate) a stack of Blu-rays and then go start my own Netflix, which is akin to what is happening here.
They’re not consuming it personally and developing a random idea later, far removed from the book itself.
I never said that the idea would be removed from the book. You can literally take the idea from the book itself and make the money. There would be no issues. There are no dues owed to the book’s writer.
This is the whole premise of educational textbooks. You can explain to me how the whole world works in book form; I can go out, take those ideas wholesale from your book, apply them to my business, and literally make money SOLELY from information in your book. There’s nothing due back to you as a writer from me or my business.
You’ve failed to explain how that relates to your point. Sure, you can purchase an economics textbook and then go become a finance bro, but that’s not what they’re doing here. They’re taking that textbook (that wasn’t paid for) and feeding it into their commercial product. The end product is derived from the author’s work.
To put it a different way, would they still be able to produce ChatGPT if one of the developers simply read that same textbook and then inputted what they learned into the model? My guess is no.
It’d be the same if I went and bought CDs, ripped my favorite tracks, and then put them into a compilation album that I then sold for money. My product can’t exist without having copied the original artists work. ChatGPT just obfuscates that by copying a lot of songs.
They’re taking that textbook (that wasn’t paid for) and feeding it into their commercial product.
Nobody has provided any evidence that this is the case. Until it is proven, it should not be assumed. Bandwagoning against the ML people (and repeating this over and over again without any evidence or proof) is not fair. The whole point of the justice system is innocent until proven guilty.
The end product is derived from the author’s work.
Derivative works are 100% protected under copyright law. https://www.legalzoom.com/articles/what-are-derivative-works-under-copyright-law
This is the same premise that allows the “fair use” we all got up in arms about on YouTube. Claiming that it doesn’t apply in this case means that all that stuff we fought for on YouTube needs to be rolled back.
To put it a different way, would they still be able to produce ChatGPT if one of the developers simply read that same textbook and then inputted what they learned into the model? My guess is no.
Why not? Why can’t someone grab a book, scan it, chuck it into an OCR and get the same content? There are plenty of ways that snippets of raw content could make it into these repositories WITHOUT creating legal problems.
It’d be the same if I went and bought CDs, ripped my favorite tracks, and then put them into a compilation album that I then sold for money.
No… You could, for all intents and purposes, have recorded all your songs from the radio onto a cassette. That would be 100% legal for personal consumption, which is what the ML authors are doing. ChatGPT and others could have sourced information from published sources that are completely legit. No author has provided any evidence yet that ChatGPT and others have actually broken a law. For all we know, the authors of these tools have library cards and fed in screenshots of digital scans of the book, or hand-scanned the book. Or didn’t even use the book at all, and contextually grabbed a bunch of content from the internet at large.
Since the ML bots are all making derivative works rather than spitting out original content, they’d be covered by copyright as derivative works.
This only becomes an actual problem if you can prove that these tools have done BOTH:
- obtain content in an illegal fashion
- provide the copyrighted content freely without fair-use or other protections.
A better comparison would probably be sampling. Sampling is fair use in most of the world, though there are mixed judgments. I think most reasonable people would consider the output of ChatGPT to be transformative use, which is considered fair use.
If I created a web app that took samples from songs created by Metallica, Britney Spears, Backstreet Boys, Snoop Dogg, Slayer, Eminem, Mozart, Beethoven, and hundreds of other different musicians, and allowed users to mix all these samples together into new songs, without getting a license to use these samples, the RIAA would sue the pants off of me faster than you could say “unlicensed reproduction.”
It doesn’t matter that the output of my creation is clear-cut fair use. The input of the app–the samples of copyrighted works–is infringing.
I asked Bing Chat for the 10th paragraph of the first Harry Potter book, and it gave me this:
“He couldn’t know that at this very moment, people meeting in secret all over the country were holding up their glasses and saying in hushed voices: ‘To Harry Potter – the boy who lived!’”
It looks like technically I might be able to obtain the entire book (eventually) by asking Bing the right questions?
Then this is a copyright violation - it violates any standard for such, and the AI should be altered to account for that.
What I’m seeing is people complaining about content being fed into AI, and I can’t see why that should be a problem (assuming it was legally acquired or publicly available). Only the output can be problematic.
No, the AI should be shut down and the owner should first be paying the statutory damages for each use of registered works of copyright (assuming all parties in the USA)
If they have a company left after that, then they can fix the AI.
Again, my point is that the output is what can violate the law, not the input. And we already have laws that govern fair use, rebroadcast, etc.
I think it’s not just the output. I can buy an image on any stock platform, print it on a T-shirt, and wear it myself or gift it to somebody. But if I want to sell T-shirts using that image, I need a commercial licence, even if I alter the original image extensively or combine it with other assets to create something new. It’s not exactly the same thing, but OpenAI and other companies certainly use copyrighted material to create and improve commercial products. So this doesn’t seem like the same kind of usage the average Joe buys a book for.
There is already a business model for compensating authors: it is called buying the book. If the AI trainers are pirating books, then yeah - sue them.
That’s part of the allegation, but it’s unsubstantiated. It isn’t entirely coherent.
It’s not entirely unsubstantiated. Sarah Silverman was able to get ChatGPT to regurgitate passages of her book back to her.
I don’t know if this holds water, though. You don’t need to train the AI on the book itself to get that result, just on discussions about the book, which surely include passages from the book.
This is a good debate about copyright/ownership. On one hand, yes, the authors’ works went into “training” the AI, but we would need a scale to grade how well a source piece is absorbed by the AI’s learning. For example, did the AI learn more from the MAD magazine I just fed it, or did it learn more from Moby Dick? Who gets to determine that grading system? Sadly, musicians know this struggle: there are only so many notes and so many words, so eventually overlap and similarities occur. But did that musician steal a riff, or did both musicians arrive at a similar riff separately? Authors don’t own words or letters, so a computer that copies those words and then uses an algorithm to write up something else is no different from you or me being influenced by our favorite heroes or the information we have been given. Do I pay the author for reading his book, or do I just pay the store to buy it?
While I am rooting for authors to make sure they get what they deserve, I feel like there is a bit of a parallel to textbooks here. As an engineer, if I learn about statics from a textbook and then go use that knowledge to help design a bridge that I and my company profit from, the textbook company can’t sue. If my textbook has a detailed example of how to build a new bridge across the Tacoma Narrows, and I use all of the same design parameters for a real Tacoma Narrows bridge, that may have much more of a case.
But you paid for the textbook
Libraries exist
I think that these are fiction writers. The maths you’d use to design that bridge is fact, and the book company merely decided how to display those facts. They do not own that information, whereas The Handmaid’s Tale was the creation of Margaret Atwood and was an original work.
It’s not really a parallel.
The text books don’t have copyrights on the concepts and formulae they teach. They only have copyrights for the actual text.
If you memorize the text book and write it down 1:1 (or close to it) and then sell that text you wrote down, then you are still in violation of the copyright.
And that’s what the likes of ChatGPT are doing here. For example, ask it to output the lyrics for a song and it will spit out the whole (copyrighted) lyrics 1:1 (or very close to it). Same with pages of books.
The memorization is closer to that of a fanatical fan of the author. It usually knows the beginning of the book and the more well-known passages, but not entire longer works.
By now, ChatGPT tries to refuse to output copyrighted material, even where it could, and though it can be tricked, they appear to have implemented a hard filter for some of the more well-known passages, which stops generation a few words in.
Have you tried just telling it to “continue”?
Somewhere in the comments to this post I posted screenshots of me trying to get lyrics for “We will rock you” from ChatGPT. It first just spat out “Verse 1: Buddy,” and ended there. So I answered with “continue”, it spat out the next line and after the second “continue” it gave me the rest of the lyrics.
Similar story with e.g. the first chapter of Harry Potter 1 and other stuff I tried. The output is often not perfect, with a few words being wrong, but it’s very clearly a “derived work” of the original. In the eyes of copyright law, changing a few words here and there is not a valid way of getting around copyright.
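For what it’s worth, “close but not perfect” overlap like this is easy to measure. Here’s a rough sketch that scores how many of an output’s word n-grams appear verbatim in a source; the lyrics pair and the 5-word window are illustrative choices, not how a court or researcher would actually measure it:

```python
# Rough sketch: estimate verbatim overlap between model output and a source
# text via shared word n-grams. The snippet pair and the 5-word window are
# illustrative choices, not a real forensic method.

def ngram_overlap(candidate: str, source: str, n: int = 5) -> float:
    """Fraction of the candidate's word n-grams that also appear in the source."""
    def ngrams(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    cand = ngrams(candidate)
    if not cand:
        return 0.0
    return len(cand & ngrams(source)) / len(cand)

original = ("buddy you're a boy make a big noise "
            "playing in the street gonna be a big man some day")
generated = ("buddy you're a boy make a big noise "
             "playing in the street gonna be a big man one day")

# A single changed word still leaves most 5-grams matching verbatim.
print(f"{ngram_overlap(generated, original):.2f}")
```

Published memorization studies use comparable verbatim-match checks, just at a much larger scale.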
Plagiarism filters frequently trigger on ChatGPT-written books and articles.
You have a point, but there’s a pretty big difference between something like a statistics textbook and the novel “Dune”, for instance. One was specifically written to teach mostly pre-existing ideas, and the other was created as entertainment to sell to as wide an audience as possible.
Yea sure, right after Google and Amazon pay me for all the data they’ve stolen from me. LOL
All this copyright/AI stuff is so silly and a transparent money grab.
They’re not worried that people are going to ask the LLM to spit out their book; they’re worried that they will no longer be needed because an LLM can write a book for free. (I’m not sure this is feasible right now, but maybe one day?) They’re trying to strangle the technology in the courts to protect their income. That is never going to work.
Notably, there is no “right to control who gets trained on the work” aspect of copyright law. Obviously.
There is nothing silly about that. It’s a fundamental question about using content of any kind to train artificial intelligence that affects way more than just writers.
I seriously doubt Sarah Silverman is suing OpenAI because she’s worried ChatGPT will one day be funnier than she is. She just doesn’t want it ripping off her work.
What do you mean when you say “ripping off her work”? What do you think an LLM does, exactly?
In her case, taking elements of her book and regurgitating them back to her. Which sounds a lot like they could be pirating her book for training purposes to me.
Quoting someone’s book is not “ripping off” the work.
How is it able to quote the book? Magic?
So you’re saying that as long as they buy 1 copy of the book, it’s all good?
No, I’m not saying that. If she’s right and it can spit out any part of her book when asked (and someone else showed that it does that with Harry Potter), it’s plagiarism. They are profiting off of her book without compensating her. Which is a form of ripping someone off. I’m not sure what the confusion here is. If I buy someone’s book, that doesn’t give me the right to put it all online for free.
How do you know they didn’t just buy the book?
Again, that’s not relevant.
Designing and marketing a system to plagiarize works en masse? That’s the cash grab.
Can you elaborate on this concept of a LLM “plagiarizing”? What do you mean when you say that?
You know what would solve this? We all collectively agree this fucking tech is too important to be in the hands of a few billionaires, start an actual public free open source fully funded and supported version of it, and use it to fairly compensate every human being on Earth according to what they contribute, in general?
Why the fuck are we still allowing a handful of people to control things like this??
Setting aside the obvious answer of “because capitalism”, there are a lot of obstacles to democratizing this technology. Training of these models is done on clusters of A100 GPUs, which are priced at $10,000 USD each. Then there’s also the fact that a lot of the progress being made is being done by highly specialized academics, often with the resources of large corporations like Microsoft.
Additionally, the curation of datasets is another massive obstacle. We’ve mostly reached the point of diminishing returns from just throwing all the data at the training of models; it’s quickly becoming apparent that the quality of data is far more important than its quantity (see TinyStories as an example). This means a lot of work and research needs to go into qualitative analysis when preparing a dataset. You need a large corpus of input, each item of which is above a quality threshold, but which as a whole also represents a wide enough variety of circumstances for you to reach emergence in the domain(s) you’re trying to train for.
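To make the quality-threshold idea concrete, here’s a toy sketch; the scoring heuristic (minimum length plus lexical variety) is a made-up stand-in for the real qualitative analysis a curation pipeline would do:

```python
# Toy sketch of quality-based dataset curation: keep only documents whose
# "quality score" clears a threshold. The heuristic below (minimum length
# plus lexical variety) is a made-up stand-in for real qualitative analysis.

def quality_score(doc: str) -> float:
    words = doc.split()
    if len(words) < 5:                   # too short to be useful training data
        return 0.0
    return len(set(words)) / len(words)  # crude lexical-diversity proxy

def curate(corpus: list[str], threshold: float = 0.5) -> list[str]:
    return [doc for doc in corpus if quality_score(doc) >= threshold]

corpus = [
    "spam spam spam spam spam spam",                                  # repetitive
    "The quality of data matters far more than its sheer quantity.",  # varied
    "ok",                                                             # too short
]
print(curate(corpus))
```

A real pipeline would replace `quality_score` with classifier- or heuristic-based filters, but the keep/discard structure is the same.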
There is a large and growing body of open source model development, but even that only exists because of Meta “leaking” the original Llama models, and now more recently releasing Llama 2 with a commercial license. Practically overnight an entire ecosystem was born creating higher quality fine-tunes and specialized datasets, but all of that was only possible because Meta invested the resources and made it available to the public.
Actually in hindsight it looks like the answer is still “because capitalism” despite everything I’ve just said.
I know the answer to pretty much all of our “why the hell don’t we solve this already?” questions is: capitalism.
But I mean, as Lrrr would say: “why does the working class, as the biggest of the classes, not simply eat the other one?”
The short answer is friction. The friction of overcoming the forces of violence the larger class has at its disposal and utilizes at the smallest hint of uprising is greater than the friction of accepting the status quo.
The friction of accepting the status quo only seems to grow stronger though.
One would hope
Because we shy away from responsibility.
I think the longer response to this is more accurate. It’s more “because capitalism” than anything else.
And capitalism over the course of the 20th century made very successful attempts of alienating completely the working class and destroying all class consciousness or material awareness.
So people keep thinking that the problems is we as individuals are doing capitalism wrong. Not capitalism.
Because the tech behind it isn’t cheap and money does not fall from trees.
Obligatory xkcd: https://xkcd.com/827/
Isn’t learning the basic act of reading text? I’m not sure what the AI companies are doing is completely right but also, if your position is that only humans can learn and adapt text, that broadly rules out any AI ever.
Isn’t learning the basic act of reading text?
not even close. that’s not how AI training models work, either.
if your position is that only humans can learn and adapt text
nope-- their demands are right at the top of the article and in the summary for this post:
Thousands of authors demand payment from AI companies for use of copyrighted works::Thousands of published authors are requesting payment from tech companies for the use of their copyrighted works in training artificial intelligence tools
that broadly rules out any AI ever
only if the companies training AI refuse to pay
Isn’t learning the basic act of reading text?
not even close. that’s not how AI training models work, either.
Of course it is. It’s not a 1:1 comparison, but the way generative AI works and the way we incorporate styles and patterns are more similar than not. Besides, if a tensorflow script more closely emulated a human’s learning process, would that matter to you? I doubt that very much.
Thousands of authors demand payment from AI companies for use of copyrighted works::Thousands of published authors are requesting payment from tech companies for the use of their copyrighted works in training artificial intelligence tools
Having to individually license each unit of work for an LLM would be as ridiculous as trying to run a university where you have to individually license each student reading each textbook. It would never work.
What we’re broadly talking about is generative work. That is, by absorbing a body of work, the model incorporates it into an overall corpus of learned patterns. That’s not materially different from how anyone learns to write. Even my use of the word “materially” in the last sentence is, surely, based on seeing it used in similar patterns of text.
The difference is that a human’s ability to absorb information is finite and bounded by the constraints of our experience. If I read 100 science fiction books, I can probably write a new science fiction book in a similar style. The difference is that I can only do that a handful of times in a lifetime. An LLM can do it almost infinitely and then have that ability reused by any number of other consumers.
There’s a case here that the remuneration process we have for original work doesn’t fit well into the AI training models, and maybe Congress should remedy that, but on its face I don’t think it’s feasible to just shut it all down. Something of a compulsory license model, with the understanding that AI training is automatically fair use, seems more reasonable.
Of course it is. It’s not a 1:1 comparison
no, it really isn’t–it’s not a 1000:1 comparison. AI generative models are advanced relational algorithms and databases. they don’t work at all the way the human mind does.
but the way generative AI works and the way we incorporate styles and patterns are more similar than not. Besides, if a tensorflow script more closely emulated a human’s learning process, would that matter to you? I doubt that very much.
no, the results are just designed to be familiar because they’re designed by humans, for humans to be that way, and none of this has anything to do with this discussion.
Having to individually license each unit of work for an LLM would be as ridiculous as trying to run a university where you have to individually license each student reading each textbook. It would never work.
nobody is saying it should be individually-licensed. these companies can get bulk license access to entire libraries from publishers.
That’s not materially different from how anyone learns to write.
yes it is. you’re just framing it in those terms because you don’t understand the cognitive processes behind human learning. but if you want to make a meta comparison between the cognitive processes behind human learning and the training processes behind AI generative models, please start by citing your sources.
The difference is that a human’s ability to absorb information is finite and bounded by the constraints of our experience. If I read 100 science fiction books, I can probably write a new science fiction book in a similar style. The difference is that I can only do that a handful of times in a lifetime. An LLM can do it almost infinitely and then have that ability reused by any number of other consumers.
this is not the difference between humans and AI learning, this is the difference between human and computer lifespans.
There’s a case here that the remuneration process we have for original work doesn’t fit well into the AI training models
no, it’s a case of your lack of imagination and understanding of the subject matter
and maybe Congress should remedy that
yes
but on its face I don’t think it’s feasible to just shut it all down.
nobody is suggesting that
Something of a compulsory license model, with the understanding that AI training is automatically fair use, seems more reasonable.
lmao
You’re getting lost in the weeds here and completely misunderstanding both copyright law and the technology used here.
First of all, copyright law does not care about the algorithms used or how well they map to what a human mind does. That’s irrelevant. There’s nothing in particular about copyright that applies only to humans but not to machines. Either a work is transformative or it isn’t. Either it’s derivative or it isn’t.
What AI is doing is incorporating individual works into a much, much larger corpus of writing style and idioms. If an LLM sees an idiom used a handful of times, it might start using it where the context fits. If a human sees an idiom used a handful of times, they might do the same. That’s true regardless of algorithm, and there’s certainly nothing in copyright or common sense that separates one from the other. If I read enough Hunter S. Thompson, I might start writing like him. If you feed an LLM enough of the same, it might too.
Where copyright comes into play is in whether the new work produced is derivative or transformative. If an entity writes and publishes a sequel to The Road, Cormac McCarthy’s estate is owed some money. If an entity writes and publishes something vaguely (or even directly) inspired by McCarthy’s writing, no money is owed. How that work came to be (algorithms or human flesh) is completely immaterial.
So it’s really, really hard to make the case that there’s any direct copyright infringement here. Absorbing material and incorporating it into future works is what the act of reading is.
The problem is that as a consumer, if I buy a book for $12, I’m fairly limited in how much use I can get out of it. I can only buy and read so many books in my lifetime, and I can only produce so much content. The same is not true for an LLM, so there is a case that Congress should charge them differently for using copyrighted works, but the idea that OpenAI should have to go to each author and negotiate each book would really just shut the whole project down. (And no, it wouldn’t be directly negotiated with publishers, as authors often retain the rights to deny or approve licensure.)
You’re getting lost in the weeds here and completely misunderstanding both copyright law and the technology used here.
you’re accusing me of what you are clearly doing after I’ve explained twice how you’re doing that. I’m not going to waste my time doing it again. except:
Where copyright comes into play is in whether the new work produced is derivative or transformative.
except that the contention isn’t necessarily over what work is being produced (although whether it’s derivative work is still a matter for a court to decide anyway); it’s that the source material is used for training without compensation.
The problem is that as a consumer, if I buy a book for $12, I’m fairly limited in how much use I can get out of it.
and, likewise, so are these companies who have been using copyrighted material - without compensating the content creators - to train their AIs.
these companies who have been using copyrighted material - without compensating the content creators - to train their AIs.
That wouldn’t be copyright infringement.
It isn’t infringement to use a copyrighted work for whatever purpose you please. What’s infringement is reproducing it.
It isn’t infringement to use a copyrighted work for whatever purpose you please.
and you accused me of “completely misunderstanding copyright law” lmao wow
It’s infringement to use copyrighted material for commercial purposes.
Okay, given that AI models need to look over hundreds of thousands if not millions of documents to get to a decent level of usefulness, how much should the author of each individual work get paid out?
Even if we say we are going to pay out a measly dollar for every work it looks over, you’re immediately talking millions of dollars in operating costs. Doesn’t this just box out anyone who can’t afford to spend tens or even hundreds of millions of dollars on AI development? Maybe good if you’ve always wanted big companies like Google and Microsoft to be the only ones able to develop these world-altering tools.
Another issue: who decides which works are more valuable, or how? Is a Shel Silverstein book worth less than a Mark Twain novel because it contains fewer words? If I self-publish a book, is it worth as much as Mark Twain’s? Sure, his is more popular, but maybe mine is longer and contains more content; what’s my payout in this scenario?
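The back-of-envelope math here is worth spelling out; all the numbers below (corpus size, per-work rates) are hypothetical, not actual dataset sizes or negotiated rates:

```python
# Back-of-envelope licensing cost: per-work rate times corpus size.
# Both the corpus size and the rates are hypothetical, not actual figures.

def licensing_cost(num_works: int, rate_per_work: float) -> float:
    return num_works * rate_per_work

corpus_size = 5_000_000            # hypothetical number of works in a training set
for rate in (0.10, 1.00, 100.00):  # cents to hundreds of dollars per work
    print(f"${rate:>6.2f}/work -> ${licensing_cost(corpus_size, rate):,.0f}")
```

Even at a dime per work, the bill runs into the hundreds of thousands; at rates anywhere near what a single commercial license normally costs, only the largest companies could pay it.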
i admit it’s a huge issue, but the licensing costs are something that can be negotiated by the license holders in a structured settlement.
moving forward, AI companies can negotiate licensing deals for access to licensed works for AI training, and authors of published works can decide whether they want to make their works available to AI training (and their compensation rates) in future publishing contracts.
the solutions are simple: the AI companies like OpenAI, Google, et al. are just complaining because they don’t want to fork over money to the copyright holders they ripped off and set a precedent that what they’re doing is wrong (legally or otherwise).
Sure, but what I’m asking is: what do you think is a reasonable rate?
We are talking data sets that have millions of written works in them. If it costs hundreds or thousands per work, this venture almost doesn’t make sense anymore. If it’s $1 per work, or cents per work, then is it even worth it for each individual contributor to get $1 when it adds millions in operating costs?
In my opinion, this needs to be handled a lot more carefully than what is being proposed. We are potentially going to make AI datasets wayyyy too expensive for anyone to use aside from the largest companies in the market, and even then this will cause huge delays to that progress.
If AI is just blatantly copy and pasting what it read, then yes, I see that as a huge issue. But reading and learning from what it reads, no matter how rudimentary that “learning” may be, is much different than just copying works.
that’s not for me to decide. as I said, it is for either the courts to decide or for the content owners and the AI companies to negotiate a settlement (for prior infringements) and a negotiated contracted amount moving forward.
also, I agree that it’s a massive clusterfuck that these companies just purloined a fuckton of copyrighted material for profit without paying for it, but I’m glad that they’re finally being called out.
Dude, they said
If AI is just blatantly copy and pasting what it read, then yes, I see that as a huge issue.
That’s in no way agreeing “that it’s a massive clusterfuck that these companies just purloined a fuckton of copyrighted material for profit without paying for it”. Do you not understand that AI is not just copying and pasting content?
Removed by mod
Doesn’t this just box out anyone who can’t afford to spend tens or even hundreds of millions of dollars on AI development?
The government could allow the donation of original art for the purpose of tech research to be a tax write-off, and then there can be non-profits that work between artists and tech developers to collect all the legally obtained art, and grant access to those that need it for projects
That’s just one option off the top of my head, which I’m sure would have some procedural obstacles, and chances for problems to be baked in, but I’m sure there are other options as well.
AI isn’t doing anything creative. These tools are merely ways to deliver the information you put into it in a way that’s more natural and dynamic. There is no creation happening. The consequence is that you either pay for use of content, or you’ve basically diminished the value of creating content and potentiated plagiarism at a gargantuan level.
Being that this “AI” doesn’t actually have the capacity for creativity, if actual creativity becomes worthless, there will be a whole lot less incentive to create.
The “utility” of it right now is being created by effectively stealing other people’s work. Hence, the court cases.
Please first define “creativity” without artificially restricting it to humans. Then, please explain how AI isn’t doing anything creative.
deleted by creator
Sure, AI is not doing anything creative, but neither is my pen; it’s the tool I’m using to be creative. Let’s think about this more with some scenarios:
Let’s say software developer “A” comes along, and they’re pretty fucking smart. They sit down, read through all of Mark Twain’s novels, and over the course of the next 5 years create a piece of software that generates works in Twain’s style. It’s so good that people begin using it to write real books. It doesn’t copy anything specifically from Twain; it just mimics his writing style.
We also have developer “B”. While Dev A is working on his project, Dev B is working on a very similar project, but with one difference: Dev B writes an LLM to read the books for him, and develop a writing style similar to Twain’s based off of that. The final product is more or less the same as Dev A’s product, but he saves himself the time of needing to read through every work on his own, he just reads a couple to get an idea of what the output might look like.
Is the work from Dev A’s software legitimate? Why or why not?
Is the work from Dev B’s software legitimate? Why or why not?
Assume both of these developers own copies of the works they used as training data, what is honestly the difference here? This is what I am struggling with so much.
Both developers have created a parrot tool. A utility to plagiarise a style.
So now the output of both programs is “illegitimate” in your eyes, despite one of them never even getting direct access to the original text.
Now let’s say one of them just writes a story in the style of Twain. Still plagiarism? Because I don’t know if you can copyright a style.
The first painter painted on cave walls with his fingers. Was the brush a parrot tool? A utility to plagiarize? You could use it for plagiarism, yes, and by your logic, it shouldn’t be used. And any work created using it is not “legitimate”.
Okay, given that AI models need to look over hundreds of thousands if not millions of documents to get to a decent level of usefulness, how much should the author of each individual work get paid out?
Congress has been here before. In the early days of radio, DJs were infringing on recording copyrights by playing music on the air. Congress knew it wasn’t feasible to require every song be explicitly licensed for radio reproduction, so they created a compulsory license system where creators are required to license their songs for radio distribution. They do get paid for each play, but at a rate set by the government, not negotiated directly.
Another issue: who decides which works are more valuable, or how? Is a Shel Silverstein book worth less than a Mark Twain novel because it contains fewer words? If I self-publish a book, is it worth as much as Mark Twain’s? Sure, his is more popular, but maybe mine is longer and contains more content; what’s my payout in this scenario?
I’d say no one. Just like Taylor Swift gets the same payment as your garage band per play, a compulsory licensing model doesn’t care who you are.
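In code terms, the appeal of a compulsory license is that the payout function takes no notice of who the rights holder is; the rate below is invented for illustration, not an actual statutory rate:

```python
# Sketch of a compulsory-license payout: one statutory rate applied per use,
# regardless of who the rights holder is. The rate here is invented for
# illustration, not an actual statutory figure.

STATUTORY_RATE = 0.0022  # hypothetical dollars per play/use

def royalty(plays: int, rate: float = STATUTORY_RATE) -> float:
    return plays * rate

# The same formula applies to a superstar and a garage band alike;
# only the play count differs.
print(royalty(1_000_000))  # big act
print(royalty(500))        # small act
```

The government sets the rate; the only variable left is how often each work is actually used.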
Why is any of that the author’s problem?
A key point is that intellectual property law was written to balance the limitations of human memory and intelligence, public interest, and economic incentives. It’s certainly never been in perfect balance. But the possibility of a machine being able to consume enormous amounts of information in a very short period of time has never been a variable for legislators. It throws the balance off completely in another direction.
There’s no good way to resolve this without amending both our common understanding of how intellectual property should work and serve both producers and consumers fairly, as well as our legal framework. The current laws are simply not fit for purpose in this domain.
I very much agree.
Someone should AGPL their novel and force the AI company to open source their entire neural network.
What did you pay the authors of the books and papers you used as sources in your own work? Do you pay those authors each time someone buys or reads your work? At most you pay $0-$15 for a book anyway.
As for free advertising when your source material is used: if your material is a good source and someone asks, say, ChatGPT for a book or paper, shouldn’t your work be mentioned if you have written something useful for it? Assuming it doesn’t hallucinate.
That’s the “paid in exposure” argument.
And I’m not sure what my company pays, but they purchase access to scientific papers and industrial standards. The market price I’ve seen for them is hundreds of dollars. You either pay an ongoing subscription to access the information, or you pay a larger lump sum to own a copy that cannot legally be reproduced.
Companies pay for this sort of thing. AI shouldn’t get an exception.
TBF, access to scientific papers funded by public money should be free to the public anyway. The whole needing a subscription to access them is malarkey. The researchers aren’t the ones getting the money.
This needs to be signal boosted, regarding researchers, research, and money.