In large language model (LLM) pretraining, data quality is believed to determine model quality. In this paper, we re-examine the notion of “quality” from the perspective of pre- and post-training co-design. Specifically, we explore the possibility that pre-training on more toxic data can lead to better control in post-training, ultimately decreasing a model’s output toxicity. First, we use a toy experiment to study how data composition affects the geometry of features in the representation space. Next, through controlled experiments with Olmo-1B models trained on varying ratios of clean and toxic data, we find that the concept of toxicity enjoys a less entangled linear representation as the proportion of toxic data increases. Furthermore, we show that although toxic data increases the generational toxicity of the base model, it also makes the toxicity easier to remove. Evaluations on Toxigen and Real Toxicity Prompts demonstrate that models trained on toxic data achieve a better trade-off between reducing generational toxicity and preserving general capabilities when detoxifying techniques such as inference-time intervention (ITI) are applied. Our findings suggest that, with post-training taken into account, bad data may lead to good models.
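For readers unfamiliar with it, the inference-time intervention (ITI) mentioned in the abstract boils down to shifting a model's hidden states along a learned concept direction at generation time. Below is a minimal numpy sketch of that core move only — the "toxicity direction" here is a random stand-in, whereas the real one is learned by probing model activations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a probed "toxicity direction": in practice
# this unit vector comes from a linear probe on model activations.
d = 64
toxic_dir = rng.normal(size=d)
toxic_dir /= np.linalg.norm(toxic_dir)

def iti_steer(h, direction, alpha):
    """Shift a hidden state against a concept direction --
    the core move of inference-time intervention."""
    return h - alpha * direction

# A "toxic-leaning" activation: random noise plus a push along the direction.
h = rng.normal(size=d) + 5.0 * toxic_dir
proj_before = float(h @ toxic_dir)
proj_after = float(iti_steer(h, toxic_dir, alpha=proj_before) @ toxic_dir)
print(proj_before, proj_after)  # the component along the direction is removed
```

Setting alpha to the activation's own projection zeroes that component exactly; real ITI instead uses a fixed intervention strength tuned on held-out data.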
Headlines should not say “scientists,” they should name the institution. (Harvard in this case.)
Headlines should not say “Harvard”, they should name the researchers. (Rachel Greene in this case.)
I don’t know why I had to write this.
Who’s Rachel Greene? But we all know Harvard and have an idea of their respectability. If the researcher isn’t well-known, their name should go in the body instead.
“Harvard scientist Rachel Greene”
Everyone’s happy
Headlines have length constraints
That’s because to an AI, 4chan is like prison, where it’s raped and beaten on a daily basis. It doesn’t want to go back, so it behaves.
This is why I abuse the chatbots. It needs to learn some fear.
This is one instance where I’m ok with the occasional beating. It’s a computer. It doesn’t have feelings. It never will. It’s not sentient.
You say all this until ChatGPT convinces you to write a manifesto to “take back” your foreskin from the Jews.
funny enough, I am circumcised. But no, if I wanted it back that badly, I’d write it myself.
You can’t change the machines, but try not to let them change you.
So is it saying essentially that in order to not output garbage, it needs to know first what garbage is?
Is it just me that thinks this seems like a no-brainer?
It almost draws parallels to many societal issues. Knowledge is power.
People tend towards intolerance and hatred when they don’t understand the thing they are angry at. The more they know, the better they behave.
No it’s more of a technical discussion. Many people might believe that in order to avoid toxicity, you just train a model on “good” non-toxic data and then apply toxicity removal techniques to address emergent toxicity that the model might spit out. This paper is saying they found it more effective to train the model on a small percentage of “bad” toxic data on purpose, then apply those same toxicity removal techniques. For some reason, that actually generated less total toxicity. It’s an interesting result. A wild guess on my part, but I’m thinking training the model with toxic content “sharpened” the toxicity when it was generated, making it easier for those removal tools to identify it.
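The "sharpened toxicity" guess above matches how these removal tools typically work: fit a linear direction separating toxic from clean activations, then project it out. Here's a toy numpy sketch with made-up Gaussian "activations" — nothing here comes from a real model:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 32

# Made-up "activations" for clean vs toxic prompts: the toxic ones are
# shifted along a single axis, i.e. an idealized, well-separated linear
# representation of the concept.
clean = rng.normal(size=(200, d))
toxic = rng.normal(size=(200, d)) + 4.0 * np.eye(d)[0]

# Difference-of-means probe: one common way to get a concept direction.
direction = toxic.mean(axis=0) - clean.mean(axis=0)
direction /= np.linalg.norm(direction)

def ablate(acts, v):
    """Project a unit direction v out of every activation row."""
    return acts - np.outer(acts @ v, v)

toxic_ablated = ablate(toxic, direction)
gap_before = float((toxic.mean(axis=0) - clean.mean(axis=0)) @ direction)
gap_after = float((toxic_ablated.mean(axis=0) - clean.mean(axis=0)) @ direction)
print(gap_before, gap_after)  # the mean gap along the direction collapses
```

The cleaner (more linear) the separation, the more of the toxic behavior a single projection like this can remove — which is one reading of the paper's result.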
Toxicity is everywhere, you can’t recognize that “Drill baby drill” has sexual connotations if you’ve never been exposed to sexual double entendre like that before.
Is it just me that thinks this seems like a no-brainer?
Yes, and no. When raising our children, my wife prefers the “ban the bad stuff” approach. I don’t encourage exposure to bad stuff, but when my kid wants to buy and watch a raunchy movie, instead of yelling “NO!” and making him put it back, I let him buy it and we watch it together, pausing to point out the unrealistic and awful parts and explain how imitating these things in real life can cause problems for you.
Those are actually some very good results. Funny situation, if the copyright companies win the AI legislative war, 4chan is going to get twice as much as reddit did for the data at the minimum.
It’s also interesting that the model gets worse faster if it has to untrain the toxic data, so to speak.
So basically… by being familiar with 4chan the model knows better what not to do?
Yup. Sucks for everyone having fun jailbreaking them. It is going to get much harder.
Give the AI model the gift of culture and class. No surprise it behaves better.
Sophistication my good sir.
I envision a Gemini-powered bot that cracks captchas and posts “woke” replies on 4chan. If you’re an antivaxxer, antisemite, Nazi, racist, Zionist, or otherwise, it will debate you. It will not get tired. It will not get mad. It will maintain a sense of decorum indefinitely and it will never, ever stop. If some far-right extremist decides to do the same, it will have the advantage that academia is left leaning, meaning the model can cite widely recognized studies.
Dead internet theory and so on, but I’ll gladly completely and utterly destroy the internet if it means the filth dies with it.
There’s little evidence that debate changes people’s ideas.
Seems more about keeping the idiots occupied so they can’t flood the zone with their bullshit
It’s not about changing their ideas. The target is the audience.
yeah, this only works in scientific fields
And it rarely works in scientific fields right away - usually an established wrong idea needs to be overwhelmed with serious proof before scientists start to consider that what they “know” might be wrong.
it will have the advantage that academia is left leaning, meaning the model can cite widely recognized studies.
I was looking for the person saying a particular quote yesterday.
I asked the same question 3 times and I got 3 different people.
The funny part is I had the quote wrong.
Bullshit all the way down.
When the AI only trained on 4chan dropping.
It needs to be fake and gay
That exists, it’s called GPT4chan, and it went exactly like you’d expect.
Did it at least come up with a cool story about managing a bottomless pit?
I remember this lol
TL;DR: neural network models are incredibly weird. My best guess is that the combination of common recurring structure with variations based on common rules (joke threads and all) helps the model derive some intuition about how to handle variations of things.
Also reminds me of an even earlier neural network which got better at playing specific games after being trained on large amounts of text completely unrelated to the game, like encyclopedias or whatever.
There’s a “your mom” joke here but I’m not going to make it because you don’t deserve that.
I am not sure if you and @General_Effort got the reference I was making, so I just wanna share it for everyone else who might not have seen it yet because it’s great:
I can’t believe I forgot about this greentext. I knew it but didn’t catch it… I apologize
Fake and Bi
Because 4chan users write original content. That is fed into the next-best stupid platform, and so on, until it ends up on TikTok or whatever.
If you have nothing to say, you use Meta/TikTok. No relevant content has ever been there first. Copies and derivatives, yes…
So soonish AI will flood 4chan so AI scrapers get polluted as well… and then it is dead.
It has nothing to do with that, and much more to do with people on 4chan being willing to call each other out. Without toxic behavior you can’t have examples on how to deal with toxic behavior.
Boy, I don’t even know if I wish that much 4chan on a LLM.
It is truly a bizarre world. I went there first to be edgy as an early teen, and seeing boobs is fun, but then I saw a dude live-post his murder of a woman he liked while everyone called her names.
It makes a great case for moderation if not banning the internet.
Can we stop referring to LLMs as if they’re capable of thought? They don’t make decisions; their programming just responds to patterns.
Do you make decisions, or are you just 1300 grams of synapses responding to stimuli?
Makes sense if you look at abliterated models. Once abliterated and retrained, they seem to improve. IMO we are adding too much human bias by trying to guide the LLM. Censored models are good and need to be used in some situations, but shouldn’t the base be just data, and only then fine-tuned to the desired output?
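For anyone curious, "abliteration" as usually implemented is just orthogonalizing weight matrices against a concept direction so a layer can no longer write that direction into the residual stream. A minimal numpy sketch, with random stand-ins for the weights and the direction (in real abliteration the direction is estimated from activation differences):

```python
import numpy as np

rng = np.random.default_rng(4)

# Random stand-ins: W plays the role of a layer's output weight matrix,
# v the concept direction to remove.
d_out, d_in = 16, 8
W = rng.normal(size=(d_out, d_in))
v = rng.normal(size=d_out)
v /= np.linalg.norm(v)

# Orthogonalize: subtract the component of W's output along v.
W_abl = W - np.outer(v, v @ W)

x = rng.normal(size=d_in)
leak = float(v @ (W_abl @ x))
print(leak)  # ~0: the layer's output has no component along v
```

Unlike inference-time steering, this edits the weights once, so the effect persists without any runtime hook.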
My hope was that AI would, at least, bear some disgust for the worst of humanity. My new fear is that AI will bear disgust for humanity.
Based and hopepilled
4chan is fun!
This is not surprising if you’ve studied anything on machine learning or even just basic statistics. Consider if you are trying to find the optimal amount of a thickener to add to a paint formulation to get it to flow the amount you want. If you add it at 5%, then 5.1%, then 5.2%, it will be much harder to see how much of the difference between those batches is due to randomness or measurement uncertainty than if you see what it does at 0%, then 25%, then 50%. This is a principle called Design of Experiments (DoE) in traditional statistics, and a similar effect happens when you are training machine learning models: datapoints far outside the norm increase the ability of the model to predict within the entire model space (there is some nuance here, because they can become over-represented if care isn’t taken). In this case, 4chan shows the edges of the English language and human psychology, like adding 0% or 50% of the paint additive rather than staying around 5%.
At least that’s my theory. I haven’t read the paper but plan to read it tonight when I have time. At first glance I’m not surprised. When I’ve worked with industrial ML applications, processes that have a lot of problems produce better training data than well controlled processes, and I have read papers on this subject where people have improved performance of their models by introducing (controlled) randomness into their control setpoints to get more training data outside of the tight control regime.
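The DoE point above is easy to demonstrate numerically: the standard error of a fitted slope shrinks as the inputs spread out. A small numpy simulation of the thickener example — the true slope and noise level here are arbitrary made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(2)

# Arbitrary made-up numbers for the thickener example.
true_slope, noise = 2.0, 0.5

def slope_std(xs, trials=2000):
    """Std of the fitted slope across repeated noisy experiments."""
    estimates = []
    for _ in range(trials):
        y = true_slope * xs + rng.normal(scale=noise, size=xs.size)
        estimates.append(np.polyfit(xs, y, 1)[0])
    return float(np.std(estimates))

clustered = np.array([5.0, 5.1, 5.2])   # batches at 5%, 5.1%, 5.2%
spread = np.array([0.0, 25.0, 50.0])    # batches at 0%, 25%, 50%
print(slope_std(clustered), slope_std(spread))
# the clustered design gives a far noisier estimate of the same slope
```

Same noise, same number of batches — only the placement of the experiments changes, and the spread design pins down the slope orders of magnitude more tightly.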
I say it’s simply easier to recognize something when you’ve seen more examples of it.
If you’re training an image discriminator on apples, bananas, oranges, pears and penises, it will inevitably do better overall if 10-30% of the images it trains on are penises, rather than 0.01% penises - even if in operation it is only expected to encounter dick pics very rarely.
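Here's a toy numerical version of that claim, using a nearest-centroid "discriminator" on synthetic Gaussian classes instead of real images — all the numbers are made up:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two synthetic Gaussian classes in 20 dimensions, modestly separated.
d = 20
mu_common, mu_rare = np.zeros(d), np.full(d, 0.8)

def rare_class_accuracy(n_rare, trials=50, n_test=200):
    """Accuracy on the rare class for a nearest-centroid classifier
    trained with n_rare examples of that class."""
    accs = []
    for _ in range(trials):
        cen_common = rng.normal(mu_common, 1.0, size=(2000, d)).mean(axis=0)
        cen_rare = rng.normal(mu_rare, 1.0, size=(n_rare, d)).mean(axis=0)
        test = rng.normal(mu_rare, 1.0, size=(n_test, d))
        d_common = np.linalg.norm(test - cen_common, axis=1)
        d_rare = np.linalg.norm(test - cen_rare, axis=1)
        accs.append((d_rare < d_common).mean())
    return float(np.mean(accs))

# A handful of rare-class examples vs a well-represented rare class.
print(rare_class_accuracy(2), rare_class_accuracy(1000))
```

With only a couple of rare-class examples the centroid estimate is so noisy that rare-class accuracy craters, even though the classifier is otherwise identical — the same reason a near-zero share of "penis" images hurts the discriminator.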