Have you read AI stories? They're shit. Current AI doesn't understand the arc that makes a story.
That’s not the point of this, I think, lol. It’s very impressive as a tech demo that a device as underpowered as a Pi can run these AI models to a passable degree.
That’s what I meant. AI stories are not passable, and I think if we give them to people who don’t know how stories work (children), we are in for a bad time.
I could see this going hilariously wrong
Or tragically wrong.
I would not want a machine with no moral compass whatsoever telling “stories” to a toddler.
Hi, Susie. Have you ever heard of the Texas Chainsaw Massacre? Columbine? BTK?
I mean, have you checked kids’ videos on YouTube? I remember being dumbfounded when I watched some of the “stories”. An LLM would fit right in.
Can’t wait to encounter Osama Bin Laden Finger Family song.
So I heard you like generic and predictable stories…
Very Diamond Age.
Still waiting for my skull gun.
I’m surprised that the Pi can even run Stable Diffusion.
More likely running on servers
Article clearly stated it’s running locally
Which is bullshit, because the Pi categorically cannot do that. More than likely he’s running Stable Diffusion on another machine on the local network, though.
Edit: I’m an asshole, and forking impressed.
Nope, running locally on the Pi.
Well damn, thank you for setting me straight. Impressive tbh. I am shocked Stable Diffusion XL runs on the Pi 5.
The article does say it takes five minutes to create a new story and picture. I assume most of that time is spent generating the picture. Still pretty impressive, but nowhere near the few seconds you can get with fast hardware.
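For anyone curious, here's roughly what a CPU-only generation looks like with the Hugging Face diffusers library. This is just a sketch on my part, not the author's actual setup: the checkpoint name, step count, and resolution are all assumptions, but it illustrates why a single picture can plausibly take minutes on a Pi.

```python
# Rough sketch (not the author's actual code) of a CPU-only SDXL generation
# with the Hugging Face diffusers library. Checkpoint, steps, and resolution
# are guesses to illustrate why this takes minutes on a Pi.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # assumed checkpoint
    torch_dtype=torch.float32,                   # no GPU on a Pi, so plain CPU floats
)
pipe = pipe.to("cpu")

# Fewer steps and a smaller canvas trade quality for time on a CPU-only board.
image = pipe(
    "a watercolor illustration of a fox reading a bedtime story",
    num_inference_steps=15,
    height=768,
    width=768,
).images[0]
image.save("story_picture.png")
```

Dropping the step count and resolution is the usual lever for making CPU-only generation tolerable, which would also explain the five-minute turnaround.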
Boy, are the example story and picture bad.
Yeah, maybe.
I have what is probably a stupid and misplaced question. The second picture in the article has the phrase “with hope in his heart”. That phrase repeatedly pops up in the hilariously bad ChatGPT stories I’ve seen people generate.
Is there a reason that cheesy phrases that don’t get used in real life keep popping into stories like that?
Those phrases aren’t common anymore, but they were once very common in the corpus the LLM is trained on (mid-20th-century books).
I want to preface this by saying I’m not doubting you, I just don’t know how it works.
Ok, but wouldn’t the training be weighted against older phrases that are no longer used? Or is all training data given equal weight?
Additionally, if the goal is to create bedtime stories or similar, couldn’t the person generating it ask for a more contemporary style? Would that affect the use of that phrase and similar cheesy lines that keep appearing?
I would never use an LLM for creative or factual work, but I use them all the time for code scaffolding, summarization, and rubber ducking. I’m super interested and just don’t understand why they do the things they do.
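On the “ask for a more contemporary style” question above: yes, you can usually steer the register through the prompt alone, without retraining anything. Here’s a minimal sketch, assuming an OpenAI-compatible local server such as llama.cpp or Ollama; the URL, port, and model name are placeholders, not anything from the article.

```python
# Sketch of steering story style purely through prompting. Assumes an
# OpenAI-compatible local server (llama.cpp, Ollama, etc.); the URL, port,
# and model name below are placeholders, not from the article.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="local-model",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You write short bedtime stories in a plain, contemporary voice. "
                "Avoid stock phrases like 'with hope in his heart' or "
                "'little did he know'."
            ),
        },
        {
            "role": "user",
            "content": "A gentle bedtime story about a fox who learns to share.",
        },
    ],
    temperature=0.9,
)
print(response.choices[0].message.content)
```

A system prompt like that won’t fix the underlying training-data skew, but it’s often enough to suppress the stock phrases.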