• DarkThoughts@kbin.social · 9 months ago

    I tried oobabooga and it basically always crashes when I try to generate anything, no matter which model I use. But honestly, as far as I can tell, the good models all need absurd amounts of VRAM, far more than consumer cards have, so you’d need at least a small GPU server farm to self-host them reliably, unless you’re willing to settle for a practically nonexistent context size.
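
    For a rough sense of the numbers, here is an illustrative back-of-the-envelope sketch of weight memory at different precisions; the model sizes are just examples, and real usage also needs room for the KV cache (which grows with context length) and runtime overhead:

    ```python
    # Approximate VRAM needed for model weights alone, at different
    # precisions. Illustrative only: a real deployment also needs memory
    # for the KV cache (grows with context length) and activations.

    def weight_vram_gb(params_billion: float, bits_per_param: float) -> float:
        """Weight memory in GB: parameter count times bytes per parameter."""
        return params_billion * 1e9 * bits_per_param / 8 / 1e9

    for params in (7, 13, 70):
        for bits, label in ((16, "fp16"), (4, "4-bit")):
            print(f"{params:>2}B @ {label}: ~{weight_vram_gb(params, bits):5.1f} GB")
    ```

    At 4-bit quantisation a 7B model’s weights come to roughly 3.5 GB, which is why the reply below points toward quantised models.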

    • exu@feditown.com · 9 months ago

      You’ll want to use a quantised model on your GPU. You could also run on the CPU and offload some layers to the GPU with llama.cpp (an option in oobabooga); see the sketch below. llama.cpp models use the GGUF format.
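
      A minimal sketch of that setup using the llama-cpp-python bindings; the model path and layer count here are placeholders, not anything from the comment:

      ```python
      # Load a quantised GGUF model and offload part of it to the GPU
      # with llama-cpp-python. Path and layer count are placeholders;
      # tune n_gpu_layers to whatever fits in your VRAM.
      from llama_cpp import Llama

      llm = Llama(
          model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical path
          n_gpu_layers=20,  # layers offloaded to GPU; 0 = CPU only, -1 = all
          n_ctx=4096,       # context window size
      )

      out = llm("Q: What is the GGUF format? A:", max_tokens=128)
      print(out["choices"][0]["text"])
      ```

      n_gpu_layers is the knob for the CPU/GPU split: 0 keeps everything on the CPU, -1 offloads every layer, and values in between let you run models larger than your VRAM alone would allow, at the cost of speed.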