Just downloaded GPT4All and LM Studio or whatever. I’m learning slowly but there’s a lot of jargon. I only have a 4GB RX 5500 and I’m not sure how to get it to run on my GPU. I think I really just need to upgrade my PC tho. I have 16GB of RAM but an i5-6500. Shit be slow
Start off with the TinyLlama model; it’s under 1 GB. It will even run on a Raspberry Pi, so on real PCs it rips even on CPU. You need a “quantized” model; they are distributed as GGUF files.
I would recommend 5-bit quantized. The fewer bits, the stupider, to put it simply, and TinyLlama is already pretty stupid. But it’s still impressive for what it is, and you can learn the jargon, which is the hard part.
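If you want a direct download, the Q5_K_M GGUF of TinyLlama-1.1B-Chat is on Hugging Face; I believe TheBloke’s mirror hosts it, so something like this should fetch it (double-check the URL, I’m going from memory):
wget -P models/ https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/tinyllama-1.1b-chat-v1.0.Q5_K_M.gguf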
The fastest software to run the model on is llama.cpp, which is a rewrite from Python to C++. Use -ngl <number> to offload layers from the CPU to the GPU.
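If you build it yourself, it’s roughly this; plain make gives you a CPU-only binary, and LLAMA_HIPBLAS=1 is (as far as I remember) the flag for the AMD/ROCm build, so check the README if it doesn’t take:
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make                   # CPU-only build
make LLAMA_HIPBLAS=1   # AMD GPU build via ROCm/hipBLAS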
Not sure what system you’re using; most AI development is done on Linux, so if you’re on Windows I can’t guarantee anything will work.
Right now I’m working on a voice assistant for my house that can read all my MQTT data and give status reports; it’s neat when you get it running. Fun to tweak it with prompts and see what it can do. TinyLlama can’t seem to reliably handle MQTT and JSON, but slightly smarter models can with ease.
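To give you an idea, the kind of prompt I mean is something like this (just an illustration, not my actual setup):
>Here are the latest MQTT readings as JSON: {"living_room": {"temp_c": 21.4, "humidity": 48}}. Give me a one-sentence status report.
That’s the sort of thing TinyLlama trips over.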
OK, I walked over to my PC to give you a working command line for llama.cpp. If you want it to run on your GPU, you need to make sure it is compiled with support for hipBLAS / ROCm, which is AMD’s equivalent of CUDA.
./main -ngl 24 -m models/tinyllama-1.1b-chat-v1.0.Q5_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
That will put it into interactive mode so you can try to chat with it. Running on my GPU it cranks out almost 160 tokens per second, which is way faster than anyone can type. On CPU (-ngl 0) it still does about 90, which is fast. TinyLlama is not a great chatter and should be treated more as a prediction or answer engine, e.g.:
>Write a paragraph about borscht.
Borscht is a traditional Russian soup made with beetroot, potatoes, and a variety of spices. It is often served during the winter months in Russia, Ukraine, Belarus, and other Soviet-style countries. Borscht is similar to borscht in Poland, but has different ingredients and a slightly different preparation method. In Poland, beets are boiled until they become tender, then blended with potatoes and vegetable broth to create the soup. In Russia, beetroot is removed from the pot before cooking and replaced with other vegetables such as carrots, celery, and onions. The resulting mixture is then simmered until it is thickened, creating a hearty and flavorful soup. Borscht is usually served cold or at room temperature, and can be accompanied by sour cream, slices of crusty bread, or grilled meats such as kebabs.
It does know a surprising amount, considering it would fit on a CD-ROM.
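If you’d rather skip interactive mode and use it as a one-shot answer engine, you can drop -i / -ins and pass the prompt directly with -p; from memory something like this works:
./main -m models/tinyllama-1.1b-chat-v1.0.Q5_K_M.gguf -ngl 24 -c 2048 --temp 0.7 --repeat_penalty 1.1 -n 256 -p "Write a paragraph about borscht."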