I have experience running servers, but I'd like to know whether it's feasible. I just need a private LLM comparable to GPT-3.5 running.
Intel Arc also works surprisingly well and consistently for ML if you use llama.cpp for LLMs or Automatic1111 for Stable Diffusion; in terms of usability it's definitely much closer to Nvidia than it is to AMD.
Would you suggest the K9 instead of the K8?