• 1 Post
  • 110 Comments
Joined 1 year ago
Cake day: March 22nd, 2024

  • It depends!

    Exllamav2 was pretty fast on AMD, and exllamav3 is getting support soon. vLLM is also fast on AMD, but it’s not easy to set up; you basically have to be a Python dev on Linux and wrestle with pip. Or get lucky with Docker.

    Base llama.cpp is fine, as are forks like kobold.cpp rocm. This is more doable, without so much hassle.

    The AMD Framework Desktop is a pretty good machine for large MoE models. The 7900 XTX is the next best hardware, but unfortunately AMD is not really interested in competing with Nvidia in terms of high-VRAM offerings :'/. They don’t want money, I guess.

    And there are… quirks, depending on the model.


    I dunno about Intel Arc these days, but AFAIK you are stuck with their docker container or llama.cpp. And again, they don’t offer a lot of VRAM for the $ either.


    NPUs are mostly a nothingburger so far, only good for tiny models.


    Llama.cpp Vulkan (for use on anything) is improving but still behind in terms of support.


    A lot of people do offload MoE models to Threadripper or EPYC CPUs, via ik_llama.cpp, transformers or some Chinese frameworks. That’s the homelab way to run big models like Qwen 235B or deepseek these days. An Nvidia GPU is still standard, but you can use a 3090 or 4090 and put more of the money in the CPU platform.
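
    As a rough sketch of what that looks like in practice (flags from memory, and the model path/quant are placeholders, so treat it as a starting point rather than a recipe):

    ```bash
    # Hypothetical ik_llama.cpp / llama.cpp server launch for a big MoE:
    # -ngl 99 pushes everything it can onto the GPU, while the tensor
    # override keeps the MoE expert weights ("exps") in system RAM,
    # where the many-channel EPYC/Threadripper bandwidth does the work.
    ./llama-server \
      -m /models/Qwen3-235B-A22B-IQ4_K.gguf \
      -ngl 99 \
      -ot "exps=CPU" \
      -c 32768 \
      --host 0.0.0.0 --port 8080
    ```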


    You won’t find a good comparison because it literally changes by the minute. AMD updates ROCm? Better! Oh, but something broke in llama.cpp! Now it’s fixed and optimized 4 days later! Oh, architecture change, now it doesn’t work again. And look, exl3 support!

    You can literally bench it in a day and have the results be obsolete the next, pretty often.





  • Qwen3-235B-A22B-FP8

    Good! An MoE.

    Ideally its maximum context length of 131K, but I’m willing to compromise.

    I can tell you from experience all Qwen models are terrible past 32K. What’s more, going over 32K, you have to run them in a special “mode” (YaRN) that degrades performance under 32K. This is particularly bad in vllm, as it does not support dynamic YaRN scaling.
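
    If you do need to push past 32K, this is roughly how that YaRN “mode” gets switched on in llama.cpp; the flags are from memory and the 4x scale factor is just the usual example, so check the Qwen docs before copying:

    ```bash
    # Sketch: static YaRN scaling to stretch Qwen's native 32K window ~4x.
    # Because it's static, short prompts pay the quality cost described above.
    ./llama-server \
      -m /models/Qwen3-235B-A22B-IQ4_XS.gguf \
      --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768 \
      -c 131072
    ```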

    Also, you lose a lot of quality with FP8/AWQ quantization unless it’s native FP8 (like deepseek). Exllama and ik_llama.cpp quants are much higher quality, and their low-batch performance is still quite good. Also, vLLM has no good K/V cache quantization (its FP8 destroys quality), while llama.cpp’s is good and exllama’s is excellent, which makes vLLM less than ideal for >16K. Its niche is more highly parallel, low-context serving.
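
    For the K/V cache side, this is the general shape of it (flag spellings and quant levels vary by version, so take it as illustrative only):

    ```bash
    # Sketch: quantized K/V cache in llama.cpp; needs flash attention enabled.
    ./llama-server -m /models/model.gguf -ngl 99 -fa \
      --cache-type-k q8_0 --cache-type-v q8_0 -c 65536
    # TabbyAPI (exllama) exposes the same idea as a cache_mode setting
    # (e.g. Q8/Q6/Q4) in its config.yml.
    ```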

    My current setup is already: Xeon w7-3465X, 128GB DDR5, 2x 4090

    Honestly, you should be set now. I can get 16+ t/s with high context Hunyuan 70B (which is 13B active) on a 7800 CPU/3090 GPU system with ik_llama.cpp. That rig (8 channel DDR5, and plenty of it, vs my 2 channels) should at least double that with 235B, with the right quantization, and you could speed it up by throwing in 2 more 4090s. The project is explicitly optimized for your exact rig, basically :)

    It is poorly documented, though. The general strategy is to keep the “core” of the LLM on the GPUs while offloading the less compute-intense experts to RAM, and it takes some tinkering. There’s even a project to try and calculate it automatically:

    https://github.com/k-koehler/gguf-tensor-overrider

    IK_llama.cpp can also use special GGUFs regular llama.cpp can’t take, for faster inference in less space. I’m not sure if one for 235B is floating around huggingface, I will check.
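
    To give a feel for the tinkering, the overrides end up looking something like this; the layer ranges and device names are made up for illustration, and working them out is exactly what the project above tries to automate:

    ```bash
    # Sketch: keep the first ~20 layers' experts split across two GPUs,
    # push the remaining expert tensors to system RAM. Earlier -ot patterns
    # should take priority, so the catch-all CPU rule goes last.
    ./llama-server \
      -m /models/Qwen3-235B-A22B-IQ4_K.gguf \
      -ngl 99 -c 32768 \
      -ot "blk\.[0-9]\.ffn_.*_exps=CUDA0" \
      -ot "blk\.1[0-9]\.ffn_.*_exps=CUDA1" \
      -ot "ffn_.*_exps=CPU"
    ```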


    Side note: I hope you can see why I asked. The web of engine strengths/quirks is extremely complicated, heh, and the answer could be totally different for different models.


  • Be specific!

    • What model size (or specific model) are you looking to host?

    • At what context length?

    • What kind of speed (token/s) do you need?

    • Is it just for you, or many people? How many? In other words should the serving be parallel?

    In other words, it depends, but the sweet-spot option for a self-hosted rig, OP, is probably:

    • One 5090 or A6000 ADA GPU. Or maybe 2x 3090s/4090s, underclocked.

    • A cost-effective EPYC CPU/Mobo

    • At least 256 GB DDR5

    Now run ik_llama.cpp, and you can serve Deepseek 671B faster than you can read without burning your house down with H200s: https://github.com/ikawrakow/ik_llama.cpp

    It will also do for dots.llm, kimi, pretty much any of the mega MoEs du jour.

    But there’s all sorts of niches. In a nutshell, don’t think “How much do I need for AI?” But “What is my target use case, what model is good for that, and what’s the best runtime for it?” Then build your rig around that.






  • Yeah, just paying for LLM APIs is dirt cheap, and they (supposedly) don’t scrape data. Again I’d recommend Openrouter and Cerebras! And you get your pick of models to try from them.

    Even a Framework 16 is not good for LLMs TBH. The Framework Desktop is (as it uses a special AMD chip), but it’s very expensive. Honestly the whole hardware market is so screwed up, hence most ‘local LLM enthusiasts’ buy a used RTX 3090 and stick it in a desktop or server, as no one wants to produce something affordable, apparently :/






  • I don’t understand.

    Ollama is not actually docker, right? It’s running the same llama.cpp engine, it’s just embedded inside the wrapper app, not containerized. It has a docker preset you can use, yeah.

    And basically every LLM project ships a docker container. I know for a fact llama.cpp, TabbyAPI, Aphrodite, Lemonade, vllm and sglang do. It’s basically standard. There’s all sorts of wrappers around them too.
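
    The llama.cpp one, for example, usually gets run something like this (image tag and paths from memory, so double-check against the repo’s docs):

    ```bash
    # Sketch: containerized llama.cpp server with GPU access, with a local
    # models directory mounted into the container.
    docker run --gpus all -p 8080:8080 \
      -v "$PWD/models:/models" \
      ghcr.io/ggml-org/llama.cpp:server-cuda \
      -m /models/model.gguf -ngl 99 --host 0.0.0.0 --port 8080
    ```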

    You are 100% right about security though, in fact there’s a huge concern with compromised Python packages. This one almost got me: https://pytorch.org/blog/compromised-nightly-dependency/

    This is actually a huge advantage for llama.cpp, as it’s free of Python and external dependencies by design. This is very unlike ComfyUI, which pulls in a gazillion external repos. Theoretically the main llama.cpp git could be compromised, but it’s a single, very well monitored point of failure there, and literally every “outside” architecture and feature is implemented from scratch, making it harder to sneak stuff in.


  • OK.

    Then LM Studio. With Qwen3 30B IQ4_XS, low temperature MinP sampling.

    That’s what I’m trying to say though: there is no one-click solution; that’s kind of a lie. LLMs work a bajillion times better with just a little personal configuration. They are not magic boxes, they are specialized tools.
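
    As a concrete example of what “a little personal configuration” means, here are the kind of sampler settings I’m talking about, sent to a local OpenAI-compatible endpoint (port, model name and exact values are placeholders; LM Studio exposes the same knobs in its UI):

    ```bash
    # Sketch: low-temperature Min-P sampling against a local server.
    # "min_p" is an extension field that llama.cpp-style servers accept.
    curl http://localhost:1234/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "qwen3-30b-iq4_xs",
        "messages": [{"role": "user", "content": "Summarize this recipe."}],
        "temperature": 0.3,
        "min_p": 0.05
      }'
    ```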

    Random example: on a Mac? Grab an MLX distillation, it’ll be way faster and better.

    Nvidia gaming PC? TabbyAPI with an exl3. Small GPU laptop? ik_llama.cpp. APU? Lemonade. Raspberry Pi? That’s important to know!

    What do you ask it to do? Set timers? Look at pictures? Cooking recipes? Search the web? Look at documents? Do you need stuff faster or accurate?

    This is one reason why ollama is so suboptimal, with the other being just bad defaults (Q4_0 quants, 2048 context, no imatrix or anything outside GGUF, bad sampling last I checked, chat template errors, bugs with certain models, I can go on). A lot of people just try “ollama run” I guess, then assume local LLMs are bad when it doesn’t work right.
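
    Some of that is fixable if you know where to look, e.g. baking a bigger context into a derived model with a Modelfile (tag and values are just examples):

    ```bash
    # Sketch: override ollama's 2048-token default context.
    {
      echo "FROM qwen3:30b"
      echo "PARAMETER num_ctx 16384"
      echo "PARAMETER temperature 0.3"
    } > Modelfile
    ollama create qwen3-16k -f Modelfile
    ollama run qwen3-16k
    ```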



  • brucethemoose@lemmy.world to Selfhosted@lemmy.world: “I've just created c/Ollama!” (edited, 21 days ago)

    TBH you should fold this into localllama? Or open source AI?

    I have very mixed (mostly bad) feelings on ollama. In a nutshell, they’re kinda Twitter attention grabbers that give zero credit/contribution to the underlying framework (llama.cpp). And that’s just the tip of the iceberg, they’ve made lots of controversial moves, and it seems like they’re headed for commercial enshittification.

    They’re… slimy.

    They like to pretend they’re the only way to run local LLMs and blot out any other discussion, which is why I feel kinda bad about a dedicated ollama community.

    It’s also a highly suboptimal way for most people to run LLMs, especially if you’re willing to tweak.

    I would always recommend Kobold.cpp, tabbyAPI, ik_llama.cpp, Aphrodite, LM Studio, the llama.cpp server, sglang, the AMD lemonade server, any number of backends over them. Literally anything but ollama.


    …TL;DR I don’t like the idea of focusing on ollama at the expense of other backends. Running LLMs locally should be the community, not ollama specifically.