• 0 Posts
  • 12 Comments
Joined 2 years ago
Cake day: June 15th, 2023


  • I’m also on a p2p 2x3090 setup with 48 GB of VRAM. Honestly it’s a nice experience, but still somewhat limiting…

    I’m currently running deepseek-r1-distill-llama-70b-awq with the aphrodite engine, though the same applies to llama-3.3-70b. It works great and is way faster than ollama, for example. But my max context is around 22k tokens. More VRAM would allow me more context; even more VRAM would allow for speculative decoding, CUDA graphs, …

    Maybe I’ll drop down to a 35b model to get more context and a bit of speed. But I don’t think I can justify the possible decrease in answer quality.


  • I’m running such a setup!

    This is my nixos config, though feel free to ignore it, since it’s optimized for me and not others.

    How did I achieve the setup you described?

    • nixos + flakes & colmena: sync system config & updates between machines (see the sketch after this list)
    • impermanence through btrfs snapshots: destroy all non-declarative state between reboots to avoid drift between systems
    • syncthing: synchronise ALL user files between systems (my server is always online, which reduces the sync inconsistencies that come from only having a single device active at a time)
    • rustic: hourly backups from all devices to the same repos; since these are deduplicated and my systems are mostly synchronised, I get a very clear record of my file histories
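
    To make the colmena part more concrete, here’s roughly what such a flake can look like. Host names, target addresses and the persisted paths are placeholders (and it assumes the nix-community impermanence module is imported), so treat it as a sketch rather than my actual config:

    ```nix
    {
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

      outputs = { nixpkgs, ... }: {
        colmena = {
          meta.nixpkgs = import nixpkgs { system = "x86_64-linux"; };

          # options shared by every machine
          defaults = { ... }: {
            services.syncthing.enable = true;   # keep user files in sync
            # with the impermanence module imported, only these paths survive a reboot
            environment.persistence."/persist" = {
              directories = [ "/var/lib/syncthing" ];
              files = [ "/etc/machine-id" ];
            };
          };

          server = { ... }: {
            deployment.targetHost = "server.internal";  # placeholder address
            imports = [ ./hosts/server.nix ];
          };

          laptop = { ... }: {
            deployment.targetHost = "laptop.internal";
            imports = [ ./hosts/laptop.nix ];
          };
        };
      };
    }
    ```

    A single `colmena apply` then pushes the same declarative config to every host.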



  • Thanks for the writeup! So far I’ve been using ollama, but I’m always open to trying out alternatives. To be honest, it seems I was oblivious to their existence.

    Your post suggests that the same models with the same parameters generate different results when run on different backends?

    I can see how the backend would have an influence on handling concurrent API calls, RAM/VRAM efficiency, supported hardware/drivers and general speed.

    But going as far as different context windows and quality-degradation issues is news to me.



  • Yes: sntx.space, check out the source button in the bottom right corner.

    I’m building/running it the homebrewed, unconventional route. That is, I have just a bit of HTML/CSS and other files I want to serve; I use nix to build that into a usable website and serve it from one of my homelab machines via nginx. That is made publicly available through a VPS running HAProxy on its public IP. The Nebula overlay network (VPN) connects the two machines.
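
    Roughly, the two machines look something like the sketch below. Only the domain is real; the nebula address, site path and ports are placeholder assumptions:

    ```nix
    {
      # homelab machine: "build" the plain html/css into the store and serve it with nginx
      homelab = { pkgs, ... }: {
        services.nginx = {
          enable = true;
          virtualHosts."sntx.space".root = pkgs.runCommand "sntx-space" { } ''
            mkdir -p $out
            cp -r ${./site}/. $out/
          '';
        };
      };

      # VPS: HAProxy on the public IP forwards traffic to the homelab over the nebula overlay
      vps = { ... }: {
        services.haproxy = {
          enable = true;
          config = ''
            frontend http-in
              bind *:80
              default_backend homelab
            backend homelab
              server web 10.10.0.2:80   # the homelab's nebula address (placeholder)
          '';
        };
        networking.firewall.allowedTCPPorts = [ 80 ];
      };
    }
    ```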





  • I’m surprised nobody mentioned nebula: a scalable overlay networking tool with a focus on performance, simplicity and security.

    I’ve been running it for about two years on multiple machines and it has worked flawlessly so far, even connecting two hosts that are both behind mullvad-vpn tunnels.

    The only downside is that you have to host your own discovery servers (called “lighthouses”). One is fine, but running at least two removes the single point of failure from the network.
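
    For reference, a stripped-down node config on NixOS looks roughly like this; the network name, certificate paths, addresses and lighthouse host names are all placeholders:

    ```nix
    {
      services.nebula.networks.homenet = {
        enable = true;
        ca = "/etc/nebula/ca.crt";
        cert = "/etc/nebula/host.crt";
        key = "/etc/nebula/host.key";

        # nebula IPs of the lighthouses; running two avoids the single point of failure
        lighthouses = [ "10.10.0.1" "10.10.0.2" ];
        staticHostMap = {
          "10.10.0.1" = [ "lighthouse-1.example.com:4242" ];
          "10.10.0.2" = [ "lighthouse-2.example.com:4242" ];
        };

        firewall = {
          outbound = [ { host = "any"; port = "any"; proto = "any"; } ];
          inbound  = [ { host = "any"; port = "any"; proto = "any"; } ];
        };
      };
      # on the lighthouse machines themselves, set isLighthouse = true instead
    }
    ```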