In the next ~6 months I’m going to entirely overhaul my setup. Today I have a NUC6i3 running Home Assistant OS, and a NUC8i7 running OpenMediaVault with all the usual suspects via Docker.

I want to upgrade hardware significantly, partially because I’d like to bring in some local LLM. Nothing crazy: 1-8B models hitting 50 tps would make me happy. But even that is going to mean a beefy machine compared to today, which will be nice for everything else too of course.

I’m still all over the place on hardware, part of what I’m trying to decide is whether to go with a single machine for everything or keep them separate.

Idea 1 is a beefy machine and Proxmox with HA in a VM, OMV or TrueNAS in another, and maybe a 3rd straight Debian to separate all the Docker stuff. But I don’t know if I want to add the complexity.

Idea 2 would be a beefy machine running straight OMV/TrueNAS with most stuff there, and then just move HA over to the existing i7 for more breathing room (mostly for Frigate, which I guess could also be separated onto another machine).

I hear a lot of great things about Proxmox, but I’m not sold that it’s worth the new complexity for me. And keeping HA (which is “critical” compared to everything else) separated feels like a smart choice. But keeping it on aging hardware diminishes that anyway, so I don’t know.

Just wanting to hear various opinions I guess.

  • EpicFailGuy@lemmy.world · ↑3 · 3 hours ago

    The one factor that no one seems to have mentioned yet, and that is key for many of us, is LEARNING …

    It’s a great way to learn virtualization and containerization

    I use it exclusively to run Linux containers; it makes it very convenient to back up, restore, and replicate environments.

    We are now migrating our lab at work away from VMware.

  • polle@feddit.org · ↑1 · 12 hours ago

    I need to update my hardware and thought about switching to Proxmox because of all the good things I hear about it. I’m currently on Unraid, but this thing still runs, and it’s the same installation from 7 years ago. It has had zero downtime. Multiple drives, VMs, and Docker containers. Easy to use and rock solid.

  • melfie@lemy.lol · ↑3 · 11 hours ago

    I shy away from VMs because I prefer having a pool of resources on a machine that can be used as needed instead of being pre-allocated. Pre-allocating CPU and RAM, and doing PCI passthrough for GPUs, wastes already limited resources and is extra effort. Yes, the best practice for production k8s is setting resource requests and limits, but it’s not something I want to bother with when I only have one server.

  • jubilationtcornpone@sh.itjust.works · ↑3 · 12 hours ago

    I use Proxmox for work and Hyper-V at home. I’m looking forward to retiring my old Hyper-V host and replacing it with Proxmox, because Hyper-V is a pain.

    Virtualization really helps with reliability. In particular, by allowing you to quickly take snapshots before doing anything destructive and by streamlining backup and recovery.

  • TunaLobster@lemmy.world · ↑9 · 12 hours ago

    I did it purely so I could fully back up my server VM and move it to new hardware when I wanted to upgrade. I just have to install Proxmox, attach the NAS, and pull the VM backup. And just like that everything is back to running just as it was before the upgrade! Now just faster and more energy efficient!
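
    If it helps, that flow maps onto Proxmox’s own CLI fairly directly; a rough sketch, where the VM ID (100) and the storage name (“nas-backup”) are made-up placeholders:

    ```shell
    # On the old host: back up VM 100 to NAS-backed storage
    vzdump 100 --storage nas-backup --mode snapshot --compress zstd

    # On the fresh install: attach the same NAS storage, then restore
    # the dump as VM 100 (the real filename carries a timestamp)
    qmrestore /mnt/pve/nas-backup/dump/vzdump-qemu-100-backup.vma.zst 100
    ```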

  • sem@lemmy.blahaj.zone · ↑2 · 13 hours ago

    Don’t add a layer of abstraction until you need it, or you have the free time to learn it well enough that it won’t cause you problems while you experiment.

  • muusemuuse@sh.itjust.works · ↑7 · 10 hours ago

    Do you need clusters that can fail over from one machine to another? If yes, Proxmox is good. If no, there are less complex options.
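
    For reference, that failover-capable setup starts with a Proxmox cluster; a minimal sketch, where the cluster name and IP are placeholders (and real HA additionally wants three or more nodes for quorum):

    ```shell
    # On the first node: create the cluster
    pvecm create homelab

    # On each additional node: join via the first node's IP
    pvecm add 192.168.1.10

    # Verify quorum and membership
    pvecm status
    ```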

  • dbtng@eviltoast.org · ↑9 · 18 hours ago

    I use PVE professionally. I could spend some time bitching about how it handles ssh keys and the fragile corosync cluster management. I could complain about the sloppy release cycle and the way they move fast and break shit. Or about all the janky shit they’ve slapped together in PBS. I could go on.

    But I actually pay for a license for my homelab. And ya, it is THE thing at work now.

    I’ve often heard it said that Proxmox isn’t a great option. But it’s the best one.
    If you do try it, don’t bother asking questions here.
    Go to the source. https://forum.proxmox.com/

    • tmjaea@lemmy.world · ↑4 · 16 hours ago

      Please elaborate. How does it handle ssh keys? And what is fragile regarding corosync?

        • dbtng@eviltoast.org · ↑2 · edited · 6 hours ago

        SSH key management in PVE is handled in a set of secondary files, while the original debian files are replaced with symlinks. Well, that’s still debian. And in some circumstances the symlinks get b0rked or replaced with the original SSH files, the keys get out of sync, and one machine in the cluster can’t talk to another. The really irritating thing about this is that the tools meant to fix it (pvecm updatecerts) don’t work. I’ve got an elaborate set of procedures to gather the certs from the hosts and fix the files when it breaks, but it sux bad enough that I’ve got two clusters I’m putting off fixing.
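
        For anyone wanting to check this on their own nodes, the symlinks in question are visible with plain `ls`; a sketch (the exact targets vary by PVE release):

        ```shell
        # PVE points these at the clustered /etc/pve tree via symlinks
        ls -l /root/.ssh/authorized_keys   # typically -> /etc/pve/priv/authorized_keys
        ls -l /etc/ssh/ssh_known_hosts     # on older releases -> /etc/pve/priv/known_hosts

        # The built-in repair tool mentioned above (which doesn't always work)
        pvecm updatecerts
        ```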

        Corosync is the cluster. It’s a shared file system that immediately replicates any changes to all members. That’s essentially anything under /etc/pve/. Corosync is very sensitive. I believe they ask for 10ms lag or less between hosts, so it can’t work over a WAN connection. Shit like VM restores or vmotion between hosts can flood it out. Looks fukin awful when it goes down. Your whole cluster goes kaput.

        All corosync does is push around this set of config files, so a dedicated NIC is overkill, but in busy environments, you might wind up resorting to that. You can put corosync on its own network, but you obviously need a network for that. And you can establish throttles on various types of host file transfer activities, but that’s a balancing act that I’ve only gotten right in our colos where we only have 1gb networks. I have my systems provisioned on a dedicated corosync vlan and also use a secondary IP on a different physical interface, but corosync is too dumb to fall back to the secondary if the primary is still “up”, regardless of whether it’s actually communicating, so I get calls on my day off about “the cluster is down!!!1” when people restore backups.

        • tmjaea@lemmy.world · ↑1 · 6 hours ago

          Thanks for your answer.

          I’ve used Proxmox since version 2.1 in my home lab and since 2020 in production at work. We haven’t had issues with the SSH files yet. Corosync is also working fine, although it shares its 10G network with Ceph.

          In all that time I was not aware of how the certs are handled, despite the fact I had two official proxmox trainings. Ouch.

  • non_burglar@lemmy.world · ↑6 ↓1 · 20 hours ago

    Don’t use Proxmox, use incus. It’s way easier to run and doesn’t give a care about your storage.

      • non_burglar@lemmy.world · ↑1 · edited · 11 hours ago

        Like I said, incus doesn’t care about your storage.

        I’ve never used PBS; I’ve always just rolled my own. I currently keep 7 daily, 4 weekly, and 4 monthly backups. My data mounts are all NFSv4.

        Edit: isn’t it possible to use PBS with non-Proxmox systems?

        • MangoPenguin@lemmy.blahaj.zone · ↑1 · edited · 7 hours ago

          Yeah it sounds nice but too much time investment for me.

          I can install PBS client on any system but it requires manual setup and scheduling which I don’t want to do. When used with Proxmox that’s all handled for me.
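
          For anyone curious what that manual setup looks like, a rough sketch (the repository string is a placeholder for your own PBS user, host, and datastore):

          ```shell
          # Point the client at the PBS datastore
          export PBS_REPOSITORY='backup@pbs@pbs.example.lan:datastore1'

          # One-off backup of the root filesystem as a pxar archive
          proxmox-backup-client backup root.pxar:/

          # The scheduling you'd otherwise get for free, e.g. a daily cron entry:
          # 0 3 * * * root proxmox-backup-client backup root.pxar:/
          ```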

          Also, I don’t think Proxmox cares about storage either; I just use ZFS, which is completely standard under the hood.

  • notfromhere@lemmy.ml · ↑6 ↓1 · 20 hours ago

    I’m running Proxmox and hate it. I still recommend it for what you are trying to do. I think it would work quite nicely. Three of my four nodes have llama.cpp VMs hosting OpenAI-compatible LLM endpoints (llama-server) and I run Claude Code against that using a simple translation proxy.

    Proxmox is very opinionated on certain aspects and I much prefer bare metal k8s for my needs.
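
    For context, an endpoint like those llama.cpp VMs can be stood up with a single command; a sketch, where the model path, context size, and offload count are placeholders for your own hardware:

    ```shell
    # Serve an OpenAI-compatible API (chat/completions under /v1) on port 8080
    llama-server -m ./models/llama-3.1-8b-instruct-Q4_K_M.gguf \
        --port 8080 -ngl 99 -c 8192

    # Any OpenAI-style client can then target it
    curl http://localhost:8080/v1/models
    ```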

  • SaintWacko@slrpnk.net · ↑42 ↓1 · 22 hours ago

    I will always recommend Proxmox, not just because it’s really easy to add more stuff, but because it’s really safe to tinker with. You take a snapshot, start messing around, and if you break something you just revert to the snapshot.
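
    That snapshot/revert loop is two commands on the CLI as well; a sketch with made-up IDs and snapshot names:

    ```shell
    # Snapshot VM 100 before tinkering
    qm snapshot 100 pre-upgrade

    # If something breaks, roll straight back
    qm rollback 100 pre-upgrade

    # Same idea for LXC containers
    pct snapshot 101 pre-upgrade
    pct rollback 101 pre-upgrade
    ```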

    • OnfireNFS@lemmy.world · ↑15 ↓1 · 22 hours ago

      This. Even if you were going to run a bare-metal server, it’s almost always nicer to install Proxmox and just have a single VM.

  • FiduciaryOne@lemmy.world · ↑3 · 20 hours ago

    I like Proxmox too; I’m quite happy that I dove in with it. Just one word of warning: if you mount a drive volume in a container, destroy the container, and restore it from a backup, it wipes out the mounted drive. I, uh, lost a bunch of data that way. Not super important data, but still.
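
    One way to reduce that risk is to keep irreplaceable data on a host bind mount rather than a storage-backed mount-point volume; a sketch, where the container ID and paths are placeholders:

    ```shell
    # Bind-mount a host directory into CT 101 instead of using a
    # storage-backed volume that a restore could recreate from scratch
    pct set 101 -mp0 /tank/media,mp=/data
    ```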

    I’m still glad I went with Proxmox though. It makes spinning up something a breeze, and I also went with HA in a VM, and another Debian VM for Docker, and a bunch of random LXCs.

    • frongt@lemmy.zip · ↑3 · 16 hours ago

      If you can replicate it, you should really file a bug report so that the next guy doesn’t lose data.

      • FiduciaryOne@lemmy.world · ↑1 · 4 hours ago

        Yeah, not a bind mount. There was a warning, but I was restoring a ton of LXCs and clicked through the warning too fast. My fault, I’m not super sore about it, just warning others as a service to prevent what happened to me!

  • JeanValjean@piefed.social · ↑6 · 23 hours ago

    From an earlier post I made much like yours, I decided to go with incus. I’d be fully migrated if real life hadn’t kicked me in the taint for a few weeks.

  • suicidaleggroll@lemmy.world · ↑20 · 1 day ago

    In my opinion, Proxmox is worth it for two reasons:

    1. Easy high-availability setup and control

    2. Proxmox Backup Server

    Those two are what drove me to switch from KVM, and I don’t regret it at all. PBS truly is a fantastic piece of software.