The OP didn’t mention Proxmox in their post. I’ve been speaking generally, not about any specific OS. For example, Nvidia’s enterprise offerings include a license to use their “GRID” vGPU tech (and the enabled feature flag in the driver).
Why? Product segmentation, I suppose. Last I looked, the Virtio project’s efforts were still work-in-progress, and the Arch wiki article corroborates that today. Behavior is also inconsistent across brands and product lines.
I’ve also wanted to do this for a while, but there were always a few too many barriers to actually spin up the project. Here’s just a brain dump of things I’ve seen recently.
vGPUs are still gated behind a license. But there is now vgpu_unlock.
L1T just showed off PCIe “fabric” from Liqid that can switch physical devices between machines.
Turning VMs on and off isn’t as slick as either of the above, but it is doable today. You’ll just have to build all the switching automation yourself. At a minimum, that could be a shell script running QEMU/libvirt commands, as in the sketch below.
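To make “shell script running QEMU/libvirt commands” concrete, here’s a minimal sketch using libvirt’s virsh. The domain names and the gpu.xml file (a libvirt <hostdev> element describing the passed-through GPU) are placeholders I made up, not from any real setup:

#!/usr/bin/env bash
# Hand a passed-through GPU from one libvirt VM to another.
# "gaming", "headless", and gpu.xml are hypothetical names.
set -euo pipefail

FROM=gaming    # VM that currently owns the GPU
TO=headless    # VM that should own it next

virsh shutdown "$FROM"
# Wait for a clean shutdown so the host actually reclaims the device.
while [ "$(virsh domstate "$FROM")" != "shut off" ]; do sleep 1; done

# Move the <hostdev> definition between the two persistent configs.
virsh detach-device "$FROM" gpu.xml --config
virsh attach-device "$TO" gpu.xml --config

virsh start "$TO"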
On the topic of build times, it took me too long to learn that nixos-rebuild supports remote build workers and targets.
For example, if I am editing on my laptop, want to build on my desktop, and apply the build to my file server, then I’d run…
me@laptop$ nixos-rebuild test \
  --flake ~/wherever-it-lives \
  --build-host desktop \
  --target-host file-server \
  --use-remote-sudo
The host names should match the names of the nixosConfiguration outputs in your flake. If they don’t, you can select the configuration explicitly in the flake reference (--build-host and --target-host always take SSH hosts):

--flake .#some-machine
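Putting it together, deploying that attribute to a differently-named SSH host would look something like this (the paths and names are the placeholders from above):

me@laptop$ nixos-rebuild test \
  --flake ~/wherever-it-lives#some-machine \
  --target-host file-server \
  --use-remote-sudo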
Remote sudo (--use-remote-sudo) avoids having to SSH as root; nixos-rebuild wraps the remote commands in sudo for your regular user instead.
Bonus tip: having Tailscale on every machine makes this work reliably from anywhere, with network speed as the only limit.