• 0 Posts
  • 33 Comments
Joined 1 year ago
Cake day: September 1st, 2023



  • Yeah, the tablet runs Fully Kiosk. I tried the same approach with the battery-percentage trigger and ran into the same issue, so I just simplified things and made the automation time-based.

    The tablet also likes to freeze a few times a day, so I also created an automation that toggles the smart plug's power whenever HA loses connection to the tablet for more than 5 seconds, then restores the plug to the state it was in when the automation started, which corrects the problem. Until the next time. But hey! It was only $60, so it's fine.
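
    In case it's useful, a sketch of what that power-cycle automation could look like in Home Assistant YAML. The entity IDs and the 10-second delay are placeholders, not from my actual setup:

```yaml
# Hypothetical sketch of the "power-cycle the tablet on disconnect" automation.
# binary_sensor.tablet_online and switch.tablet_plug are placeholder entity IDs.
alias: "Power-cycle kiosk tablet when HA loses connection"
trigger:
  - platform: state
    entity_id: binary_sensor.tablet_online
    to: "off"
    for: "00:00:05"   # unreachable for more than 5 seconds
action:
  - service: switch.turn_off
    target:
      entity_id: switch.tablet_plug
  - delay: "00:00:10" # assumed pause; tune to taste
  - service: switch.turn_on
    target:
      entity_id: switch.tablet_plug
mode: single
```

    Note this sketch always ends with the plug on; handling a plug that was deliberately off when the automation fired would need an extra condition.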




  • From top to bottom:

    • Patch panel (with artisanal, handmade cables)
    • TP-Link managed switch

    Shelf 1:

    • pfSense 4-port firewall
    • Lenovo m910q w/Proxmox (cluster node 1) running 2 VMs for docker hosting: Ubuntu for media stuff (arrs, navidrome, jellyfin, calibre, calibre-web, tubesync, syncthing) and Debian for other stuff (paperless-ngx, vikunja, vscodium, redlib, x-pipe webtop, fasten health, linkwarden, alexandrite), 1 Win 10 VM for the very few times I need to use Windows, some Red Hat Academy student and instructor RHEL 9 VMs, and an OPNsense VM for testing

    Shelf 2:

    • HP EliteDesk 800 G5 SFF w/Proxmox (cluster node 2) with an Nvidia GT 730 passed through to a Debian VM used primarily as a remote desktop via ThinLinc, but which also runs a few docker containers (stirling pdf, willow application server, fileflows)
    • Shuttle DH110 w/Proxmox (cluster node 3) with 1 VM running Home Assistant OS with an M.2 Coral TPU passed through as well as a Zooz 800 Long Range Z-Wave coordinator (the Zigbee coordinator is Ethernet-based and in a different room), plus two LXCs with grafana and prometheus courtesy of tteck (RIP)

    Shelf 3:

    • WIP Fractal R5 server to replace the ancient Ubuntu file server to the left (outside the rack, sitting on the box of ethernet cable), which is primarily the home of my media drives (3× 12 TB IronWolf) and was my first homelab server. The new box will have a Tesla P4, an RX 580, an i7-8700T, and 64 GB RAM in addition to the drives from the old server. I'll be converting the Ubuntu drive from the old server into an image and will use it to create a Proxmox VM on the new server, with the same drives passed through.

    Bottom:

    • 2 CyberPower CP1000 UPSes with upgraded LiFePO4 batteries. The one on the left is only for servers and exists solely to give them time to shut down cleanly when the power goes out. The one on the right is only for network devices (firewall, switch, and the Ruckus R500 mounted higher in the closet, out of shot)
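
    For the drive-to-image migration mentioned above, the rough shape of it on a Proxmox host could look like this. The device path, VM ID, storage name, and VM specs are all placeholders:

```shell
# Hypothetical sketch of imaging an existing Ubuntu boot drive and
# importing it as a Proxmox VM disk. /dev/sdX, VM ID 101, and
# "local-lvm" are placeholders for your actual device/ID/storage.

# 1. Image the old boot drive (with the old server shut down).
dd if=/dev/sdX of=/mnt/backup/ubuntu-server.img bs=4M status=progress

# 2. Create an empty VM shell to receive it.
qm create 101 --name media-server --memory 8192 --cores 4

# 3. Import the image as a disk on the target storage.
qm importdisk 101 /mnt/backup/ubuntu-server.img local-lvm

# 4. Attach the imported disk and make it the boot device.
qm set 101 --scsi0 local-lvm:vm-101-disk-0 --boot order=scsi0
```

    The data drives themselves can then be passed through to the new VM rather than imaged.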



  • I have 4 ethernet cameras feeding into Frigate inside HAOS. HAOS is running in a Proxmox VM with 4 cores, 4GB RAM, 128GB storage and an m.2 Coral TPU passed through.

    The host machine is a Lenovo m910q with an i7-6700T processor that pulls about 35w, 32GB RAM and 1 TB NVMe.

    Frigate is set to retain clips for 5 days, after which they are deleted. I have a Samba Backup job that runs every night and retains 10 days of backups.
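
    For reference, that 5-day retention maps onto Frigate's record settings; a hedged sketch of the relevant config.yml fragment (structure per recent Frigate releases, mode is an assumption):

```yaml
# Illustrative Frigate config fragment for 5-day clip retention.
record:
  enabled: true
  retain:
    days: 5
    mode: motion   # assumed; could be "all" or "active_objects"
```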

    With this setup, disk space never exceeds 50%, and CPU usage never exceeds 35%.



  • I use several separate small servers in a Proxmox cluster. You can get a used Dell or HP SFF PC from eBay for cheap (example). The ones I am using all came with Intel T series processors that run at 35w.

    You install Proxmox like any other OS (it’s basically Debian), then you can create VMs (or LXCs) to run whatever services you want.

    If you have existing drives in a media server, you can pass those drives through to a VM pretty easily; the same goes for any PCI device, or even an entire PCIe controller.
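
    To give a flavor of it, both kinds of passthrough are one-liners with Proxmox's qm tool on the host. The VM ID, disk serial, and PCI address below are made up:

```shell
# Hypothetical Proxmox passthrough examples; VM ID 100, the disk
# serial, and the PCI address 01:00.0 are placeholders.

# Pass a physical disk through to VM 100. Using the stable
# /dev/disk/by-id path means it survives reboots and re-ordering.
qm set 100 -scsi1 /dev/disk/by-id/ata-ST12000VN0008_ZV123456

# Pass a whole PCI device (e.g. a GPU) through instead.
qm set 100 -hostpci0 0000:01:00.0,pcie=1
```

    The pcie=1 flag assumes the VM uses the q35 machine type; full PCI passthrough also needs IOMMU enabled in the BIOS and kernel.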





  • Is there a window in the room the closet is in? I’ve got a similar setup with a server rack in a closet (no ventilation, though). I recently purchased an in-window Midea AC that can be controlled by Home Assistant.

    I have an automation that kicks on the AC when the temperature in the closet rises above a certain threshold, and shuts it down when it drops back below. I just leave the closet door open by about a foot, and that seems to be sufficient.
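
    Something like this, sketched in Home Assistant YAML; the entity IDs and the 27 °C threshold are placeholders, and the mirrored "turn off below threshold" automation is analogous:

```yaml
# Illustrative sketch of the closet-cooling automation.
# sensor.closet_temperature and climate.midea_window_ac are placeholders.
alias: "Closet AC on over-temperature"
trigger:
  - platform: numeric_state
    entity_id: sensor.closet_temperature
    above: 27   # assumed threshold, in °C
action:
  - service: climate.turn_on
    target:
      entity_id: climate.midea_window_ac
mode: single
```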

    It’s probably worth noting that I’m running pretty efficient hardware (35w i7s and a 75w Tesla P4) so it doesn’t get super hot, even under heavy load.


  • I’ve been daily driving a Debian 11 Proxmox VM running on an HP ProDesk Elite SFF with an i7-6700T and an ancient Nvidia GeForce GT 730 passed through.

    I access it via ThinLinc running on a Dell Wyse 5070 Extended thin client. It works really well; even video isn't bad, but it's not for gaming.

    For gaming, I’m working on setting up a Nobara VM with an Nvidia Tesla P4 passed through.




  • Just an FYI to OP: If you’re looking to run docker containers, you should know that Proxmox specifically does NOT support running docker in an LXC, as there is a very good chance that stuff will break when you upgrade. You should really only run docker containers in VMs with Proxmox.

    Proxmox Staff:

    Just for completeness sake - We don’t recommend running docker inside of a container (precisely because it causes issues upon upgrades of Kernel, LXC, Storage packages) - I would install docker inside of a Qemu VM as this has fewer interaction with the host system and is known to run far more stable.