• 0 Posts
  • 42 Comments
Joined 3 years ago
Cake day: June 22nd, 2023



  • You’re talking high availability design. As someone else said, there’s almost always a single point of failure somewhere, but there are ways to mitigate it depending on the failures you want to protect against and how much tolerance you have for recovery time. Instant/transparent recovery IS possible, you just have to think through your failure and recovery tree.

    Proxy failures are kinda the simplest to handle if you’re assuming all the backends for storage/compute/network connectivity are out of scope. You set up two (or more) separate VMs with the same configuration and float a virtual IP between them that your port forwards connect to. If one VM goes down, the VIP migrates to whichever VM is still up and your clients never know the difference. Look up Keepalived, that’s the standard way to do it on Linux.
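    For reference, a minimal Keepalived sketch for the primary proxy VM; the interface name, password, and VIP are placeholders you’d swap for your own:

    ```
    # /etc/keepalived/keepalived.conf on the primary VM
    vrrp_instance PROXY_VIP {
        state MASTER              # use BACKUP plus a lower priority on the second VM
        interface eth0            # NIC that should carry the VIP
        virtual_router_id 51      # must match on both VMs
        priority 150              # highest priority holds the VIP
        advert_int 1              # heartbeat interval in seconds
        authentication {
            auth_type PASS
            auth_pass changeme
        }
        virtual_ipaddress {
            192.168.1.100/24      # the address your port forwards point at
        }
    }
    ```

    Run the same config on the standby VM with state BACKUP and a lower priority, and the VIP moves over within a couple of heartbeats.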

    But then you start down a rabbit hole. Is your storage redundant? Your network connectivity? Your power? All of those can be made redundant too, but it will cost you time and likely money for hardware. It’s all doable, you just have to decide how much it’s worth to you.

    Most home labbers, I suspect, will just accept the 5 minutes it takes to reboot a VM and call it a day. Short downtime is easier to handle, but there are definitely ways to make your home setup fully redundant and highly available. At least until a meteor hits your house, anyway.




  • Decipher0771@lemmy.ca to Selfhosted@lemmy.world · emergency remote access · 5 months ago

    I buy better gear that doesn’t regularly require a reboot

    My MikroTik has never NEEDED a reboot, except when I run upgrades. Everything is set up to auto-recover when disconnects happen, and to power up properly if there’s an extended power failure that causes UPS shutdowns.

    I will never understand why people think rebooting their router regularly is a normal thing. That just means your gear or setup is crap.







  • I did (am doing) something very similar. I definitely have issues with my indexing, but I’m just ordering it manually by year/date for now.

    I’m doing a little extra for parity though. I’m using 50-100GB discs for the data, and a 25GB disc as a full parity disc via dvdisaster for each data disc I burn. Hopefully that reduces the risk of the parity data also being unreadable, and gives MORE parity data without eating into my actual data discs. It’s hard enough to break the archives up into 100GB chunks as it is.
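    Roughly what that workflow looks like with dvdisaster’s command line, assuming you work from an ISO of each data disc (filenames here are just examples, and the codec/redundancy defaults can be tuned):

    ```
    # Create the error correction file for a data disc image; the .ecc is
    # what gets burned to the separate 25GB parity disc.
    dvdisaster -i archive_2023_001.iso -e archive_2023_001.ecc -c

    # Verify the image/ecc pair later on...
    dvdisaster -i archive_2023_001.iso -e archive_2023_001.ecc -t

    # ...or repair a damaged image read back from a failing disc.
    dvdisaster -i archive_2023_001.iso -e archive_2023_001.ecc -f
    ```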

    Need to look into Bacula as suggested by another poster.



  • It’s not a transcoding power issue. It’s a UI consistency and usability issue. With every device having a slightly different UI, some apps having issues playing back natively and some needing transcoding, the experience is inconsistent and frankly doesn’t pass the “wife acceptance factor” test, or the “let your friends use it without having to handhold them through regular troubleshooting for their particular device” test.

    I still don’t use Plex and exclusively use Jellyfin, but it’s a hard sell to non-technical users. Plex has much more polish.





  • I think the universal consensus is that outside of one very specific use case (multiple VDI desktops that share the same image), ZFS dedupe is completely useless at best, and at worst will destroy your dataset by making it unmountable on any system with less RAM than the dedup table needs. In every other use case, the savings are not worth the trouble.

    Even in the VDI use case, unless you have MANY copies of said disk images (like 5+ copies of each), it’s still not worth the increase in system resources needed to run ZFS dedupe.

    It’s one of those “oooh shiny” features that everyone wants to use, but nearly everyone regrets.
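    If anyone is tempted anyway, it’s worth at least simulating the dedup table on existing data before enabling anything, so you can see whether the ratio justifies the RAM. A quick sketch, with pool and dataset names as examples:

    ```
    # Simulate the dedup table for an existing pool and print the estimated
    # dedup ratio without actually enabling dedupe.
    zdb -S tank

    # If the ratio really does justify it, dedupe is enabled per dataset,
    # and the resulting DDT has to fit comfortably in RAM/ARC.
    zfs set dedup=on tank/vdi
    zpool list -o name,size,alloc,dedupratio tank
    ```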