• 0 Posts
  • 16 Comments
Joined 2 years ago
Cake day: June 17th, 2023


  • The only true “roadblock” I have experienced was when running on the Raspberry Pi, where the CPU was too slow to do any transcoding at all, and the memory was too small (and not upgradable) to run much at the same time.

    As soon as I had migrated to a proper desktop (the i7-920) I could run basically everything I would regularly want. And from then on, upgrading was a piece of cake: shut the machine down, unplug, swap the parts, plug in, turn on. Linux happily booted up with no trouble on the new hardware.

    Since my first server used a classic BIOS and the later machines were UEFI, that step required a reinstall… But after the reinstall, I actually just copied all the contents of the root partition over, and it just worked.

    The main limiting factors for me have been the amount of memory, the number of SATA connectors for disks, and whether the hardware supported hardware transcoding.

    For memory, ensure the motherboard has 4 memory sockets; that makes it easy to start out with a bit of memory and upgrade later. For example, you could start out with 2x 4GB sticks for a total of 8GB, and then later, when you feel like you need more, buy 2x 8GB sticks. Now you have a total of 24GB.

    For SATA ports, ensure the motherboard has enough for your needs. I would also strongly recommend looking for a motherboard with at least 2 PCIe x16 slots, as that will allow you to add many more SATA or SAS ports via a SAS card.

    Hardware transcoding is far from a must. It’s only really necessary if you have a lot of media in formats your client devices don’t support. 95% of my library is h.264 in 1080p, which is supported on pretty much everything, so it plays directly without any transcoding; most 1080p media is encoded in h.264, so it’s usually a non-issue. 4K media, however, often comes in HEVC (h.265), which many devices do not support. Those files have to be transcoded to play on devices that don’t support them, but a CPU can still do that with “software transcoding”; it’s just much slower and less responsive. So I would consider hardware transcoding a nice convenience, but definitely not a must, and it depends entirely on the encoding of your media library.
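    If you are unsure how much of your own library would actually need transcoding, you can check the codec of each file with ffprobe. A minimal sketch (the library path and the set of codecs your clients can direct-play are assumptions, adjust them to your setup):

    ```python
    import subprocess
    from pathlib import Path

    # Codecs my client devices can play natively (assumption - adjust to your hardware).
    DIRECT_PLAY = {"h264"}

    def video_codec(path: Path) -> str:
        """Return the codec of the first video stream, as reported by ffprobe."""
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "v:0",
             "-show_entries", "stream=codec_name",
             "-of", "default=noprint_wrappers=1:nokey=1", str(path)],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

    # Hypothetical library location - point this at your own media folder.
    for f in Path("/srv/media").rglob("*.mkv"):
        codec = video_codec(f)
        if codec not in DIRECT_PLAY:
            print(f"{f} is {codec} - would need transcoding on my clients")
    ```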

    EDIT: Oh, I just remembered… Beware of non-standard hardware, for example motherboards from Dell and IBM/Lenovo. These often come with non-standard fan mounts and headers, which means you can’t replace the fans. They also often have non-standard power supplies in non-standard form factors, which means that if the power supply dies, it’s nearly impossible to replace, and when you upgrade your motherboard you are likely forced to replace the power supply as well; and since the size of the power supply isn’t standard, the new one will not fit in the case… Many of their motherboards also use non-standard mounts, which means you are forced to replace the case when upgrading the motherboard… You can often find companies selling their old workstations dirt-cheap, which can be a great way to get started, but these workstations are often so non-standard that you practically can’t upgrade them… Often the only standard components are the hard drives, SSDs, optical disc drives, memory, and any installed PCIe cards.


  • As long as it’s capable of booting into Linux, you can start building a homelab…

    Initially I had a 2-bay Synology NAS, and a Raspberry Pi 3B… It was very modest, but enough to stream media to my TV and run a bunch of different stuff in docker containers.

    In my house, computer hardware is handed down. I buy something to upgrade my desktop, and whatever falls off that machine is handed down to my wife or my daughter’s machines, then finally it’s handed down to the server.

    At some point my old Core i7-920 ended up in the server. This was plenty to upgrade the server to running Kubernetes with even more stuff, and even software transcoding some media for streaming. Running BTRFS gave me the flexibility to add various used disks over time.

    At some point the CPU went bad, so I bought an upgrade for my desktop and handed my old CPU down the chain, which freed up an Intel Core i5-2400F for the server. At this point storage and memory started to become the main limiting factors, so I added a PCIe SAS card in IT mode to add more disks.

    At this point my wife needed a faster CPU, so I bought a newer used CPU for her, and her old Intel Core i7-3770 was handed down to the server. That gave quite a boost in raw CPU power.

    I ended up with a spare Intel Core i5-7600 because the first motherboard I bought for my wife was dead. I found that I could buy a matching motherboard very cheaply, so I upgraded the server with it, which opened up proper hardware transcoding.

    I have since added 2 Intel NUCs to have a highly available control plane for my cluster.

    This is where my server is at right now, and it’s way beyond sufficient for the media streaming, photo library, various game servers, a lot of self-hosted smart home stuff, and all sorts of other random bits and pieces I want to run.

    My suggestion would be to start out by finding the cheapest possible option, and then learn what your needs are.

    What do you want your server to do? What software do you want to run? What hardware do you want to connect to it? All of this will evolve as you start using your server more and more, and you will learn what you need to buy to achieve what you want.




  • I really don’t see much benefit to running two clusters.

    I’m also running single clusters with multiple ingress controllers both at home and at work.

    If you are concerned with blast radius, you should probably first look into setting up Network Policies to ensure that pods can’t talk to things they shouldn’t.
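    Network policies are normally written as YAML manifests and applied with kubectl, but as a rough sketch of the idea (using the official kubernetes Python client; the namespace name is just an example), a default-deny ingress policy looks something like this:

    ```python
    from kubernetes import client, config

    # Load credentials from ~/.kube/config (use load_incluster_config() inside a pod).
    config.load_kube_config()

    # An empty pod selector matches every pod in the namespace; listing "Ingress"
    # with no ingress rules means: deny all incoming traffic unless another
    # policy explicitly allows it.
    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="default-deny-ingress"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),
            policy_types=["Ingress"],
        ),
    )

    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace="media",  # example namespace
        body=policy,
    )
    ```

    From there you add narrower policies that allow only the traffic each workload actually needs.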

    There is of course still the risk of something escaping the container, but the risk is rather low in comparison. There are options out there for hardening the container runtime further.

    You might also look into adding things that can monitor the cluster for intrusions or prevent them. Stuff like running CrowdSec on your ingresses, and using Falco to watch for various malicious behaviour.


  • ZFS doesn’t really support mismatched disks. In OP’s case it would behave as if there were 4x 2 TB disks, leaving 4 TB of raw storage unusable, and with 1 disk of parity that would yield 6 TB of usable storage. In the future the 2x 2 TB disks could be swapped with 4 TB disks, and then ZFS would make use of all the storage, yielding 12 TB of usable storage.

    BTRFS handles mismatched disks just fine; however, its RAID5 and RAID6 modes are still partially broken. RAID1 works fine, but it stores every block twice, so half the raw capacity goes to redundancy; this would again yield a total of 6 TB usable with the current disks.
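    The numbers above work out like this (a quick sanity check, assuming the disk mix is 2x 2 TB + 2x 4 TB with one disk of redundancy):

    ```python
    disks_tb = [2, 2, 4, 4]

    # RAIDZ1 (ZFS): every disk only contributes as much as the smallest disk,
    # and one disk's worth of that goes to parity.
    zfs_usable = min(disks_tb) * (len(disks_tb) - 1)          # 2 * 3 = 6 TB

    # BTRFS RAID1: every block is stored twice, so usable space is roughly half
    # of the total (as long as no disk is bigger than all the others combined).
    btrfs_usable = sum(disks_tb) / 2                          # 12 / 2 = 6 TB

    # After swapping the 2 TB disks for 4 TB ones:
    upgraded = [4, 4, 4, 4]
    zfs_after_upgrade = min(upgraded) * (len(upgraded) - 1)   # 4 * 3 = 12 TB

    print(zfs_usable, btrfs_usable, zfs_after_upgrade)
    ```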



  • My home-assistant installation alone is too much for my Raspberry Pi 3. It depends entirely on how much data it’s processing and needing to keep in memory.

    OctoPrint needs to respond in a timely manner, so you will want the system mostly idle (staying below 60 percent CPU at all times); preferably OctoPrint should be the only thing running on the system, unless the system is rather powerful.

    If I were you, I would install OctoPrint exclusively on your Raspberry Pi 3, and then buy a Raspberry Pi 4 for the other services.

    I’m running Pi-hole and a WireGuard VPN on an old Raspberry Pi 2, which is perfectly fine if you are not expecting gigabit speeds on the VPN.


  • It would be wonderful to have something more granular than “NSFW”…

    I would love it if we got something even more granular, like a "Content Warning: <category>".

    Examples:

    • Content Warning: nudity - might be a painting with nude people, might be a photo of nude people; in essence, anything that isn’t porn but shows exposed genitals, butts or breasts.
    • Content Warning: porn - you can probably guess…
    • Content Warning: gore - images with gore, people missing body parts, often dead as well.
    • Content Warning: death - images with people dying, but without gore.
    • Content Warning: blood - images with some blood, but no death or gore. (often seen in news articles)
    • Content Warning: violence - people fighting, but without turning bloody.

    These could of course be expanded with many more categories if need be.

    EDIT: added violence by request




  • The reason a VPN is better to expose than SSH is the feedback.

    If someone tries connecting to your SSH with the wrong key or password, they get a nice and clear permission denied. They now know that you have SSH, and which version, which might allow them to find a vulnerability.

    If someone connects to your WireGuard with the wrong key, they get zero response, exactly as if the port had not been open in the first place. They gain no additional information, and they don’t even know that the port was open.
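    You can see the difference with a couple of quick probes (illustration only; the address is a placeholder, and 22/51820 are the default SSH and WireGuard ports):

    ```python
    import socket

    HOST = "203.0.113.10"  # placeholder public IP

    # SSH: the server identifies itself before any authentication happens.
    with socket.create_connection((HOST, 22), timeout=5) as s:
        print(s.recv(64))  # e.g. b'SSH-2.0-OpenSSH_9.6 ...'

    # WireGuard: packets that aren't signed with a known key are silently dropped,
    # so the probe can't distinguish it from a port that simply isn't open.
    u = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    u.settimeout(5)
    u.sendto(b"\x00" * 32, (HOST, 51820))
    try:
        u.recvfrom(64)
    except socket.timeout:
        print("no response - looks exactly like a port that isn't open")
    ```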

    Try running your public IP through shodan.io, and see what ports and services are discovered.




  • I use Promtail+Loki+Grafana on my home server, which is decently performant, light on resources and storage, and searchable. It takes a little effort to learn the LogQL query language, but it’s very expressive.

    I’m running it on Kubernetes, but it should be pretty straightforward to configure for running on plain Docker.
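    For reference, a minimal sketch of querying Loki directly over its HTTP API (the Loki address and the label selector are assumptions for illustration):

    ```python
    import time
    import requests

    LOKI_URL = "http://loki.example.internal:3100"  # assumed address of the Loki service
    logql = '{app="nginx"} |= "error"'              # LogQL: nginx logs containing "error"

    now_ns = int(time.time() * 1e9)                 # Loki timestamps are in nanoseconds
    resp = requests.get(
        f"{LOKI_URL}/loki/api/v1/query_range",
        params={
            "query": logql,
            "start": now_ns - int(3600 * 1e9),      # last hour
            "end": now_ns,
            "limit": 100,
        },
        timeout=10,
    )
    resp.raise_for_status()

    for stream in resp.json()["data"]["result"]:
        for ts, line in stream["values"]:
            print(ts, line)
    ```

    Grafana wraps the same query API in a nicer UI, so anything you can click together there can also be scripted.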