• 0 Posts
  • 197 Comments
Joined 10 months ago
Cake day: January 2nd, 2025

  • Others have mentioned SFF desktops.

    My current server is an old Dell Optiplex SFF desktop. Idles at just under 20w, peaks at 80. Currently has an NVME boot drive, and an 8TB 3.5" drive.

    Runs like a champ, easily serves Jellyfin video, with transcoding, while converting videos with handbrake (and with 2 other systems converting videos off that drive over the net).

    For cost, internal space, options, and power, it’s hard to beat an SFF. If you don’t need internal space or conversion power, then a NUC can work (the lack of sufficient cooling limits its converting capabilities).







  • I sync hundreds of gigs, (if not terabytes at this point) using Syncthing with errors on only one machine (it’s running on 6 devices, including a VM). And those errors are of my own doing, not random Syncthing errors.

    It’s surprisingly robust these days, especially for single-user notes.

    I have an indexing job that runs on my server every 30 minutes, saving into a text file (it indexes my media folder, which is about 3TB of movies and TV shows).

    Those text files sync to my phone when they’ve changed (so every 30 minutes). They’re always up to date when I open them.

    My phone also has jobs to continually sync my photos to home, an ad-hoc folder to my laptop, and about 25 other folder pairs (including NeoBackup) that sync under different conditions, without fail.

    I’m currently testing Cherrytree using Sourcherry on Android and it seems to work fine as a single-user solution with Syncthing.



  • Others have clarified, but I’d like to add that security isn’t one thing - it’s done in layers so each layer protects from potential failures in another layer.

    This is called the Swiss Cheese Model of risk mitigation.

    If you take a bunch of random slices of Swiss cheese and stack them up, how likely is there to be a single hole that goes through every layer?

    Using more layers reduces the risk of “hole alignment”.
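    You can put toy numbers on that: if each independent layer has, say, a 10% chance of a usable “hole”, the odds of holes lining up through every layer shrink geometrically with depth (illustrative numbers only):

```python
# Toy Swiss-cheese arithmetic with independent layers and a made-up 10%
# per-layer "hole" probability.
def breach_probability(p_hole: float, layers: int) -> float:
    """Probability that independent holes align through all layers."""
    return p_hole ** layers

for n in (1, 2, 3, 4):
    print(f"{n} layer(s): {breach_probability(0.10, n):.4f}")
```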

    Here’s an example model:

    Start with a router that has no open ports, then a mesh VPN (WireGuard/Tailscale) to access different services.

    That VPN should have rules that only specific ports may be connected to specific hosts.

    Hosts are on an isolated network (could be VLANs), with only specific ports permitted into the VLAN via the VPN (service dependent).
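    With Tailscale, for example, those rules live in the tailnet’s ACL file. A minimal sketch (the group, hostnames, tailnet IPs, and ports here are all invented for illustration):

```json
{
  "groups": {
    "group:family": ["alice@example.com"]   // hypothetical user
  },
  "hosts": {
    "jellyfin-vm": "100.64.0.10",           // assumed tailnet addresses
    "nas":         "100.64.0.11"
  },
  "acls": [
    // Family devices may reach only the Jellyfin port on that one host.
    {"action": "accept", "src": ["group:family"], "dst": ["jellyfin-vm:8096"]},
    // And only the SMB port on the NAS.
    {"action": "accept", "src": ["group:family"], "dst": ["nas:445"]}
  ]
}
```

    Tailscale ACLs are HuJSON, so the comments are legal, and the default is deny: anything not explicitly accepted is blocked.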

    Each service and host should use unique names for admin/root, with complex passwords, and preferably 2FA (or in the case of SSH, certs).

    Admin/root access should be limited to local devices, and if you want to get really restrictive, specific devices.
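    On the SSH side, most of that can be enforced in `sshd_config`. An illustrative fragment (the user name and address are placeholders):

```
# /etc/ssh/sshd_config (fragment, illustrative)
PasswordAuthentication no                 # keys/certs only
PermitRootLogin no                        # log in as a named user, not root
AllowUsers svc-admin                      # hypothetical unique admin name
ListenAddress 10.20.0.5                   # bind only to the isolated VLAN (assumed address)
TrustedUserCAKeys /etc/ssh/user_ca.pub    # accept certificates signed by your SSH CA
```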

    In the enterprise it’s not unusual to have an admin password management system where you have to request an admin password for a specific system, for a specific period of time (the password is delivered via a secure mechanism, sometimes in person). The request is logged, and when the requested time frame expires the password is changed.

    Everyone’s risk model and Swiss cheese layering will fall somewhere on this scale.


  • About 5 years ago I opened a port to run a test.

    Within hours it was getting hammered (probably by scripts) trying to figure out what that port was forwarded to, and trying to connect.

    I closed the port about a week later, but not before that poor consumer router was overwhelmed with the hits.

    For the next 2 years I’d still get hammered with scans occasionally.

    There are tools out there continually looking for open ports, they probably get added to a database and hackers/script kiddies, whoever, will try to get in.

    What’s interesting is I did the same thing around 2000 with a DSL connection (which was very much a static address) and it wasn’t an issue, even though there were fewer always-on consumer connections.




  • Others have mentioned Syncthing as a sync solution. I’d like to add a couple points:

    Syncthing can work fine even for solutions that are intended to use their own sync, provided it’s a single-user setup. You’re not likely to make simultaneous changes on 2 devices, so collisions are unlikely.

    Also for using Syncthing, I recommend Syncthing-Fork for Android - it moves sync conditions into the folder/job rather than global. Very useful when you have jobs you want always syncing, and jobs you want to only sync on wifi and power.

    If using iOS there’s an ST client called Möbius Sync ($5), developed by a company that financially supports Syncthing.

    For Windows, get SyncTrayzor - it makes running and managing ST easier.





  • So don’t expose it to the internet - which should be the default stance for anything.

    The internet was built without security - partly by oversight, partly by design. That doesn’t mean we should just accept it; instead we should build everything with our own security.

    Numerous mesh VPN solutions exist: Hamachi has been around since at least 2006, NeoRouter since at least 2012. Then we have WireGuard and Tailscale, and others.

    Businesses build their own tunnels between locations, using routers/gateways with that capability. Consumer routers from Linksys could do this in 2006.
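    A site-to-site WireGuard tunnel is a handful of lines these days. Sketch of one end (the keys, hostname, and subnets are placeholders):

```ini
# /etc/wireguard/wg0.conf on site A (bring up with `wg-quick up wg0`)
[Interface]
Address = 10.100.0.1/24
PrivateKey = <site-A-private-key>
ListenPort = 51820

[Peer]
# Site B's gateway
PublicKey = <site-B-public-key>
Endpoint = site-b.example.com:51820
AllowedIPs = 10.100.0.2/32, 192.168.2.0/24   # peer tunnel IP plus its LAN
PersistentKeepalive = 25
```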

    There’s zero excuse for running anything exposed to the internet.

    In closing, NO SOFTWARE is free of bugs. With Plex you get to pay for those bugs, and still have software that depends on an internet connection even though you’re hosting and viewing your own media locally.

    If you wanna denigrate Jellyfin, at least be honest about the pros/cons of the different solutions.



  • What are you trying to guard against with backups? It sounds like your greatest concern is data loss from hardware failure.

    The 3-2-1 approach exists because it addresses the different concerns about data loss: hardware failures, accidental deletion, physical disaster.

    That drive in your safe isn’t a good backup - drives fail just as often when offline as online (I believe they fail more often when powered off, but I don’t have data to support that). That safe isn’t waterproof, and its fire resistance is designed to protect paper, not hard drives.

    If this data is important enough to back up, then it’s worth having an off-site copy of your backup. Backblaze is one way, but there are a number of cloud-based storages that will work (Hetzner, etc).

    As to your Windows/Linux concern, just have a consistent data storage location, treat that location as authoritative, and perform backups from there. For example - I have a server, a NAS, and an always-on external drive as part of my data duplication. The server is authoritative, laptops and phones continuously sync to it via Syncthing or Resilio Sync, and it duplicates to the NAS and external drives on a schedule. I never touch the NAS or external drives. The server also has a cloud backup.