• 0 Posts
  • 179 Comments
Joined 9 months ago
Cake day: January 2nd, 2025

  • What are you trying to guard against with backups? It sounds like your greatest concern is data loss from hardware failure.

    The 3-2-1 approach exists because it addresses the different concerns about data loss: hardware failures, accidental deletion, physical disaster.

    That drive in your safe isn’t a good backup - drives fail just as often offline as online (I suspect they fail more often when powered off, but I don’t have data to support that). That safe isn’t waterproof, and its fire resistance is designed to protect paper, not hard drives.

    If this data is important enough to back up, then it’s worth having an off-site copy of your backup. Backblaze is one way, but there are a number of cloud-based storage providers that will work (Hetzner, etc.).

    As to your Windows/Linux concern, just have a consistent data storage location, treat that location as authoritative, and perform backups from there. For example - I have a server, a NAS, and an always-on external drive as part of my data duplication. The server is authoritative, laptops and phones continuously sync to it via Syncthing or Resilio Sync, and it duplicates to the NAS and external drives on a schedule. I never touch the NAS or external drives. The server also has a cloud backup.
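
    For illustration, a minimal sketch of that “authoritative source, scheduled duplication outward” pattern in Python over rsync - the paths here are assumptions, not my actual layout:

        #!/usr/bin/env python3
        """Mirror the authoritative data store to its secondary copies (sketch)."""
        import subprocess

        SOURCE = "/srv/data/"  # authoritative location (assumed path)
        DESTINATIONS = [
            "/mnt/nas/data/",       # NAS mount (assumed)
            "/mnt/external/data/",  # always-on external drive (assumed)
        ]

        for dest in DESTINATIONS:
            # -a preserves permissions/timestamps; --delete keeps each
            # destination an exact mirror of the source
            subprocess.run(["rsync", "-a", "--delete", SOURCE, dest], check=True)

    Run that on a schedule and data only ever flows outward from the authoritative box, so the NAS and external drives never need conflict resolution.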





  • Onomatopoeia@lemmy.cafe to Selfhosted@lemmy.world · Pi NAS for multi-location backups
    edited · 15 days ago

    Sync is not backup.

    Let’s repeat that - sync is not backup.

    If your sync job syncs an unintentional deletion, the file is deleted, everywhere.

    Backup stores versions of files based on the definitions you provide. A common backup schedule for a home system may be a monthly full backup plus daily incrementals. That way you have multiple versions of any file that’s changed.
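
    For illustration, GNU tar’s --listed-incremental option implements exactly this full-plus-incremental pattern; a rough Python sketch, with all paths assumed:

        #!/usr/bin/env python3
        """Monthly full, daily incremental backups via GNU tar (sketch)."""
        import datetime
        import os
        import subprocess

        SRC = "/srv/data"                # what to back up (assumed path)
        SNAP = "/srv/backup/state.snar"  # tar's incremental state file (assumed)

        today = datetime.date.today()
        if today.day == 1:
            # On the 1st, drop the state file so tar takes a full backup
            try:
                os.remove(SNAP)
            except FileNotFoundError:
                pass

        archive = f"/srv/backup/data-{today.isoformat()}.tar.gz"
        subprocess.run(
            ["tar", f"--listed-incremental={SNAP}", "-czf", archive, SRC],
            check=True,
        )

    Restoring means extracting the newest full archive, then each later incremental in order - which is exactly where those multiple versions come from.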

    With sync you only have replicas of a single version of each file, and all of those replicas can be lost through the sync itself.

    Now, you could run backup software against a local destination, and have that destination synchronized to remote systems. Syncthing could do this, with the additional safety of a “send only” folder configuration, so if a remote destination gets corrupted, it won’t sync back to the source.
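
    In Syncthing that’s the folder type; from memory (so treat the specifics as an assumption) the relevant bit of config.xml looks roughly like this, with a made-up folder id and path - the same option is exposed in the web GUI:

        <folder id="backup-store" path="/srv/backups" type="sendonly">
            <!-- sendonly: local changes propagate outward; remote changes
                 are flagged for review but never applied to this copy -->
        </folder>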

    Edit: as for a Pi NAS, I’ve found small-form-factor desktops to be a better value. They don’t have much physical space for drives, but I’ve been able to fit two 3.5" drives or four 2.5" drives in one. My current one idles at under 15 W.

    Or a mini PC with one drive. Since you’re replicating this data to multiple locations, local redundancy (e.g. mirroring) isn’t really necessary.

    Of course, this assumes your net backup requirements fit on a single drive - under about 12 TB, or whatever the largest current single-drive size is.


  • Sure I can.

    You’re complaining about needing 4 GB of RAM on a virtualized platform in 2025, when 4 GB of RAM was common in laptops (which are heavily space-constrained) thirteen years ago.

    It’s a fair comparison.

    When I spin up a Linux VM, I give it 4 GB - that’s the practical minimum today, and the virtualization platform will over-commit RAM anyway, since it knows how to best utilize it.

    I can run a Linux box in 2 GB, but as soon as I start doing anything with it, more RAM will be required.





  • As others have said, sync isn’t backup.

    It may be part of a backup plan, however.

    I use Syncthing on my mobile devices to keep data created on the devices synchronized to my server at home. Things like photos sync to home over any connection, while I sync other stuff only over wifi. Syncthing-Fork allows you to set these conditions on a per-folder-pair basis.

    That server becomes my authoritative box for any data. All that data is then mirrored on a schedule to 2 other systems at home (a NAS and a large drive on another box).

    The main server also has a cloud backup which runs continuously.
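
    The cloud leg can be any tool that does versioned, encrypted uploads; purely as a hedged illustration (restic is one such option, not necessarily what I use, and the repository details here are invented):

        # back the authoritative store up to cloud object storage
        export RESTIC_REPOSITORY="b2:my-backup-bucket:/server"   # invented bucket
        export RESTIC_PASSWORD_FILE="/root/.restic-pass"
        restic backup /srv/data                                  # encrypted, deduplicated snapshot
        restic forget --keep-daily 7 --keep-monthly 12 --prune   # thin out old snapshots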

    So I have 3 local copies of data to recover from if I have a hardware failure, and a cloud backup.

    I find tools like Syncthing and Resilio are good for synchronization, especially with mobile devices. But between full PCs, I just use native tools (scripts and schedules), because I don’t want synchronization there - I want specific patterns of copying, mirroring, etc.
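
    As an illustration of the “scripts and schedules” half, it can be as simple as a couple of cron entries on the authoritative box (times and paths invented):

        # m  h  dom mon dow  command
        30   2  *   *   *    rsync -a --delete /srv/data/ /mnt/nas/data/       # nightly mirror to NAS
        0    3  *   *   0    rsync -a --delete /srv/data/ /mnt/external/data/  # weekly mirror to external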

    I do use Resilio for ad-hoc access to almost any file on my server, since its Selective Sync feature permits me to connect with a mobile device from anywhere and sync only the files I select. So I can grab a movie or TV show, Resilio will sync it, and I can watch it once the sync is complete.








  • As others have said, it’s probably overheating.

    That’s a mini, and it likely doesn’t have any fans at all (or only something perfunctory), so it probably won’t handle being run at high CPU for more than a few minutes.

    I currently have a small-form-factor PC with the same issue - drive and general box temps were high (the drive sat at 110 °F / about 43 °C continuously - within spec, but on the edge), and it would randomly reboot.

    Replacing the paste on the CPU cooler helped a lot (no more random reboots), but adding a compressor-type fan dropped box temps (and more importantly drive temps) down to room temp.

    I think the best you may be able to do is attach an external compressor-type fan with some duct tape.


  • Onomatopoeia@lemmy.cafe to Selfhosted@lemmy.world · DNS server
    edited · 1 month ago

    Ah, unbound has the root DNS servers hard-coded. That’s a significant point.

    Any reason you couldn’t do the same with any other DNS server, such as Pi-hole?

    I’m really trying to understand why I’d run two DNS servers in series instead of one. All this sounds like it’s just a different config that (in the case of unbound) happens to be built in - is there something else I’m missing that unbound does differently?

    Why couldn’t you just configure the root/TLD servers as your upstream DNS in whatever local DNS server? Isn’t that what enterprises do?
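
    For context, full recursion is unbound’s default behaviour: with no forward-zone configured, it walks down from the root servers itself. A minimal unbound.conf sketch (the interface and LAN range are assumptions):

        # /etc/unbound/unbound.conf - minimal recursive resolver (sketch)
        server:
            interface: 0.0.0.0
            access-control: 192.168.1.0/24 allow   # assumed LAN range
            # No forward-zone block, so unbound resolves recursively,
            # starting from its built-in root hints.

    That built-in recursion is the config difference in question: a resolver working down from the roots needs no upstream at all.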