• 0 Posts
  • 15 Comments
Joined 1 year ago
Cake day: December 14th, 2023

  • Partially yes. The tricky thing is that when using network_mode: "service:tailscale" (presumably on the caddy container, since that’s what needs to receive traffic from the tailscale network), you won’t be able to attach the caddy container to any other networks, because it’s using the tailscale container’s network stack. That means that for caddy to reach your containers, you need to add the tailscale container itself to the relevant networks; any containers attached to it get connected along with it.

    (Not sure if I misread the first time or if you edited, but the way you say it is right: add the tailscale container to the proxy network so that caddy is also added and can reach the containers.)

    Here’s the super condensed version of what matters for connecting traefik/caddy to a VPN like wireguard/tailscale.

    • I left out all WG config since presumably you know how to configure tailscale
    • Left out acme / letsencrypt stuff since that would be different on caddy anyway
    • You may need to configure caddy to trust the tailscale tunnel IP of the machine on the other end that will be reverse proxying over the tunnel.
    • Traefik, as far as I can tell, requires you to specify which docker network to use to reach services; I just put anything that should be accessible into “ingress”, as you can see. I’m not sure if my setup supports using a different proxy network per app, but maybe caddy allows that (a rough caddy/tailscale sketch follows the compose files below).

    My traefik compose:

    services:
      wireguard:
        container_name: wireguard
        networks:
          - ingress
    
      traefik:
        network_mode: "service:wireguard"
        depends_on:
          - wireguard
        command:
          - "--entryPoints.web.proxyProtocol.trustedIPs=10.13.13.1" # Trust remote tunnel IP, the WG container is 10.13.13.2
          - "--entrypoints.websecure.address=:443"
          - "--entryPoints.websecure.proxyProtocol.trustedIPs=10.13.13.1"
          - "--entrypoints.web.http.redirections.entrypoint.to=websecure"
          - "--entrypoints.web.http.redirections.entrypoint.scheme=https"
          - "--entrypoints.web.http.redirections.entrypoint.priority=100"
          - "--providers.docker.exposedByDefault=false"
          - "--providers.docker.network=ingress"
    
    networks:
      ingress:
        external: true
    
    

    And then in a service’s docker-compose:

    services:
      ui:
        image: myapp
        read_only: true
        restart: always
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.myapp.rule=Host(`xxxx.xxxx.xxxx`)"
          - "traefik.http.services.myapp.loadbalancer.server.port=80"
          - "traefik.http.routers.myapp.entrypoints=websecure"
          - "traefik.http.routers.myapp.tls.certresolver=mytlschallenge"
        networks:
          - ingress
    
    networks:
      ingress:
        external: true
    
    

    (edited to fix formatting on mobile)
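
    For a caddy + tailscale version of the same pattern, a rough, untested sketch might look like the following. The tailnet hostname, auth key, and Caddyfile contents are placeholders; the parts that matter are network_mode: "service:tailscale" on caddy plus attaching the tailscale container to the ingress network:

    services:
      tailscale:
        image: tailscale/tailscale
        hostname: caddy-ingress            # illustrative tailnet hostname
        environment:
          - TS_AUTHKEY=tskey-xxxx          # placeholder auth key
          - TS_STATE_DIR=/var/lib/tailscale
        volumes:
          - ./ts-state:/var/lib/tailscale
        devices:
          - /dev/net/tun
        cap_add:
          - NET_ADMIN
        networks:
          - ingress                        # tailscale joins the proxy network, caddy rides along

      caddy:
        image: caddy
        network_mode: "service:tailscale"  # caddy shares the tailscale container's network stack
        depends_on:
          - tailscale
        volumes:
          - ./Caddyfile:/etc/caddy/Caddyfile:ro

    networks:
      ingress:
        external: true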


  • I’ve done something similar but I’m not sure how helpful my example would be because I use wireguard instead of tailscale and traefik instead of caddy.

    The principle is the same though. IIRC I have my traefik container set to network_mode: "service:wireguard" so that the traefik container uses the wireguard container’s network stack. That way traefik also sees the wireguard interface and can receive traffic arriving at the wireguard IP. Then, at the other end of the wireguard tunnel, I can use haproxy to pass traffic to the wireguard IP through the tunnel, and it automatically hits traefik.




  • Immich has a setting for automatic photo backup over WiFi; I use the android app as a Google Photos replacement. You can choose as many folders on your phone as you want (I just do the camera roll), enable backup only over WiFi, and it backs up all the photos in original quality. I self-host the server on my Synology behind a reverse proxy (I can’t forward ports at my current place due to CGNAT), so I can access it from anywhere.

    I believe the app is cross platform so the iPhone version should be identical to the android one.


  • Woah federation would be huge!

    Someday I would love to be able to share and receive shared photos / albums to and from users on different servers. Especially if it lets me sync the original files so that I can keep a copy in case their server goes down. It would also be neat if you could enable activitypub so that your account could show up as a fediverse user that people can follow for public or approved follower only posts, pixelfed compatibility would be super cool.


  • Keep in mind that if you set up RAID using zfs or btrfs (idk how it works with other systems, but that’s what I’ve used), then you also get scrubs, which detect and fix bit rot and unrecoverable read errors. Without that or a similar system, those errors will go undetected and your backup system will back up the corrupted files as well.

    Personally, one of the main reasons I used zfs, and now btrfs with redundancy, is to protect irreplaceable files (family memories and the like) from those kinds of errors. I used to just keep stuff on a hard drive, until I discovered that loads of my irreplaceable vacation photos were corrupted, including the backups, which had faithfully backed up the corruption.

    If your files can be reacquired, then I don’t think it’s a big deal. But if they can’t, then I think having scrubs or integrity checks with redundancy (so that issues can be repaired), plus backups with snapshots (so that errors or mistakes don’t propagate into your backups), is a necessity. But it just depends on how much you value your files.


  • My primary use case is safeguarding my important personal artifacts (family photos, digitized paperwork, encryption key / account recovery / 2FA backups) against drive failure (~2TB), followed by my decently sized Plex server (23TB), immich, nextcloud, and various other small things like selfhosted bitwarden, grocy, ollama, and stuff like that.

    I run all of my stuff off of a 6-bay Synology (more drives helps with capacity efficiency: double redundancy across 6 drives costs you about a third of raw capacity, and I wanted to be protected against drive failures during rebuilds), with an Intel NUC on top to handle Plex/Jellyfin transcoding with Quick Sync instead of loading the poor NAS with CPU transcoding. I also run ollama on the NUC since it has faster cores than the NAS.



  • I’ve done a backup swap with friends a couple of times. Security wasn’t much of a worry, since we connected to each other’s boxes over ssh, wireguard, or similar and used tools that support encryption. The biggest challenge for us was that in my selfhosting friend group we all prefer different protocols, so we had to figure out what each of us wanted to use to connect and access filesystems and then set that up. The second challenge was ensuring uptime and keeping the remote access we set up for each other alive - and that’s what killed the project: we all eventually stopped maintaining the remote access and nobody seemed to care. So if I were to do it again, I would make sure all participants have alerts monitoring their shared endpoint.



  • Isn’t Miracast for sending the video data itself? The thing I like about Chromecast is that the phone or remote app just tells the Chromecast where to load the media from directly, and then only sends playback control commands. That makes it a lot lighter resource-wise, because you don’t need to proxy the stream through a device like a phone that wants to go to sleep to save battery.



  • BakedCatboy@lemmy.ml to Selfhosted@lemmy.world · NAS/Media Server Build Recommendations (edited, 1 year ago)

    I went with the DS1621xs+, the main driving factors being:

    • that I already had a 6-drive raidz2 array in TrueNAS and wanted to keep the same configuration
    • I also wanted ECC, which, while maybe not strictly necessary, matters because the most valuable thing I store is family photos, and I want to do everything within my budget to protect them.

    If I remember correctly, only the 1621xs+ met those requirements, though if I had been willing to go without ECC (which requires going with a Xeon), the DS620slim would have given me 6 bays plus integrated graphics with Quick Sync, which would have allowed power-efficient transcoding and running Plex/jf right on the NAS. So there are tradeoffs, but I tend to lean towards overkill.

    If you know what level of redundancy you want and how many drives you want to run (considering how much the drives will cost, whether you want an extra level of redundancy while a rebuild is happening after one failure, and how much space is sacrificed to parity), that’s a good way to narrow down off-the-shelf NASes if you go that route. Newegg’s NAS builder comes in handy: just select “All” capacities, then use the filters for number of drive bays, and you can compare what’s left.

    And since the 1621xs+ has a pretty powerful Xeon, I run most things on the NAS itself. Synology supports docker and docker compose out of the box (once the container app is installed), so I just ssh into the box and keep my compose folders somewhere in the btrfs volume. Docker nicely allows anything to be run without worrying about dependencies being available on the host OS; the only gotcha is kernel stuff, since docker containers share the host kernel. For example, WireGuard relies on kernel support, so I could only get it to work using a user-space WireGuard docker container (using boringtun), and only after the VPN/Tailscale app is installed (presumably because that adds the tun/tap interfaces needed for VPN containers to work).
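
    For reference, the user-space approach mostly comes down to passing the tun device through and granting NET_ADMIN. A minimal compose sketch (the image name and config path are placeholders, not a specific recommendation):

    services:
      wireguard:
        image: example/wireguard-userspace    # hypothetical boringtun/wireguard-go based image
        cap_add:
          - NET_ADMIN
        devices:
          - /dev/net/tun                      # tun device, present once the VPN app is installed
        volumes:
          - ./wg0.conf:/etc/wireguard/wg0.conf:ro
        restart: unless-stopped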

    Only jellyfin/Plex is on my NUC. On the nas I run:

    • Adguard

    • Sonarr/radarr/lidarr/prowlarr/transmission/overseerr

    • Castblock

    • Grocy

    • Nextcloud

    • A few nginx instances for websites

    • Uptime-kuma

    • Vaultwarden

    • Traefik and WireGuard, which connect to a VPS as a reverse proxy for anything that needs to be accessible from the public internet


  • Just want to second this - I use an Intel NUC10i7 with Quick Sync for Plex/Jellyfin; it can transcode at least 8 streams simultaneously without breaking a sweat, probably more if you don’t have 4K. A separate Synology NAS mainly handles storage. I run docker containers on both, and the NUC has my media mounted via a network share over a dedicated direct gigabit Ethernet link between the two, so I can keep all the filesystem access traffic off of my switch/LAN (a rough compose sketch of that mount is below).

    This strategy let me pick the best NAS for my redundancy needs (raidz2 / btrfs with double redundancy for my irreplaceable personal family memories) while getting a cost-effective, low-power Quick Sync device for transcoding my media collection. I chose transcoding on the fly over pre-transcoding or keeping multiple qualities in order to save HDD space and stay flexible for whoever I share with who has a slow connection.
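
    As an illustration of that layout, here is a hedged compose sketch for the NUC side, assuming the share is exported over NFS (the address, export path, and service are made up for the example; /dev/dri passthrough is the usual way to give a container access to Quick Sync):

    volumes:
      media:
        driver: local
        driver_opts:
          type: nfs
          o: addr=10.0.0.2,ro,nfsvers=4   # NAS address on the dedicated direct link
          device: ":/volume1/media"       # NFS export on the NAS

    services:
      jellyfin:
        image: jellyfin/jellyfin
        devices:
          - /dev/dri:/dev/dri             # pass the iGPU through for Quick Sync transcoding
        volumes:
          - media:/media:ro
        restart: unless-stopped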