Soulseek, among others. Putting my ~400 GB classical music collection out there.
I think you already have a kill-switch (of sorts) in place with the two Wireguard container setup, since your clients lose internet access (except to the local network, since there’s a separate route for that on the Wireguard “server” container) if the tunnel goes down (e.g. after a wg-quick down wg0 inside the container).
I can’t be 100% sure, because I’m not a networking expert, but this seems like enough of a “kill-switch” to me. I’m not sure what you mean by leveraging the restart. One of the things I found annoying about the Gluetun approach is that I would have to restart every container that depends on its network stack whenever Gluetun itself got restarted/updated.
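A quick way to sanity-check that behavior (just a sketch; the client container name here is hypothetical, so adjust it to your setup): from a device connected through the Wireguard “server” peer, check your public IP, take the tunnel down in the VPN client container, and confirm requests no longer go through:
curl https://ifconfig.me                      # should show your VPN provider's IP
docker exec -it vpn-client wg-quick down wg0  # bring the tunnel down in the client container
curl https://ifconfig.me                      # should now fail/time out instead of leaking your real IP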
But anyway, I went ahead and messed around on a VPS with the Wireguard + Gluetun approach and I got it working. I am using the latest versions of the Linuxserver.io Wireguard container and Gluetun at the time of writing. There are two things missing in the Gluetun firewall configuration you posted:
1. A MASQUERADE rule on the tunnel, meaning the tun0 interface.
2. The FORWARD chain (filter table) drops packets by default. You’ll have to change that chain policy to ACCEPT. Again, I’m not a networking expert, so I’m not sure whether or not this compromises the kill-switch in any way, at least in any way that’s relevant to the desired setup/behavior. You could potentially set a more restrictive rule to only allow traffic coming in from <wireguard_container_IP>, but I’ll leave that up to you.
You’ll also need to figure out the best way to persist the rules through container restarts.
First, here’s the docker compose setup I used:
networks:
  wghomenet:
    name: wghomenet
    ipam:
      config:
        - subnet: 172.22.0.0/24
          gateway: 172.22.0.1

services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    ports:
      - 8888:8888/tcp # HTTP proxy
      - 8388:8388/tcp # Shadowsocks
      - 8388:8388/udp # Shadowsocks
    volumes:
      - ./config:/gluetun
    environment:
      - VPN_SERVICE_PROVIDER=<your stuff here>
      - VPN_TYPE=wireguard
      # - WIREGUARD_PRIVATE_KEY=<your stuff here>
      # - WIREGUARD_PRESHARED_KEY=<your stuff here>
      # - WIREGUARD_ADDRESSES=<your stuff here>
      # - SERVER_COUNTRIES=<your stuff here>
      # Timezone for accurate log times
      - TZ=<your stuff here>
      # Server list updater
      # See https://github.com/qdm12/gluetun-wiki/blob/main/setup/servers.md#update-the-vpn-servers-list
      - UPDATER_PERIOD=24h
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    networks:
      wghomenet:
        ipv4_address: 172.22.0.101

  wireguard-server:
    image: lscr.io/linuxserver/wireguard
    container_name: wireguard-server
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1001
      - TZ=<your stuff here>
      - INTERNAL_SUBNET=10.13.13.0
      - PEERS=chromebook
    volumes:
      - ./config/wg-server:/config
      - /lib/modules:/lib/modules #optional
    restart: always
    ports:
      - 51820:51820/udp
    networks:
      wghomenet:
        ipv4_address: 172.22.0.5
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
You already have your “server” container properly configured. Now for Gluetun:
I exec into the container: docker exec -it gluetun sh
Then I set the MASQUERADE rule on the tunnel: iptables -t nat -A POSTROUTING -o tun+ -j MASQUERADE
And finally, I change the FORWARD chain policy in the filter table to ACCEPT: iptables -t filter -P FORWARD ACCEPT
Note on the last command: in my case I used iptables-legacy because all the rules were already defined there (iptables gives you a warning if that’s the case), but your container’s version may vary. I saw different behavior on the testing container I spun up on the VPS compared to the one I have running on my homelab.
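As for persisting those rules through restarts, one simple option (a sketch I haven’t battle-tested) is to keep them in a small script on the host and re-run it whenever the gluetun container is recreated:
#!/bin/sh
# fix-gluetun-rules.sh -- re-apply the two rules after gluetun (re)starts
# note: running this twice without a container restart will append a duplicate MASQUERADE rule
docker exec gluetun iptables -t nat -A POSTROUTING -o tun+ -j MASQUERADE
docker exec gluetun iptables -t filter -P FORWARD ACCEPT
# swap in iptables-legacy above if that's where your container's rules live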
Good luck, and let me know if you run into any issues!
EDIT: The rules look like this afterwards:
Output of iptables-legacy -vL -t filter:
Chain INPUT (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
10710 788K ACCEPT all -- lo any anywhere anywhere
16698 14M ACCEPT all -- any any anywhere anywhere ctstate RELATED,ESTABLISHED
1 40 ACCEPT all -- eth0 any anywhere 172.22.0.0/24
# note the ACCEPT policy here
Chain FORWARD (policy ACCEPT 3593 packets, 1681K bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
10710 788K ACCEPT all -- any lo anywhere anywhere
13394 1518K ACCEPT all -- any any anywhere anywhere ctstate RELATED,ESTABLISHED
0 0 ACCEPT all -- any eth0 dac4b9c06987 172.22.0.0/24
1 176 ACCEPT udp -- any eth0 anywhere connected-by.global-layer.com udp dpt:1637
916 55072 ACCEPT all -- any tun0 anywhere anywhere
And the output of iptables -vL -t nat:
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER_OUTPUT all -- any any anywhere 127.0.0.11
# note the MASQUERADE rule here
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER_POSTROUTING all -- any any anywhere 127.0.0.11
312 18936 MASQUERADE all -- any tun+ anywhere anywhere
Chain DOCKER_OUTPUT (1 references)
pkts bytes target prot opt in out source destination
0 0 DNAT tcp -- any any anywhere 127.0.0.11 tcp dpt:domain to:127.0.0.11:39905
0 0 DNAT udp -- any any anywhere 127.0.0.11 udp dpt:domain to:127.0.0.11:56734
Chain DOCKER_POSTROUTING (1 references)
pkts bytes target prot opt in out source destination
0 0 SNAT tcp -- any any 127.0.0.11 anywhere tcp spt:39905 to::53
0 0 SNAT udp -- any any 127.0.0.11 anywhere udp spt:56734 to::53
Gluetun likely doesn’t have the proper firewall rules in place to enable this sort of traffic routing, simply because it’s made for another use case (using the container’s network stack directly with network_mode: "service:gluetun").
Try to first get this setup working with two vanilla Wireguard containers (instead of Wireguard + Gluetun). If it works, you’ll know that your Wireguard “server” container is properly set up. Then replace the second container that’s acting as a VPN client with Gluetun and run tcpdump again. You likely need to add a POSTROUTING MASQUERADE rule in the NAT table.
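For example (a sketch; the container names are assumptions and tcpdump may need to be installed in the containers first), you can watch whether forwarded packets actually make it out through the tunnel:
# on the VPN client side: forwarded traffic should leave via the tunnel interface
docker exec -it gluetun tcpdump -ni tun0
# on the Wireguard "server" side: traffic should arrive from peers on wg0
docker exec -it wireguard-server tcpdump -ni wg0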
Here’s my own working setup for reference.
Wireguard “server” container:
[Interface]
Address = <address>
ListenPort = 51820
PrivateKey = <privateKey>
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostUp = wg set wg0 fwmark 51820
PostUp = ip -4 route add 0.0.0.0/0 via 172.22.0.101 table 51820
PostUp = ip -4 rule add not fwmark 51820 table 51820
PostUp = ip -4 rule add table main suppress_prefixlength 0
PostUp = ip route add 192.168.16.0/24 via 172.22.0.1
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; ip route del 192.168.16.0/24 via 172.22.0.1
#peer configurations (clients) go here
and the Wireguard VPN client that I route traffic through:
# Based on my VPN provider's configuration + additional firewall rules to route traffic correctly
[Interface]
PrivateKey = <key>
Address = <address>
DNS = 192.168.16.81 # local Adguard
PostUp = iptables -t nat -A POSTROUTING -o wg+ -j MASQUERADE #Route traffic coming in from outside the container (host/other container)
PreDown = iptables -t nat -D POSTROUTING -o wg+ -j MASQUERADE
[Peer]
PublicKey = <key>
AllowedIPs = 0.0.0.0/0
Endpoint = <endpoint_IP>:51820
Note the NAT MASQUERADE rule.
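If you want to double-check the policy routing those PostUp lines create, a few read-only commands will show the fwmark rule and the dedicated routing table (the container name here is an assumption; adjust it to whatever your “server” container is called):
docker exec -it wireguard-server ip -4 rule show
docker exec -it wireguard-server ip -4 route show table 51820
docker exec -it wireguard-server wg show wg0 fwmark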
I set it up manually using this as a guide. It was a lot of work because I had to adapt it to my use case (not using a VPS), so I couldn’t just follow the guide, but I learned a lot in the process and it works well.
I’ve tried both this and https://github.com/jmorganca/ollama. I liked the latter a lot more; just can’t remember why.
GUI for ollama is a separate project: https://github.com/ollama-webui/ollama-webui
SWAG is great for overwhelmed Nginx beginners. It comes preconfigured with reasonable defaults and also provides configs for a bunch of popular services: https://github.com/linuxserver/reverse-proxy-confs. Both Bitwarden and Vaultwarden are on there.
Note that this setup assumes that you will run your service (Bitwarden/Vaultwarden) in a Docker container. You can make SWAG work with something that’s running directly on the host, but I’d recommend not starting with that until you’ve fooled around with this container setup a bit and gained a better understanding of how Nginx and reverse proxies in general work.
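As a rough illustration (the host path is an assumption about where you mount SWAG’s /config; check the container docs for the exact layout), enabling one of those preset confs usually comes down to renaming the sample file and restarting SWAG:
cp ./swag/nginx/proxy-confs/vaultwarden.subdomain.conf.sample \
   ./swag/nginx/proxy-confs/vaultwarden.subdomain.conf
docker restart swag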
Lmao even
In response to your update: Try specifying the user that’s supposed to own the mapped directories in the docker compose file. Then make sure the UID and GID you use match an existing user on the new system you are testing the backup on.
First you need to get the ID of the user you want to run the container as. For a user called foo, run id foo. Note down the UID and GID.
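For illustration, the output looks something like this (the exact numbers and groups will differ on your system):
$ id foo
uid=1000(foo) gid=1000(foo) groups=1000(foo)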
Then in your compose file, modify the db_recipes service definition and set the UID and GID of the user that should own the mapped volumes:
db_recipes:
  restart: always
  image: postgres:15-alpine
  user: "1000:1000" # Replace this with the corresponding UID and GID of your user
  volumes:
    - ./postgresql:/var/lib/postgresql/data
  env_file:
    - ./.env
Recreate the container using docker compose up -d (don’t just restart it; you need to load the new config from the docker compose file). Then inspect the postgresql directory using ls -l to check whether it’s actually owned by the user with UID 1000 and the group with GID 1000. This should solve the issue you are having with that backup program you’re using. It’s probably unable to copy that particular directory because it’s owned by root:root and you’re not running it as root (don’t do that; it would circumvent the real problem rather than help you address it).
Now, when it comes to copying this to another machine, as already mentioned you could use something that preserves permissions like rsync, but for learning purposes I’d intentionally do it manually as you did before, even at the risk of messing things up again. On the new machine, repeat this process. First find the UID and GID of the current non-root user (or whatever user you want to run your containers as). Then make sure that UID and GID are set in the compose files. Then inspect the directories to make sure they have the correct ownership. If the compose file isn’t honoring the user flag or if the ownership doesn’t match the UID and GID you set for whatever reason, you can also use chown -R UID:GID ./postgresql to change ownership (replace UID:GID with the actual IDs), but that might get overwritten if you don’t properly specify it in the compose file as well, so only do it for testing purposes.
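For example (assuming you’re in the directory that contains the compose file), ls -ln shows the numeric IDs directly, which makes the comparison easier:
ls -ln ./postgresql                    # numeric UID/GID instead of names
sudo chown -R 1000:1000 ./postgresql   # only if ownership doesn't match what you set in the compose file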
Edit: I also highly recommend using CLIs (terminal) instead of the GUI for this sort of thing. In my experience, the GUIs aren’t always designed to give you all the information you need and can actually make things more difficult for you.
As others have already mentioned, you are probably correct that it’s a permission error. You could follow the already posted advice to use tools that maintain permissions like rsync, but fixing this botched backup manually could help you learn how to deal with permissions and that’s a rather fundamental concept that anyone selfhosting would benefit from understanding.
If you decide to do this, I would recommend reading up on the concept of user and group permissions on Linux and the commands that allow you to inspect ownership and permissions of directories and files, as well as the UID and GID of users. The next step would be to understand how Docker handles permissions for mapped directories. You can get a few pointers from this short explanation by LSIO: https://docs.linuxserver.io/general/understanding-puid-and-pgid. Bear in mind that this is not a Docker standard, but something specific to LSIO Docker images. See also https://docs.docker.com/compose/compose-file/05-services/#long-syntax. This can also be set when using docker run via the --user flag.
Logs can also help pinpoint the cause of the issue. The default docker compose setup in Tandoor’s docs sets up several containers, one of which acts as a database (db_recipes, based on postgres:15-alpine). Inspect it in real time using docker logs -f db_recipes to see the exact errors.
If anyone wants to achieve something similar without using Tailscale or with alternative VPN providers, the setup outlined in this LSIO guide is pretty neat: https://www.linuxserver.io/blog/advanced-wireguard-container-routing
Edit: Don’t be intimidated by the word “advanced”. I struggled with this a bit at first (was also adapting it to use at home instead of on a VPS that’s tunneling to home) but I got it working eventually and learned a lot in the process. Willing to assist folks who want to set it up.
Ooooh, good catch. I assumed “it’s been giving me the same message for over an hour” to mean that they’ve been monitoring the logs, not running in interactive mode. O_O
That log entry is unrelated to whatever issues you’re having. That’s what the default docker-compose.yaml uses for health checks:
healthcheck:
  test: wget -nv --tries=1 --spider http://127.0.0.1:3000/api/v1/comments/jNQXAC9IVRw || exit 1
  interval: 30s
  timeout: 5s
  retries: 2
The fact that it returns a 200 probably means that Invidious is properly up and running. Could you elaborate further on what you mean by “setup isn’t completing”? How are you trying to connect to the web UI? Sharing your docker-compose.yaml might help us debug as well.
Edit: I just noticed that the default compose file has the port bound to localhost:
ports:
  - "127.0.0.1:3000:3000"
which means you won’t be able to access it from other machines inside or outside your network. You’d have to change that to - "3000:3000" to enable access for other machines.
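Once the port binding is changed, a quick check from another machine on your network would be something like this (the IP is a placeholder for your server’s LAN address):
curl -I http://192.168.1.50:3000/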
I’ve never heard of NextCloud Cookbook before. Looking at its Github page, it says it’s “mostly for testers” and is unstable, so no point in even considering it for regular use at this point in time. Besides, I’m assuming you’d need to have your own instance of Nextcloud up and running to use it; I don’t use Nextcloud.
As for Grocy and other more mature alternatives (Tandoor also comes to mind), I think I initially went with Mealie because it had the most pleasant UI out of all of them. I liked it and found that it satisfied all of my requirements, so I just kept using it.
One thing I need to publicly expose is my own instance of Mealie. It’s a recipe manager that supports multiple users. I share it with family and friends, but also with more distant acquaintances. I don’t want to have to provide and manage access to my network for each and every one of them.
I run Koreader on a Kobo Libra 2. I just connect to my OPDS catalogue on my Calibre-Web instance. It’s not exactly a sync setup; it just gives me access to my library whenever I need to download something, and that covers my needs. There are several other sync options; check out Koreader’s features here: https://github.com/koreader/koreader/wiki
If you like it and decide you want to use it, go through the list of supported devices and see what sort of sync capabilities are available for them (support for Kobo devices seems to be the best/have the most options).
+1 for Joplin. I have a different setup since I don’t use Nextcloud: Run Joplin server in a docker container and back up the volumes mapped to it (as well as those of other containers) with rsync.
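A minimal sketch of that kind of backup (the paths and host are placeholders; -a preserves ownership and permissions, which is what makes restores painless):
rsync -a --delete /path/to/docker/volumes/ user@backup-host:/backups/docker-volumes/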
Using Nicotine+ on my server. https://github.com/fletchto99/nicotine-plus-docker