Its networking is a bit hard to tweak, but I also don't find I need to most of the time. And when I do, it's usually just setting the network mode to host and calling it done.
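In compose terms that's roughly the following (just a sketch; the service name and image are placeholders):

```yaml
services:
  myapp:                 # placeholder service name
    image: nginx         # any image, nginx purely as an example
    network_mode: host   # bypass docker's bridge/NAT and use the host network directly
```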
Are you using docker compose scripts? Backup should be easy: you have your compose files to configure the containers, and those can easily be committed somewhere or backed up.
Data should be volume mounted into the container, and then the host disk can be backed up.
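For example, a minimal compose sketch (service name and paths are placeholders) with the app's data bind-mounted from the host, so backing up the host directory covers the container's data:

```yaml
services:
  myapp:                        # placeholder service name
    image: nginx                # any image, nginx purely as an example
    restart: unless-stopped
    volumes:
      - /srv/myapp/data:/data   # host path : container path - back up /srv/myapp
```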
The only app that I’ve had to fight docker on is Seafile, and even that works quite well now.
Excellent, thanks for the update!
Can you make your docker service start after the NFS mount, to rule that out?
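One way to do that, assuming the host uses systemd and the share is mounted at /mnt/nas (adjust the mount unit name to match your actual mount point), is a drop-in override for docker.service:

```
# sudo systemctl edit docker.service, then add:
[Unit]
Requires=mnt-nas.mount
After=mnt-nas.mount
```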
A restart policy only takes effect after a container starts successfully. In this case, starting successfully means that the container is up for at least 10 seconds and Docker has started monitoring it. This prevents a container which doesn’t start at all from going into a restart loop.
https://docs.docker.com/engine/containers/start-containers-automatically/#restart-policy-details
If your containers are crashing before that 10-second timeout, then they won't restart.
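If you want to check whether that's what's happening (just a sketch; substitute your own container name), docker inspect exposes the restart count and last exit state:

```
docker inspect -f '{{.RestartCount}} {{.State.ExitCode}} {{.State.Error}}' mycontainer
```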
I think it's better to keep your gateway basic, and run extra services on a separate raspi or similar. Let your router/gateway focus on routing packets.
OpenWrt can run AdGuard, and as long as your gateway can run docker, you can probably get Pi-hole working.
For openwrt+wireguard, see: https://cameroncros.github.io/wifi-condom.html
Looks like tailscale should work in openwrt: https://openwrt.org/docs/guide-user/services/vpn/tailscale/start
For the wireguard server, I am using firezone, but they have pivoted to being a tailscale clone, so I am on the legacy version, which is unsupported: https://www.firezone.dev/docs/deploy/docker
Edit: fixed link
That is likely a speed test server within the same data center as your VPS, or they have special traffic shaping rules for it.
Try using iperf from your local box to the VPS and see what speeds you get.
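Something like this, assuming iperf3 is installed on both ends (203.0.113.10 is just a placeholder for the VPS address):

```
# on the VPS
iperf3 -s

# on your local box: upload first, then download (-R reverses direction)
iperf3 -c 203.0.113.10
iperf3 -c 203.0.113.10 -R
```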
Never heard that term, but it's a very obscure concept, so wouldn't surprise me if it had multiple names. Probably vendor-specific names?
Seems quite a few people haven't heard of it, hence a lot of the split DNS answers :/
I can't remember exactly what it's called, but something like NAT loopback (also called hairpin NAT) on the router is what you want. I'll have a look around. But if you set it up right, things should work properly. It's likely a router setting.
Found it: https://community.tp-link.com/en/home/stories/detail/1726
4 cores is a bit limiting, but it definitely depends on the usage. I only have 1 VM on my NUC; everything else is docker.
I thought all the Core processors had VT-x/VT-d extensions; I was using virtualization on my first-gen i7. They are very old and inefficient now, though.
The i5-3470 is old, but it's not that bad. Lots of people are homelabbing on NUCs which are only slightly faster. Performance per watt will be terrible though. (I am on an i7-10710U, and I've yet to run out of steam so far: https://cpu.userbenchmark.com/Compare/Intel-Core-i7-10710U-vs-Intel-Core-i5-3470/m900004vs2771 )
It has VT-x/VT-d, so it should be okay for Proxmox. What makes you think it won't work well?
I think they just advertised how trivial it would be to take their website down…
Home Assistant is another option: host the server and run the app on your phone. It's not very granular though, and the user interface is not great.
Here in Aus, this is how the NBN is provided in some areas: there is an NBN coax-to-ethernet box, and then you can plug in your own router.
There is always a chance that your ISP is doing something weird that prevents that working, but I think it should be fine.
It's not, but if the value of the data is low, it's good enough. There is no point backing up Linux ISOs, but family photos definitely should be properly backed up according to 3-2-1.
It depends on the value of the data. Can you afford to replace them? Is there anything priceless on there (family photos etc)? Will the time to replace them be worth it?
If it's not super critical, RAID might be good enough, as long as you have some redundancy. Otherwise, categorize your data into critical/non-critical and back up the critical stuff first?
Sorry, wasn't meant to be condescending; you just seem fixated on file size when it sounds like RAM (and/or CPU?) is what you really want to optimise for. I was just pointing out that those aren't necessarily correlated with docker image size.
If you really want to cut down your CPU and RAM usage, and are okay with very limited functionality, you could probably write your own webserver to serve static files? Plain HTTP is not hard. But you'd want to steer clear of Python and Node, as they drag in the whole interpreter overhead.
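As a very rough sketch of what that could look like (Go here purely as one low-overhead option; ./public and the port are placeholders):

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// serve everything under ./public as plain static files
	http.Handle("/", http.FileServer(http.Dir("./public")))

	log.Println("listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```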
RAM is not the same as storage; that 50MB docker image isn't going to require 50MB of RAM to run. But don't let me hold you back from your crusade :D
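Easy to check for yourself (image and container names are placeholders): image size on disk and actual memory use are reported separately:

```
docker images myimage                  # size of the image on disk
docker stats --no-stream mycontainer   # live RAM/CPU usage of the running container
```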
Container overhead is near zero. They are not virtualized or anything like that; they are just processes on your host system that are isolated. It's functionally not much different from chroot.
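You can see this from the host (a throwaway example; nginx is just a convenient image), since the container's processes show up in the host's normal process list:

```
docker run -d --name overhead-demo nginx   # start a throwaway container
ps -ef | grep [n]ginx                      # its nginx processes appear as ordinary host processes
docker rm -f overhead-demo                 # clean up
```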