Mama told me not to come.

She said, that ain’t the way to have fun.

  • Yeah, containers are great! It’s really nice knowing exactly which directories to move if I need to rebalance my services onto other hardware or something.

    Most of my services are on my NAS, so I have this setup:

    • /srv/nas/<folder> - everything here is on my RAID, and offsite backups look here (excluding certain directories to save on costs)
    • /home/<user>/containers - my git repo with configs, sans passwords/keys
    • configs w/keys live in my password manager

    Disaster recovery should be as simple as:

    1. Copy my data from backup into /srv/nas
    2. Clone my container repo
    3. Copy env files to their respective locations
    4. Run a script to get things set up (sketched at the end of this comment)

    I use specific container versions, so I should get exactly the same setup.

    I’m going to be reinstalling my NAS soon (boot drive is getting old), so we’ll see how this process works, though I’ll skip step 1 since I’m keeping the drives.
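
    For illustration, the step-4 script could be as simple as this minimal sketch (paths and filenames are examples, assuming one pinned compose file per service folder in the repo):

```bash
#!/usr/bin/env bash
# Hypothetical restore script: assumes data is already back in /srv/nas
# and env files have been copied in from the password manager.
set -euo pipefail

cd "$HOME/containers"

# Each service folder holds a compose file pinned to specific versions.
for service in */; do
    if [ -f "${service}docker-compose.yml" ]; then
        (cd "$service" && docker compose up -d)
    fi
done
```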


  • That really depends on your use case. I use very little transfer because most of my usage is within my LAN. I set up a DNS server (built into my router) to resolve my domains to my local servers, and all the TLS happens on my local server, so traffic never goes out to the VPS. So I only need enough transfer for when I’m outside my house.

    Here’s my setup:

    • VPS - WireGuard and HAProxy for SNI-based proxying (rough sketch below)
    • router - static DNS entries for local services
    • local servers - TLS termination and the services themselves
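
    The HAProxy side of that is roughly the following (a minimal sketch; the hostname and WireGuard-side address are placeholders, not my actual config):

```bash
# Hypothetical haproxy.cfg fragment for SNI-based TCP passthrough,
# appended via a heredoc. TLS is NOT terminated here; the raw stream
# is forwarded over the WireGuard tunnel to the local server.
cat >> /etc/haproxy/haproxy.cfg <<'EOF'
frontend tls_in
    bind :443
    mode tcp
    # Wait for the TLS ClientHello so the SNI is readable
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend home if { req_ssl_sni -i -m end .example.com }

backend home
    mode tcp
    # 10.0.0.2 stands in for the home server's WireGuard address
    server nas 10.0.0.2:443 check
EOF
```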

    My devices use my network’s DNS, but if that fails, they fall back to some external DNS and route traffic through the VPS.

    VPSs without data caps tend to have worse speeds because they attract people who use more transfer. I think it’s better to find one with a transfer cap that’s sufficient for your needs, so things stay fast. I use Hetzner, which has generous caps in the EU (20TB across the board) and caps in the US that are good enough for me (1TB base, scaling with instance size, with the option to buy more). Most of my use outside the house is showing something off every now and then, accessing some small files, or uploading something (transfer limits only count outgoing data).


  • Docker compose is great! Good luck!

    I’ve been moving from docker compose to podman, and I think that’s the better long-term plan for me. However, the wins are pretty marginal, so I don’t recommend it unless you want those marginal wins and everything is already in containers. IMO: podman > docker compose >>> no containers. Docker compose has way better examples online, so stick with that until you feel like tinkering.
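
    As a taste of the difference, here’s the same hypothetical service both ways (the image, tag, and ports are made-up examples):

```bash
# docker compose: service pinned in docker-compose.yml, then:
docker compose up -d

# podman: a plain run with the same pinned image...
podman run -d --name freshrss -p 8080:80 docker.io/freshrss/freshrss:1.24
# ...plus, optionally, a generated systemd unit so it survives reboots
# (stop the ad-hoc container before enabling the unit):
podman generate systemd --new --name freshrss \
  > ~/.config/systemd/user/container-freshrss.service
```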


  • I went with Tuta because it’s my backup if everything else goes wrong. If my house burns down or my VPS provider shuts down my instance (e.g. a billing failure, an IP-block ban, the provider going under), I don’t want to lose access to my email.

    I use a custom domain for it, so if I ever need to, switching to a different provider should be as simple as swapping some domain configs.
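
    Concretely, “swapping some domain configs” mostly means replacing records like these (every value below is a placeholder; the real ones come from the new provider’s docs):

```bash
# Hypothetical zone-file fragment: the records an email-provider switch
# touches. All names and targets here are placeholders.
cat >> example.com.zone <<'EOF'
example.com.                      3600 IN MX    10 mail.new-provider.example.
example.com.                      3600 IN TXT   "v=spf1 include:spf.new-provider.example -all"
selector._domainkey.example.com.  3600 IN CNAME dkim.new-provider.example.
EOF
```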

    It’s relatively inexpensive too at €3/month when paying annually. I wanted two domains (one for personal, one for online stuff) and didn’t need any of the other stuff Proton has, so Tuta worked.




  • I’ve been testing out immutable distros, in this case openSUSE Aeon (laptop) and openSUSE MicroOS (server).

    I set up Forgejo with working runners, all in podman. I’m about to take the plunge and convert everything on my NAS to podman, in preparation for installing MicroOS on it (an upgrade from Leap).

    I also installed MicroOS on a VPS, which was a pain because my VPS provider doesn’t have images for it, and I’d have had to go through support to get one added. Instead, I found a workaround, and it’s pretty amazing that it works:

    1. Install Alpine Linux (in my case I needed to provision something else first and mount an ISO to install Alpine, which was annoying)
    2. Download MicroOS image on VPS (not ISO, qcow image)
    3. Write the image to the disk, overwriting the current OS (the qemu-img command, IIRC; sketched below)
    4. Reboot (first boot takes longer since it’s expanding the disk and whatnot)

    The nice thing is that cloud-init works, so my keys set up in step 1 still work with the new OS. It’s not the most convenient way to set things up, but it’s about the same amount of time as asking them for an ISO.
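
    Steps 2–4 boiled down to something like this (the image URL/filename and /dev/vda are placeholders from memory; it overwrites the whole disk, so the target device needs double-checking):

```bash
# Run from the live Alpine system. Destructive: verify the URL and the
# disk device before running.
apk add qemu-img
wget https://download.opensuse.org/tumbleweed/appliances/openSUSE-MicroOS.x86_64-Cloud.qcow2
qemu-img convert -O raw openSUSE-MicroOS.x86_64-Cloud.qcow2 /dev/vda
reboot
```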

    Anyway, now comes the relatively time-consuming task of moving everything over from my other VPS, but I’ll do it properly this time with podman containers. I had an ulterior motive here as well: I’m moving from x86 to ARM, which reduces cost somewhat, and the new box can also function as a test bed of sorts for ARM versions of things I’m working on.

    So far I’m liking it, especially since it forces me to use containers for everything. We’ll see in a month or two how I like maintaining it. It’s supposed to be super low effort, since updates are installed in the background and applied on reboot.
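
    For the curious, the same update flow can be driven by hand; assuming MicroOS defaults, it’s roughly:

```bash
# Stage updates into a new btrfs snapshot (the running system is untouched),
# then switch to that snapshot on the next boot.
sudo transactional-update dup
sudo reboot

# Afterwards, snapper shows which snapshot is active
sudo snapper list
```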


  • It honestly depends on how you run things.

    If everything is in containers, chances are you’re already getting the benefits of a firewall. With podman or docker, for example, you explicitly publish ports, which is itself a form of firewall. If you’re running things outside of containers, then yeah, I agree with you, there’s too much risk of something opening a port you didn’t expect.

    Everything I run is with podman, which exposes stuff with iptables rules. That’s the same thing a basic firewall does, so adding a firewall is superfluous unless you’re using it to do something else, like geoip filtering.
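
    Concretely, the publish flag is what does the gatekeeping (the image and ports are just examples):

```bash
# Published ports are opened explicitly; everything else stays closed.
# Reachable from the LAN/WAN:
podman run -d --name web -p 8080:80 docker.io/library/nginx:1.27

# Reachable only from the host itself (no firewall rule needed):
podman run -d --name web-local -p 127.0.0.1:8081:80 docker.io/library/nginx:1.27
```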

    When in doubt, use a firewall. But depending on the setup, it could be unnecessary.



  • Always wait a couple days before doing a big upgrade. These smaller projects tend to have patch releases pretty soon after a major release.

    I use Actual Budget, and they’ve had a .1 release within a day or so of pretty much every major release since I’ve been using them.

    If you’re okay debugging some stuff, by all means, get the .0 right away and submit reports. But if you’re not going to do that, wait a couple days.


  • But is there a good reason to run one on a server? Any port that’s not in use won’t allow traffic in. Any port that’s in use would be added to the firewall exception anyway.

    The only reasons I can think of to use a firewall are:

    • some services aren’t intended to be accessible - with containers, this is really easy to prevent
    • your firewall also does other stuff, like blocking connections based on source IP (e.g. block Russia and China to reduce automated cyber attacks if you don’t have users in those countries) - rough sketch below
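
    For that second case, something like ipset + iptables does the job (the CIDR below is a documentation range standing in for a real downloaded country list):

```bash
# Hypothetical source-IP block list. Load real country CIDRs into the
# set instead of this placeholder range.
ipset create blocked-src hash:net
ipset add blocked-src 203.0.113.0/24
iptables -I INPUT -m set --match-set blocked-src src -j DROP
```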

    Be intentional about everything you run, because each additional service is a potential liability.