Thanks for the motivation to make sure my backups (which are totally configured properly) are working. :)
Just make a different API prefix that’s secure and subject to change, and once the official clients are updated, deprecate the insecure API (off by default).
That way you preserve backwards compatibility without forcing everyone to be insecure.
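As a sketch of what that might look like at a reverse proxy (paths and upstream names are hypothetical, and the real split would likely live in the application itself):

```nginx
# Hypothetical nginx sketch: serve the new, secured prefix,
# and keep the legacy prefix disabled by default.
location /api/v2/ {
    proxy_pass http://backend;
}

# Legacy insecure API: off by default. Admins who still need it
# for old clients would remove or override this block.
location /api/ {
    return 410;
}
```

Because the longer `/api/v2/` prefix wins nginx's location matching, new clients keep working while old ones get an explicit "Gone" instead of silently insecure behavior.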
The admin could use a CDN and not worry about it, if it’s just static content.
Agreed, with the clear exception being PHP, which often requires configuring a web server.
I hadn’t heard of it, and looking into Quarkus just reminded me of how complicated the whole Java ecosystem is. Gross.
Hosting Go, Rust, etc. is dead simple, but with Java, there’s all this complexity…
Yeah, containers are great! It’s really nice knowing exactly which directories to move if I need to rebalance my services onto other hardware or something.
Most of my services are on my NAS, so I have this setup:
Disaster recovery should be as simple as:
I use specific container versions, so I should get exactly the same setup.
I’m going to be reinstalling my NAS soon (boot drive is getting old), so we’ll see how this process works, though I’ll skip step 1 since I’m keeping the drives.
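For illustration, pinning container versions looks something like this in a compose file (the service, image version, and paths here are hypothetical, not my actual setup):

```yaml
# docker-compose.yml sketch: exact image versions, not "latest",
# so a rebuild reproduces the same setup after a reinstall.
services:
  nextcloud:                      # example service; substitute your own
    image: nextcloud:29.0.4       # pinned version for reproducibility
    volumes:
      - ./nextcloud/data:/var/www/html   # data lives beside the compose file
    restart: unless-stopped
```

Keeping the bind mounts as relative paths next to the compose file is what makes "move these directories to other hardware" work.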
That really depends on your use case. I use very little transfer because most of my usage is within my LAN. I set up a DNS server (built into my router) to resolve my domains to my local servers, and all the TLS happens on my local server, so traffic never goes out to the VPS. So I only need enough transfer for when I’m outside my house.
Here’s my setup:
My devices use my network’s DNS, but if that fails, they fall back to some external DNS and route traffic through the VPS.
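A minimal sketch of that split-horizon DNS, assuming dnsmasq-style syntax (the domain and addresses are placeholders):

```
# Resolve my domain to the local server when on the LAN...
address=/example.home.net/192.168.1.10

# ...and forward everything else to an upstream resolver.
server=1.1.1.1
```

Off the LAN, the same domain resolves via public DNS to the VPS, which routes back into the LAN.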
VPSs without data caps tend to have worse speeds because they attract people who use more transfer. I think it’s better to find one with a transfer cap that’s sufficient for your needs, so things stay fast. I use Hetzner, which has generous caps in the EU (20TB across the board) and good-enough-for-me caps in the US (1TB base, which scales with instance size, and you can buy extra). Most of my use outside my house is showing something off every now and then, accessing some small files, or uploading something (transfer limits only apply to outgoing data).
Docker compose is great! Good luck!
I’ve been moving from docker compose to podman, and I think that’s the better long-term plan for me. However, the wins here are pretty marginal, so I don’t recommend it unless you want those marginal wins and everything is already in containers. IMO: podman > docker compose >>> no containers. Docker compose has way better examples online, so stick with that until you feel like tinkering.
I went with Tuta because it’s my backup if everything else goes wrong. If my house burns down or my VPS provider shuts down my instance (e.g. billing failure, IP-block ban, provider goes under, etc.), I don’t want to lose access to my email.
I use a custom domain for it, so if I ever need to, switching to a different provider should be as simple as swapping some domain configs.
It’s relatively inexpensive too at €3/month when paying annually. I wanted two domains (one for personal, one for online stuff) and didn’t need any of the other stuff Proton has, so Tuta worked.
I’m considering Keycloak myself because it’s trusted by security professionals (I believe it’s a Red Hat project), whereas Authentik is basically a passion project.
Absolutely. I used Tailscale for a bit because I didn’t want to get a VPS (I’m behind CGNAT), but I needed to expose a handful of services and use my own domain name, and I couldn’t figure that out w/ Tailscale. So I bought a cheap VPS and configured WireGuard on it to get into my LAN and I’m much happier.
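A rough sketch of that arrangement, with all keys and addresses as placeholders:

```ini
# /etc/wireguard/wg0.conf on the VPS (sketch; keys/IPs are hypothetical)
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# Home server behind CGNAT: it dials out to the VPS,
# so no inbound port is needed at home.
PublicKey = <home-public-key>
AllowedIPs = 10.0.0.2/32, 192.168.1.0/24   # peer's tunnel IP + home LAN
```

The home side would set `PersistentKeepalive = 25` on its peer entry so the tunnel stays open through CGNAT; the VPS then reverse-proxies public traffic over the tunnel into the LAN.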
I’ve been testing out immutable distros, in this case openSUSE Aeon (laptop) and openSUSE MicroOS (server).
I set up Forgejo and runners are working, all in podman. I’m about to take the plunge and convert everything on my NAS to podman, which is in preparation for installing MicroOS on it (upgrade from Leap).
I also installed MicroOS on a VPS, which was a pain because my VPS provider doesn’t have images for it, and I’d have to go through support to get one added. Instead, I found a workaround that I’m amazed even works:
The nice thing is that cloud-init works, so my keys set up in step 1 still work with the new OS. It’s not the most convenient way to set things up, but it’s about the same amount of time as asking them for an ISO.
Anyway, now comes the relatively time-consuming task of moving everything over from my other VPS, but I’ll do it properly this time with podman containers. I had an ulterior motive here as well: I’m moving from x86 to ARM, which reduces cost somewhat, and it can also function as a test bed of sorts for ARM versions of things I’m working on.
So far I’m liking it, especially since it forces me to use containers for everything. We’ll see in a month or two how I like maintaining it. It’s supposed to be super low effort, since updates are installed in the background and applied on reboot.
It honestly depends on how you run things.
If everything is in containers, chances are you’re already getting the benefits of a firewall. For example, with podman or docker you explicitly expose ports, which is itself a form of firewall. If you’re running things outside of containers, then yeah, I agree with you: there’s too much risk of something opening up a port you didn’t expect.
I run everything with podman, which publishes ports via iptables rules. That’s the same thing a basic firewall does, so adding a firewall is superfluous unless you’re using it for something else, like geoip filtering.
When in doubt, use a firewall. But depending on the setup, it could be unnecessary.
I don’t use Watchtower, but many images follow semver, so pinning the version to something you feel comfortable with can help.
I use podman and set my images to auto-update, and I just use a tag that’s broad enough to probably not break stuff with the auto-updates.
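As a sketch, using podman’s quadlet syntax with a hypothetical image, that combination looks like:

```ini
# ~/.config/containers/systemd/myapp.container (quadlet sketch)
[Container]
Image=docker.io/library/nginx:1.27   # broad-ish tag: picks up 1.27.x patches
AutoUpdate=registry                  # let podman-auto-update pull newer tags
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

With `podman-auto-update.timer` enabled, podman periodically checks the registry for a newer image matching that tag and restarts the container on it.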
Always wait a couple days before doing a big upgrade. These smaller projects tend to have patch releases pretty soon after a major release.
I use Actual Budget, and they’ve had a .1 release within a day or so of pretty much every .0 release since I’ve been using them.
If you’re okay debugging some stuff, by all means, get the .0 right away and submit reports. But if you’re not going to do that, wait a couple days.
But is there a good reason to run one on a server? Any port that’s not in use won’t allow traffic in. Any port that’s in use would be added to the firewall exception anyway.
The only reasons I can think of to use a firewall are:
Be intentional about everything you run, because each additional service is a potential liability.
Idk about OP, but I want to run all of my exposed services in containers for security benefits, and I use samba to provide an SMB share for Windows clients.
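For reference, a minimal share definition looks something like this (the path and user are hypothetical):

```ini
# /etc/samba/smb.conf sketch: one share for Windows clients
[share]
   path = /srv/smb/share
   valid users = alice      ; hypothetical user; add with `smbpasswd -a alice`
   read only = no
```

Windows clients then reach it at `\\<server>\share` with that user’s credentials.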
What’s so nice about it? Have you tried quadlets or docker compose? Could you give a quick comparison to show what you like about it?
My main complaint about Quadlet is that resources for it are fairly limited, so thanks for linking this post!
And yes, quadlet would definitely make managing Plex easier. That post seems to hit all the gotchas I’ve run into, so definitely consult it as you run into issues.
Also depends on the storage medium (SD card? SSD?), assuming there’s no transcoding.