• 1 Post
  • 412 Comments
Joined 2 years ago
Cake day: June 12th, 2023

  • Dude - you gotta get off the snap hate train for a bit.

    Do you not understand the difference between “hey, run this rando shell script from the internet” and “hey, use this standardized installer which may run some shell scripts”?

    I don’t give a shit about all the canonical hate. For me snap does what I want:

    1. Installs things in a standardized way using a standard interface I can easily script with Ansible (see the sketch after this list)
    2. Provides a similarly standardized way of upgrading and uninstalling that can also be automated easily with Ansible
    3. Works “just fine”.
    4. Edit - I’ll add in a fourth - creates a fucking binary I can run (no flatpak run something.something.something BS)
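
    For what it’s worth, here’s a minimal sketch of that automation using Ansible’s community.general.snap module (the package name is just an example):

    ```yaml
    # Minimal sketch: managing a snap with Ansible.
    # Assumes the community.general collection is installed;
    # "firefox" is only an example package name.
    - name: Install a snap
      community.general.snap:
        name: firefox
        state: present

    - name: Remove a snap
      community.general.snap:
        name: firefox
        state: absent
    ```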

    It’s not bash I’m criticizing. Do you understand that? If you don’t, stop reading and go back through my list. I’ll wait.

    So good - you get that bash isn’t the problem. It’s the bespoke, unstructured installer/upgrader/uninstaller part that is bad. You could write your installer in C, Python, etc. and I’ll levy the same complaints. You want me to install your Python app? It should be available through PyPI and pip, not some rando bespoke installer.


  • I really want to push back on the entire idea that it’s okay to distribute software via a curl | sh command. It’s a bad practice. I shouldn’t have to read hundreds of lines of shell script to see what sort of malarkey your installer is going to do to my system. This application creates an uninstall script. Neat. Many don’t.

    Of the myriad ways to distribute Linux software (deb, rpm, snap, flatpak, AppImage), an unstructured shell script is by far the worst.


  • Yeah - I did come down a bit harder on helm charts than perhaps I intended - but starting out with them was a confusing mess for me. Especially since they all create a new ‘custom-to-this-thing’ config file for you to work with rather than ‘standard yml you can google’. The layer of indirection was very confusing when I was learning. Once I abandoned them and realized how simple a basic deployment in k8s really is, I was able to actually make progress.

    I’ve deployed half a dozen or so services now and I still don’t think I’d bother with helm for any of it.


  • Yeah - k8s has a bit of a steep learning curve. I recently-ish made the conversion from “a bunch of docker-compose files” to microk8s myself. So here are some thoughts for you (in no particular order).

    I would avoid helm like the plague. Everybody is going to recommend it to you but it just puts a wrapper on a wrapper and is MUCH more complicated than what you’re going to need because you’re not spinning up hundreds of similar-but-different services. Making things into templates adds a ton of complexity and overhead. It’s something for a vendor to do, not a home-gamer. And you’re going to need to understand the basics before you can create helm charts anyway.

    The yml files you need are actually relatively simple compared to a helm chart that has to be parameterized and support a bazillion features.

    So yes - you’re going to create a handful of yml files and kubectl apply -f them. But you can do that with Ansible if you want, or you can combine them into a single yml (separate the sections with ---). There’s a sketch below.
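
    A minimal sketch of what one of those files can look like - a Deployment and its Service in a single yml, separated by ---. The name and image are just placeholders:

    ```yaml
    # whoami.yml - hypothetical example; apply with: kubectl apply -f whoami.yml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: whoami
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: whoami
      template:
        metadata:
          labels:
            app: whoami
        spec:
          containers:
            - name: whoami
              image: traefik/whoami   # tiny demo web server
              ports:
                - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: whoami
    spec:
      selector:
        app: whoami
      ports:
        - port: 80
          targetPort: 80
    ```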

    What I do is - for each service I create a directory. In it I have name_deployment.yml, name_service.yml, name_ingress.yml and name_pvc.yml. I just apply them when I change them, which isn’t frequent. Each application I deploy generally has its own namespace for all its resources. I’ll combine deployments into a NS if they’re closely related (e.g. prometheus and grafana are in the same NS).
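
    So a hypothetical forgejo service would look something like this on disk (the naming is just my convention, nothing enforces it):

    ```
    forgejo/
    ├── forgejo_deployment.yml
    ├── forgejo_service.yml
    ├── forgejo_ingress.yml
    └── forgejo_pvc.yml
    ```

    Handily, kubectl apply -f forgejo/ will apply every manifest in the directory at once.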

    Do yourself a favor and install kubens, which lets you easily see and change your namespace globally. Gawd I hate having to type out my namespace for everything. 99% of the time when you can’t find a thing with kubectl get, you’re not looking in the right namespace.
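
    Usage is about as simple as it gets (kubens ships as part of the kubectx project; the namespace name here is just an example):

    ```
    $ kubens              # list namespaces, current one highlighted
    $ kubens forgejo      # make "forgejo" the default namespace
    $ kubectl get pods    # now queries forgejo, no -n flag needed
    ```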

    You’re going to need to sort out your storage situation. I use NFS for long-term storage for my pods and have microk8s configured to automatically create space on my NFS server when pods request a PV (persistent volume). You can also use local directories but that won’t cluster.
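
    The pod side of that is just a PVC. A sketch, assuming a dynamic NFS provisioner is installed and registers a storage class named nfs-csi (the class name depends on your setup):

    ```yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: forgejo-data
    spec:
      storageClassName: nfs-csi   # assumption - use whatever class your provisioner creates
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
    ```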

    There are two basic ways to route traffic into the cluster. An Ingress (served by the cluster’s ingress controller) acts like a hostname-based router for HTTP. You can point your DNS entries at that controller and it will route to your pods on their internal IP addresses based on the hostname of the request. It’s easy to use and works very well - but it only works for HTTP traffic. The other is a Service of type LoadBalancer, which gives your pods an IP address on the network that you can connect to directly. The former only works for HTTP; the latter will let you use any ports (e.g. ssh for a forgejo instance).
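
    A sketch of the LoadBalancer flavor - exposing a hypothetical forgejo’s ssh port directly (on microk8s this assumes the metallb addon is enabled to hand out the addresses):

    ```yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: forgejo-ssh
    spec:
      type: LoadBalancer
      selector:
        app: forgejo        # assumption - matches the labels on your forgejo pods
      ports:
        - name: ssh
          port: 22
          targetPort: 22
    ```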


  • You got the basic idea from the other posters, but there’s a lot of weird crap in there as well.

    Basically you only need multiple IPs when dealing with services that only really operate on “well known ports”. DNS and SMTP are the usual culprits. For most home users this is no big deal - even if you wanted to host those services, it’s unlikely that you would need more than one IP to do so. HTTP solved this back in ’97 with HTTP/1.1, which added the Host header and lets a single server host multiple sites.
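
    That’s the whole trick - two requests hit the same IP and port, and the server picks the site from the Host header:

    ```
    GET / HTTP/1.1
    Host: blog.example.com

    GET / HTTP/1.1
    Host: wiki.example.com
    ```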

    This isn’t something new that nginx solved. 😂