3x Minisforum MS-01
A NAS as bare metal makes sense.
It can then correctly interact with the raw disks.
You could pass an entire HBA card through to a VM, but I feel like it should be horses for courses.
Let a storage device be a storage device, and let a hypervisor be a hypervisor.
“…especially once a service does fail or needs any amount of customization.”
A failed service gets killed and restarted. It should then work correctly.
If it fails to recover after being killed, then it’s not a service that’s fully ready for containerisation.
So, either build your recovery process to account for this… or fix it so it can recover.
It’s often why databases are run separately from the service. Databases can recover from this, and the services are stateless - doesn’t matter how many you run or restart.
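To make that concrete, here's a minimal compose sketch of the split (image names are hypothetical): the service is disposable and restartable, and the only state worth protecting lives with the database.

```yaml
services:
  api:
    image: ghcr.io/example/api:latest      # hypothetical stateless service
    restart: unless-stopped                # docker brings it back after a kill or crash
    healthcheck:                           # flags the container unhealthy so your tooling can replace it
      test: ["CMD", "curl", "-f", "http://localhost:8080/healthz"]
      interval: 30s
      timeout: 5s
      retries: 3
  db:
    image: postgres:16
    volumes:
      - dbdata:/var/lib/postgresql/data    # the state lives here, not in the api container
volumes:
  dbdata:
```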
As for customisation, if it isn’t exposed via env vars then it can’t be altered at runtime.
If you need something beyond the env vars, then you use that container as a starting point and make your customisation a part of your container build processes via a dockerfile (or equivalent)
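As a sketch of both options (image and file names are hypothetical): env vars for the knobs the image already exposes, a thin build for anything beyond that.

```yaml
# Dockerfile.custom (one-off customisation layered on the upstream image):
#   FROM ghcr.io/example/app:1.2     # hypothetical upstream image
#   COPY my-plugin.so /app/plugins/
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.custom   # your customisation is now part of the build process
    environment:
      LOG_LEVEL: debug                # supported knob: stays a plain env var
```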
It’s a bit like saying “chisels are great. But as soon as you need to cut a fillet steak, you need to sharpen a side of the chisel instead of the tip of the chisel”.
It’s using a chisel incorrectly.
I would always run proxmox to set up docker VMs.
I found Talos Linux, which is a dedicated distro for kubernetes. Which aligned with my desire to learn k8s.
It was great. I ran it as bare-metal on a 3 node cluster. I learned a lot, I got my project complete, everything went fine.
I will use Talos Linux again.
However next time, I’m running proxmox with 2 VMs per node - 3 talos control VMs and 3 talos worker VMs.
I imagine running 6 servers with Talos is the way to go. Running them hyperconverged was a massive pain. Separating control plane and data/worker plane (or whatever it is) makes sense - it’s the way k8s is designed.
It wasn’t the hardware that had issues, but various workloads. And being able to restart or wipe a control node or a worker node would’ve made things so much easier.
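For what it’s worth, the control/worker split is basically one line in the Talos machine config, so defining six small VMs costs almost nothing extra. A heavily trimmed sketch (the endpoint is a placeholder):

```yaml
# worker.yaml (trimmed; the three control-plane VMs use type: controlplane)
version: v1alpha1
machine:
  type: worker
cluster:
  controlPlane:
    endpoint: https://talos.example.com:6443   # hypothetical cluster endpoint/VIP
```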
Also, why wouldn’t I run proxmox?
Overhead is minimal, I get a nice overview and UI, and I get snapshots and backups.
I’ve never installed a package on proxmox.
I’ve BARELY interacted with CLI on proxmox (I have a script that creates a nice Debian VM template, and occasionally having to really kill a VM).
What would you install on proxmox?!
I’d still run k8s inside a proxmox VM. Even if it’s basically all resources dedicated to the VM, proxmox gives you a huge amount of oversight and additional tooling.
Proxmox doesn’t have to do much (or even anything), beyond provide a virtual machine.
I’ve run Talos OS (dedicated k8s distro) bare metal. It was fine, but I wish I had a hypervisor. I was lucky that my project could be wiped and rebuilt with ease. Having a hypervisor would mean I could’ve just rolled back to a snapshot, and separated worker/master nodes without running additional servers.
This was sorely missed when I was both learning the deployment of k8s, and k8s itself.
For the next project that is similar, I’ll run talos inside proxmox VMs.
As far as “how does cloudflare work in k8s”… However you want?
You could manually deploy the example manifests provided by cloudflare.
Or perhaps there are some helm charts that can make it all a bit easier?
Or you could install an operator, which will look for Custom Resource Definitions or specific metadata on standard resources, then deploy and configure the suitable additional resources in order to make it work.
https://github.com/adyanth/cloudflare-operator seems popular?
I’d look to reduce the amount of yaml you have to write/configure by hand, which is why I like operators.
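For reference, the manifest route is roughly a Deployment running cloudflared with a tunnel token pulled from a Secret. Something like this sketch (the Secret name and token wiring are my assumptions, check Cloudflare's own examples):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudflared
spec:
  replicas: 2
  selector:
    matchLabels: { app: cloudflared }
  template:
    metadata:
      labels: { app: cloudflared }
    spec:
      containers:
        - name: cloudflared
          image: cloudflare/cloudflared:latest
          args:
            - tunnel
            - --no-autoupdate
            - run                          # reads TUNNEL_TOKEN from the environment
          env:
            - name: TUNNEL_TOKEN
              valueFrom:
                secretKeyRef:
                  name: cloudflared-token  # hypothetical Secret holding the tunnel token
                  key: token
```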
Interesting, I might check them out.
I liked garden because it was “for kubernetes”. It was a horse and it had its course.
I had the wrong assumption that all those CD tools were specifically tailored to run as workers in a deployment pipeline.
I’m willing to re-evaluate my deployment stack, tbh.
I’ll definitely dig more into flux and ansible.
Thanks!
Oh, operators are absolutely the way for “released” things.
But on bigger projects with lots of different pods etc, it’s a lot of work to write all the CRDs, hook all the events, and write all the code to deploy the pods etc.
Similar to helm charts, I don’t see the point for personal projects. I’m not sharing it with anyone, I don’t need helm/operator abstraction for it.
And something like cdk8s will generate the yaml for you to inspect. So you can easily validate that you are “doing the right thing” before slinging it into k8s.
Everyone talks about helm charts.
I tried them and hate writing them.
I found garden.io, and it makes a really nice way to consume repos (of helm charts, manifests etc) and apply them in a sensible way to a k8s cluster.
Only thing is, it seems to be very tailored to a team of developers. I kinda muddled through with it, and it made everything so much easier.
I do massively appreciate that helm charts exist for most projects; they make sense for something you are going to share.
But if it’s a solo project or consuming other people’s projects, I don’t think it really solves a problem.
Which is why I used garden.io. Designed for deploying kubernetes manifests, I found it had just enough tooling to make things easier.
Though, if you are used to ansible, it might make more sense to use ansible.
Pretty sure ansible will be able to do it all in a way you are familiar with.
As for writing the manifests themselves, I find it rare I need to (unless it’s something I’ve made myself). Most software has a k8s helm chart. So I just reference that in a garden file, set any variables I need to, and all good.
If there aren’t helm charts or kustomize files, then it’s adapting a docker compose file into manifests. Which is manual.
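The adaptation is mechanical but tedious; one compose service typically fans out into at least a Deployment and a Service. For example:

```yaml
# Hypothetical compose service being adapted:
#   services:
#     whoami:
#       image: traefik/whoami
#       ports: ["8080:80"]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 1
  selector:
    matchLabels: { app: whoami }
  template:
    metadata:
      labels: { app: whoami }
    spec:
      containers:
        - name: whoami
          image: traefik/whoami
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  selector: { app: whoami }
  ports:
    - port: 8080        # mirrors the compose port mapping (cluster-internal here)
      targetPort: 80
```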
Occasionally I have to write some custom resources, config maps or secrets (CMs and secrets are easily made in garden).
I also prefer to install operators, instead of the raw service. For example, I use CloudNativePG to set up postgres databases.
I create a custom resource that defines the database, and CNPG automatically provisions all the storage, pods, services, config maps and secrets.
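The whole database ends up being a resource about this size (a sketch; the names are mine):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: app-db
spec:
  instances: 2      # CNPG derives the pods, PVCs, rw/ro services and app secret from this
  storage:
    size: 10Gi
```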
The way I use kubernetes for the projects I do is:
Apply all the infrastructure stuff (gateways, metallb, storage provisioners etc) from helm charts (or similar).
Then apply all my pods, services, certificates etc from hand written manifests.
Using garden, I can make sure things are deployed in the correct order: operators are installed before trying to apply a CRD, secrets/cms created before being referenced etc.
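Roughly what that ordering looks like in garden’s action config, from memory (Bonsai-style syntax; the names and chart repo are placeholders, so treat this as a sketch rather than copy-paste):

```yaml
kind: Deploy
type: helm
name: cnpg-operator
spec:
  chart:
    name: cloudnative-pg
    repo: https://cloudnative-pg.github.io/charts
---
kind: Deploy
type: kubernetes
name: app-db
dependencies: [deploy.cnpg-operator]   # operator is in place before its custom resource is applied
spec:
  files: [manifests/app-db.yaml]
```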
If I ever have to wipe and reinstall a cluster, it takes me 30 minutes or so from a clean TalosOS install to the project up and running, with just 3 or 4 commands.
Any on-the-fly changes I make, I ensure I back port to the project configs so when I wipe, reset, reinstall I still get what I expect.
However, I have recently found https://cdk8s.io/ and I’m meaning to investigate that for creating the manifests themselves.
Write code using a typed language, and have cdk8s create the raw yaml manifests. Seems like a dream!
I hate writing yaml. Auto complete is useless (the editor has no idea what format the yaml doc should take), auto formatting is useless (mostly because yaml is whitespace sensitive, and the editor has no idea what things are a child or a new parent). It just feels ugly and clunky.
So uplink is 500/500.
LAN speed tests at 1000/1000.
WAN is 100/400.
VPN is 8/8.
I’m guessing the VPN is part of your homelab? Or do you mean a generic commercial VPN (like pia or proton)?
How does the domain resolve on the LAN? Is it split horizon (so local ip on the lan, public IP on public DNS)?
Is the homelab on a separate subnet/vlan from the computer you ran the speed test from? Or the same subnet?
Not if you use wildcard dns records.
Servers: one. No need to make the log a distributed system, CT itself is a distributed system.
The uptime target is 99% over three months, which allows for nearly 22h of downtime. That’s more than three motherboard failures per month.
CPU and memory: whatever, as long as it’s ECC memory. Four cores and 2 GB will do.
Bandwidth: 2–3 Gbps outbound.
Storage, either:
- 3–5 TB of usable redundant filesystem space on SSD, or
- 3–5 TB of S3-compatible object storage, and 200 GB of cache on SSD.
People: at least two. The Google policy requires two contacts, and generally, who wants to carry a pager alone?
Seems beyond your typical homelab self-hoster, except in the countries that have 5 Gbps symmetric home broadband.
If anyone can sneak 2–3 Gbps outbound past their employer, I imagine the rest is trivial.
Altho… “at least 2 [people]” isn’t the typical self hosting setup.
Edit:
Tried to fix the copy/paste.
Also will add:
https://crt.sh/ has a list of all certificates issued.
If you are using LE for every subdomain of your homelab (including internal), maybe think about a wildcard cert?
One of those “obscurity isn’t security” things, but why advertise your endpoints? It also increases privacy (i.e. not advertising porn(dot)example(dot)com).
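If you’re on cert-manager, the wildcard is a single Certificate resource, with the caveat that Let’s Encrypt only issues wildcards via the DNS-01 challenge, so the issuer needs a DNS solver. A sketch (names below are placeholders):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-home
spec:
  secretName: wildcard-home-tls
  dnsNames:
    - "*.example.com"          # one cert, no per-subdomain entries showing up on crt.sh
  issuerRef:
    name: letsencrypt-dns01    # hypothetical ClusterIssuer with a DNS-01 solver
    kind: ClusterIssuer
```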
Why do you dislike PHP?
Unless your home internet is behind CG-NAT, both have a publicly accessible IP address, so both will be scanned.
Who is externally reaching these servers?
Joe public? Or just you and people you trust?
If it’s Joe public, I wouldn’t have the entry point on my home network (I might VPS tunnel, or just VPS host it).
If it’s just me and people I trust, I would use a VPN for access, as opposed to exposing all these services publicly.
Nothing better than a properly formatted data file.
Self hosting teaches you this
The commands you used to start the docker containers, or the docker compose contents.
That’s what dictates how much “power” a docker container has
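Side by side, it’s the difference between something like these two (images are hypothetical):

```yaml
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"                          # host exposure: one port
    volumes:
      - ./site:/usr/share/nginx/html:ro    # host access: one read-only directory
  # versus a container you've handed the keys to:
  # manager:
  #   image: some/management-ui            # hypothetical image
  #   privileged: true                     # effectively root on the host
  #   network_mode: host                   # sees every host interface
  #   volumes:
  #     - /var/run/docker.sock:/var/run/docker.sock   # can control docker itself
```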
Yeh, I took “don’t agree or disagree” to be the N/A.
It seemed the most neutral.
I don’t really use anything for bookmark sharing/management. So I don’t strongly disagree or strongly agree with self hosting it.
Chisel, Rathole, an SSH tunnel with port forwarding, a VPN with port forwarding.
Keywords are “self hosted tunnel” or “reverse proxy over VPN”.
Run a VPS for like $5 a month, your local reverse-proxy tunnels out to the VPS, and your VPS forwards port 80/443 over the tunnel to your reverse-proxy.
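With chisel, for example, it can be as small as this (a compose sketch from memory; the hostnames and credentials are placeholders, so double-check the flags against chisel’s docs):

```yaml
# on the VPS:
services:
  chisel:
    image: jpillora/chisel
    command: server --reverse --auth tunnel:s3cret --port 9312
    ports:
      - "9312:9312"   # tunnel endpoint the home side dials out to
      - "80:80"       # reverse-forwarded through to the home reverse-proxy
      - "443:443"
# at home, next to a reverse-proxy service named "proxy":
#   chisel:
#     image: jpillora/chisel
#     command: client --auth tunnel:s3cret vps.example.com:9312 R:80:proxy:80 R:443:proxy:443
```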
Ah, fair.