

Clearly you don’t know.




If I wanted to run updates frequently I would run Arch, lmao. Even if I did apt update every day, Debian stable doesn’t get that many updates.
You’re not updating for features; you’re updating for bug and security fixes. That’s why Debian stable doesn’t have many updates. But the ones it does get are typically important.


That’s… not how it works… Debian is “stable”, not “secure”. You use Debian so that it’s easier to run updates frequently, since they’ll be unlikely to break things.


All systems, daily, via a single Ansible script. That’s apt update, upgrade, and reboot if needed (some systems are set to only reboot via a separate script so I can handle them separately).
Rarely have any sort of problems.
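Roughly what that looks like as ad-hoc commands from the control node; the “homelab” group name is a placeholder, and the reboot check assumes the Debian/Ubuntu-style /var/run/reboot-required flag, not the commenter’s actual script:

# Refresh the package cache and dist-upgrade everything in the group
ansible homelab -b -m ansible.builtin.apt -a "update_cache=yes upgrade=dist autoremove=yes"
# See which hosts actually want a reboot (the file appears when one is needed)
ansible homelab -b -m ansible.builtin.stat -a "path=/var/run/reboot-required"
# Reboot just those hosts; the reboot module waits for them to come back before returning
ansible homelab -b -m ansible.builtin.reboot --limit "hosts-that-need-it"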


Sounds like you bookmarked the whole flippin’ Internet.


Something that can make troubleshooting DNS issues a real pain is that there can be a lot of caching at multiple levels. Each DNS server can do caching, the OS will do caching (nscd), the browsers do caching, etc. Flushing all those caches can be a real nightmare. I recently had nscd causing problems kinda like what you’re seeing. You may or may not have it installed, but if it is, purging it may help.
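For reference, clearing the usual layers looks something like this; which ones apply depends entirely on what’s actually installed, so treat these as examples rather than a checklist:

# systemd-resolved, if that's doing the OS-level caching
resolvectl flush-caches
# nscd, if installed (restarting clears its cache; purging removes it entirely)
sudo systemctl restart nscd    # or: sudo apt purge nscd
# a local dnsmasq instance (Pi-hole, libvirt, etc.), if present
sudo systemctl restart dnsmasq
# browsers keep their own cache too, e.g. chrome://net-internals/#dns in Chromium-based browsers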


It’s not resolving; play around with dig a bit to troubleshoot: https://phoenixnap.com/kb/linux-dig-command-examples
I’d start with “dig @your.providers.dns.server your.domain.name” to query the provider’s servers directly and see whether the provider actually answers for your record.
If it does, the next thing to check is whether the provider is actually set up as authoritative for your domain in the eyes of the rest of the internet (i.e. the delegation at your registrar). Query @8.8.8.8 or another public resolver, or trace down from the root servers with dig +trace. If they don’t resolve it, they don’t know where to send your query.
If they do, the problem is probably closer to home, either your local network or your Internet provider.
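Concretely, with placeholder names, those checks look like:

# 1. Ask the provider's nameserver directly (substitute your real names)
dig @ns1.your-dns-provider.example your.domain.example A
# 2. Ask a public resolver to see whether the delegation is visible to the wider internet
dig @8.8.8.8 your.domain.example A
# 3. Or walk the delegation chain down from the root servers
dig +trace your.domain.example A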


This is an awful analogy…
squeezing every last drop of resource from tired old hardware
This is such a myth. 99% of the time your hardware is sitting there doing nothing, even when running “bloated” services.
Nextcloud, for example, uses practically zero CPU and a few tens of MB of RAM when sitting around, yet people avoid it for “bloat”.
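Easy enough to sanity-check on your own box; for a containerized install, something like this (the container name is a placeholder):

# One-shot snapshot of CPU and memory use for the container
docker stats --no-stream nextcloud-app
# For a bare-metal install, just eyeball the php-fpm/apache workers
ps aux --sort=-%mem | head -n 15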


Oh for sure - containers are fantastic. Even if you’re just using them as glorified chroot jails they provide a ton of benefit.


Containers run on “bare metal” in exactly the same way other processes on your system do. You can even see them in your process list FFS. They’re just running in different cgroups (and namespaces) that limit what resources they can access.
Yes, I’ll die on this hill.
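Easy to see for yourself, assuming Docker and a throwaway container (nginx is just an example image):

# Start a throwaway container
docker run -d --rm --name demo nginx
# Its processes show up in the host's process list like any other process
ps aux | grep "nginx: master"
# And they live in their own cgroup under the host's normal cgroup hierarchy
cat /proc/$(docker inspect -f '{{.State.Pid}}' demo)/cgroup
docker stop demo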


Could last years? Or months? Depends on a lot of factors. Fans may not like running 24x7, memory could fail, etc.
Just be prepared for what you would do if it does.


Since it’s a public instance you’d want to be sure to keep it pretty up-to-date with new system patches and the latest stable versions of Nextcloud. If you’re comfortable with automating updates with ansible, k8s, docker-compose, etc. then it’s not a big deal (see the sketch below). If you’re ssh’ing to a server to manually update things then it’s going to be a lot of overhead and likely forgotten.
Old hardware may also bring its own issues, and you’ll need backups, since old hardware (especially consumer-grade stuff) can fail very unexpectedly. And providing support for users is a whole… other thing…
I like the idea of starting with the “old laptop in a basement” approach as a way to get things going and see if the service provides benefit, then look to migrate to a more stable platform in the future.
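For the Docker flavor, “automated enough that it isn’t forgotten” can be as small as a cron entry on the host; the path and schedule below are placeholders:

# Hypothetical nightly job: pull newer images, recreate containers, clean up old images
0 4 * * * cd /opt/nextcloud && docker compose pull && docker compose up -d && docker image prune -f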
“I run an immutable distro, BTW”
Proxmox or Docker?
It’s not mutually exclusive? I have a 3-node Proxmox config on which I have 3 VMs running as Kubernetes nodes, to which I deploy containers. I also have some VMs set up for things which either don’t work well as containers or which I simply don’t want as containers (e.g. a couple Windows VMs for doing Windows things). Home Assistant also runs in a VM since it was just easier to do USB passthrough that way.
I understand that running things in a VM provides better security than running them in a container.
Not sure what you mean by this - containers are typically easier to secure as they’re minimalist. But I doubt anyone is using VMs because they think they’re more secure.


And I still don’t care. Bad is bad even if a community is doing it.
Edit: Sorry if that was aggressive. This is a horrible practice and that community is the worst. They use HTTP by default? Encourage running scripts pointing at GH repositories controlled by community members? It’s just asking for the sort of supply-chain attacks that ecosystems like NPM have been enduring.


I have a strict, no-exceptions rule against encouraging people to do a curl|bash install and would just remove that. Provide a link to the script; people can run it if they want. Encouraging the behavior of just directly running scripts off the internet is a bad habit.


In your Proxmox console, enter the following command: bash -c "$(curl -fsSL https://raw.githubusercontent.com/…)"
Do not do this. Never run scripts like this directly without inspecting them first. Do not tell people to run your exciting new script like this. Provide a link to the script and encourage users to inspect it first, then run it.
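The safer pattern only costs a couple of extra lines; the URL and filename here are stand-ins for whatever the project actually publishes:

# Download it first (URL is a stand-in), read it, then run it deliberately
curl -fsSL -o install.sh https://raw.githubusercontent.com/some-org/some-repo/main/install.sh
less install.sh    # actually read what it's going to do
bash install.sh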
Same? HTTP/1.1 ran the entire internet for 20 years and is used by a ton of sites. It’s fine for a personal website.
I don’t.