I’m using Nextcloud as well, but I’ll admit that it’s probably a bit heavy if all one needs is a calendar.
As much as I agree, I think we’re past the point of preventing normalization.
You still have 63% RAM available in that screenshot, there are zero problems with Java using 13% RAM. It’s the same as the tired old trope of “ChRoMe Is EaTiNg My MeMoRy”. Unused memory is wasted memory if it can be used for caching instead, so unless you’re running out of available memory, there is no problem.
Also, the JVM has a lot of options for configuring its various caches as well as when it allocates or releases memory. Maybe take a look at that first.
Edit: Apparently people don’t want to hear this but don’t have any actual arguments to reply with. Sorry to ruin your “JaVa BaD” party.
I use Backblaze B2 for one offsite backup in “the cloud” and have two local HDDs. Using restic with rclone as storage interface, the whole thing is pretty easy.
A cronjob makes daily backups to B2, and once per month I copy the most current snapshot from B2 to my two local HDDs.
I have one planned improvement: Since my server needs programmatic access to B2, malware on it could wipe both the server and B2, leaving me with only the potentially one-month-old local backups. Therefore I want to run a Raspberry Pi at my parents’ place that mirrors the B2 repository daily but is basically air-gapped from the server. Should the B2 repository be wiped, the Raspberry Pi would still retain its snapshots.
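For anyone curious what such a setup looks like, here’s a minimal sketch. The repository name, paths, and password file location are placeholders, not my actual config:

```shell
# Hypothetical nightly backup script, triggered by a crontab entry such as:
#   0 3 * * * /usr/local/bin/daily-backup.sh
# Assumes restic is installed and rclone has a remote named "b2"
# pointing at a B2 bucket (bucket name is made up).

export RESTIC_REPOSITORY="rclone:b2:my-backup-bucket"   # assumed bucket name
export RESTIC_PASSWORD_FILE="/root/.restic-password"    # assumed secret location

# Back up the data directory.
restic backup /srv/data

# Thin out old snapshots so the repository doesn't grow forever.
restic forget --keep-daily 7 --keep-monthly 12 --prune
```

The monthly copy to the local HDDs can then use `restic copy` with the B2 repository as the source and the mounted HDD as the destination repository.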
syncthing also relies on a web server for device discovery, it’s just that you’re probably using someone else’s server instead of hosting your own.
Correct me if I’m wrong, but I also think that Vaultwarden itself doesn’t have access to the unencrypted password database. In that sense it’s E2EE similar to KeePass, the only difference being that KeePass is a desktop app and Vaultwarden a web app.
Nothing, this is not about that.
This change gives you the guarantee that .internal domains will never be registered officially, so you can use them without the risk of your stuff breaking should ICANN ever decide to make whatever TLD you’re using an official TLD.
That scenario has happened in the past, for example for users of FRITZ!Box routers, which use fritz.box. The .box TLD became available for purchase and someone bought fritz.box, which broke browser UIs. This could’ve even been used maliciously, but thankfully it wasn’t.
Being in alpha and having breaking changes is fine, the question is how many. My impression is that Immich seems to introduce breaking changes far more frequently than what people might be used to from other projects.
And that does go back to professionalism: The better you plan ahead, the fewer breaking changes you have to impose on your users.
Minio now describes itself as “S3 & Kubernetes Native Object Storage for AI” - lol
Guess it’s time to look for alternatives if you’re not doing ML stuff
Wouldn’t (30px)² be 30 * 30 * px * px and thus 900px²?
I wouldn’t call criticism of their strategic focus “shitting on” Nextcloud. It obviously still does a lot of things right, or at least right enough to be useful and relevant to many people, or else we wouldn’t be discussing it. But it has its issues, and many of them have gone unaddressed for a long time, so why shouldn’t people voice their displeasure with that?
There are quite a few mature projects in 0.x that would cause a LOT of pain if they actually applied semver
Depending on how one defines the “initial development” phase, those projects are actually conforming to semver spec:
Major version zero (0.y.z) is for initial development. Anything MAY change at any time. The public API SHOULD NOT be considered stable.
After looking at the site and trying to determine what to download to get Debian with non-free (I’m unfortunately working with an NVIDIA card)
FWIW, Debian 12 now includes non-free firmware in the installation media by default and will install whatever is necessary.
I agree that the Debian website has its weaknesses, but beyond finding the right installer (usually the netinst ISO, a.k.a. the small installation image, on https://www.debian.org/distrib/) there isn’t much of a learning curve. I started out with Ubuntu too, but finally decided that enough was enough when snap started breaking my stuff on desktop.
Thanks, didn’t know about those deals!
+1 for own domain and some email hosting service. That also makes it pretty easy to switch providers because you can simply point your MX records etc. somewhere else - no need to change the actual email address.
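To illustrate, switching providers then only means editing a few DNS records; the hostnames and priorities below are placeholders:

```
; hypothetical zone entries for example.com while on the old provider
example.com.   3600  IN  MX  10 mx1.old-provider.example.
example.com.   3600  IN  MX  20 mx2.old-provider.example.

; after switching, only the MX targets (plus SPF/DKIM records) change --
; the address you@example.com stays the same
example.com.   3600  IN  MX  10 mx1.new-provider.example.
```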
I can also recommend mailbox.org as an alternative to mxroute, they’re even a little cheaper at $3/month (mxroute is $49/year at minimum).
Oh, I think we’re talking different orders of magnitude here. I’m in the <1TB range, probably around 100GB. At that size, the cost is negligible.
I do an automated nightly backup via restic to Backblaze B2. Every month, I manually run a script to copy the latest backup from B2 to two local HDDs that I keep offline. Every half a year I recover the latest backup on my PC to make sure everything works in case I need it. For peace of mind, my automated backup includes a health check through healthchecks.io, so if anything goes wrong, I get a notification.
It’s pretty low-maintenance and gives a high degree of resilience:
restic has been very solid, includes encryption out of the box, and I like the simplicity of it. Easily automated with cron etc. Backblaze B2 is one of the cheapest cloud storage providers I could find, an alternative might be Wasabi if you have >1TB of data.
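For reference, the health-check part can be as simple as wrapping the backup in a success/failure ping. This is a sketch with a placeholder check UUID and repository, not my exact script:

```shell
# Nightly backup with a healthchecks.io dead-man switch.
# <check-uuid> and the repository name are placeholders.

export RESTIC_REPOSITORY="b2:my-backup-bucket"        # assumed bucket name
export RESTIC_PASSWORD_FILE="/root/.restic-password"  # assumed secret location

if restic backup /srv/data; then
    # Ping on success; if healthchecks.io stops receiving these,
    # it sends a notification after the configured grace period.
    curl -fsS -m 10 --retry 3 "https://hc-ping.com/<check-uuid>" > /dev/null
else
    # Explicit failure signal so the alert fires immediately
    # instead of waiting for the grace period to run out.
    curl -fsS -m 10 --retry 3 "https://hc-ping.com/<check-uuid>/fail" > /dev/null
fi
```

The nice property of this pattern is that it also catches the case where the machine or cron itself dies: no ping at all still triggers a notification.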
drive failure
The singular there may be unintended, but it’s very much relevant: unless you’re doing RAID 6 or the like, a simultaneous failure of two drives still means data loss. It’s also worth noting that drives of the same model and batch tend to fail after similar amounts of time.
Once again, you’re going off on an unrelated tangent. If you don’t want to listen, I can’t help you. We’re done here.
Funny how you claim to know so much about security but can’t even seem to comprehend my comment. I know root shell exploits exist, that’s why I wrote that it takes additional time to get root access, not that it’s impossible. And that’s still a security improvement because it’s an additional hurdle for the adversary.
Definitely, that’s what I’m doing as well. I’ve found some to be lacking for my needs (e.g. music), but most of them are good enough for most use cases.