I am using the smallest-tier VPS from IONOS for 1€/month. It's good, reliable and trustworthy, as IONOS is a subsidiary of 1&1 telecommunications.
Sure, it's easy to set up, but I get the same behaviour with my hand-rolled solution. I rent a cheap VPS with a fixed IP solely for forwarding all traffic through WireGuard. My DNS entries all point to the VPS, and my servers connect to the VPS to be reachable. It is completely network-agnostic: it requires no port shenanigans on the local network, nor a fixed IP for my home server's internet connection.
Data-security-wise, HTTPS terminates on my own hardware (home server with reverse proxy), and the WireGuard tunnel adds a second layer of encryption on top. There are no secrets or certificates on the rented VPS beyond the bare minimum for the WireGuard tunnel and my public key for SSH access.
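For anyone curious, that bare minimum is roughly the following two config files (a sketch: the keys, the 10.0.0.0/24 subnet and the domain are all placeholders):

```
# /etc/wireguard/wg0.conf on the VPS
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# home server, connects outward to the VPS
PublicKey = <home-server-public-key>
AllowedIPs = 10.0.0.2/32

# /etc/wireguard/wg0.conf on the home server
[Interface]
Address = 10.0.0.2/24
PrivateKey = <home-server-private-key>

[Peer]
# the VPS with the fixed public IP
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.0.0.1/32
PersistentKeepalive = 25
```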
Shuttling the packets on the VPS (internet to WireGuard) is done by socat, because I haven't had the will or the need to get into the weeds with nftables/iptables. I am just happy that it works reliably, and I'm happy to lose some potential bandwidth to the kernel-space/userspace hoops.
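The forwarding itself then boils down to a one-liner per port, something like this (port and tunnel IP are placeholders):

```
# on the VPS: accept HTTPS from the internet and pass it through
# the tunnel to the home server's WireGuard address
socat TCP-LISTEN:443,fork,reuseaddr TCP:10.0.0.2:443
```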
There's the Prometheus node exporter, which can collect such data from several hosts. You can hook it up to Grafana for neat dashboards, and I'm almost sure it also integrates with Home Assistant.
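If you want to try it, a minimal prometheus.yml scraping two node exporters looks roughly like this (hostnames are placeholders; 9100 is node_exporter's default port):

```
scrape_configs:
  - job_name: "node"
    static_configs:
      - targets:
          - "server1.lan:9100"
          - "server2.lan:9100"
```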
What? I've never had the feeling that Nextcloud assumes that. Are you using the special all-in-one Docker image? I am using the regular one, pairing it with db, redis, etc. containers, and I'm absolutely happy with it.
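Stripped down, that kind of setup looks something like the compose sketch below; not my exact file, and the passwords, port mapping and missing reverse proxy are obvious placeholders:

```
services:
  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: changeme
    volumes:
      - db:/var/lib/mysql
  redis:
    image: redis
  app:
    image: nextcloud
    ports:
      - "8080:80"
    environment:
      MYSQL_HOST: db
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: changeme
      REDIS_HOST: redis
    volumes:
      - nextcloud:/var/www/html
    depends_on:
      - db
      - redis
volumes:
  db:
  nextcloud:
```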
Maybe get a reputable one; the other ones are sadly malware-infected in way too many cases. It's a way for the manufacturer to make an extra buck from the sale.
If you have an AVM Fritz!Box home router you can simply create a new profile that disallows internet access and set the devices you want to “isolate” to that profile. They will be able to access the local network and be accessed by the local network just fine, but they won’t have any outgoing (or incoming) connectivity.
If only modern kernels weren't a problem. I wish you could just install new OSs like on a PC.
I’ve used restic before and it worked great with OVH’s object storage. Moved away from cloud backups because of the cost though.
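For reference, the rough shape of that setup; the endpoint and bucket are placeholders (OVH's actual S3 endpoint depends on your region):

```
export AWS_ACCESS_KEY_ID=<your-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret>
restic -r s3:https://<ovh-s3-endpoint>/my-backup-bucket init
restic -r s3:https://<ovh-s3-endpoint>/my-backup-bucket backup /home
```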
Yeah, has anyone ever actually tried restoring from them? I only remember one disgruntled redditor posting about it, but that's about it.
Depends a lot on what backup software you use. Backblaze B2 is just an S3-like object storage service. It's the storage layer underneath many different tools, and backup software can be one of them. Backblaze does have their own backup solution though, but in that case B2 is the wrong product for you to look at.
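For example, restic can write to a B2 bucket directly (bucket name and credentials are placeholders):

```
export B2_ACCOUNT_ID=<key-id>
export B2_ACCOUNT_KEY=<application-key>
restic -r b2:my-bucket:my-host init
```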
But Borg does not work with object storage; it needs a borg process on the receiving side.
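Concretely, a Borg target is typically another machine you can SSH into that has borg installed, since the client spawns "borg serve" on the remote end (host and paths are placeholders):

```
# borg must be installed on backuphost too
borg init --encryption=repokey ssh://user@backuphost/./backups/myrepo
borg create ssh://user@backuphost/./backups/myrepo::{now} /home
```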
I am very happy with mine and have only ever had one hiccup during updating, which was due to my Dockerfile removing one dependency too many. I've run it bare metal (Apache, MariaDB) as well as containerized (derived custom image, Traefik, MariaDB). Both were okay speed-wise after applying all the steps from the documentation.
Having the database on your fastest drive is definitely very important. Whenever I look at htop while making big copies or moves, it’s always mariadb that’s shuffling stuff around.
In my opinion there are two things that make Nextcloud (appear) slow:

1. Managing the ton of metadata in the db that Nextcloud uses to provide its enhanced functionality.
2. It is/was a webpage rendered mostly on the server.
The first issue is hard to tackle, because it is intrinsic and also has different optimums for different deployment scales. Optimizing databases is beyond my skill set, so I stick to the documented recommendations (sketched below).
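By "recommendations" I mostly mean the caching settings from the admin docs; in config.php that's roughly the following (assuming APCu is available and a Redis host reachable as "redis"):

```
'memcache.local' => '\OC\Memcache\APCu',
'memcache.distributed' => '\OC\Memcache\Redis',
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
  'host' => 'redis',
  'port' => 6379,
],
```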
The second issue is slowly being worked around: many applications on Nextcloud now resemble SPAs that are highly interactive and rendered by your browser. That reduces page reloads and makes it feel smoother.
All that said, I barely use the web interface, because I rarely use the collaboration features. If I have to create a share I usually do that in the app, because that's where I send the link to people. Most of my use case is just syncing files, calendars and contacts.
That might be due to your ISP’s routing and interconnects. They usually have good routes to big services and might lack good connections between home users in different countries or on different continents.
I did too, but shortly after decommissioning that server the drive became unresponsive. I really dodged a bullet without even realizing it at the time. SMART data did not work over that adapter, but it might have alerted me in that case.
Also, unrelated to SMART data, the server failed to reboot because the USB-SATA adapter did not properly reset without a full power cycle (which did not happen with that mainboard's USB on reboots). It always got stuck searching for the drive. Restarting the server therefore meant shutting it down and calling someone to push the button for me, or using Wake-on-LAN, which thankfully worked but was still a dodgy workaround.
From what I read online, that can lead to instabilities and was therefore disabled on Linux.
And you typically don't get SMART data from USB adapters.
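Depending on the bridge chip you can sometimes get it anyway by forcing SAT passthrough; worth a try before giving up (device name is a placeholder):

```
# only works if the USB-SATA bridge supports SAT passthrough
smartctl -a -d sat /dev/sdX
```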
Have a look into Nextcloud's logs and see if it complains about a trusted proxy or similar. The IP range of a container network often changes between restarts, and that was a problem for me with my reverse proxy setup.
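One way around the changing range is to trust the whole container subnet instead of a single IP; in config.php that looks roughly like this (the subnet is a placeholder for whatever your Docker network uses):

```
'trusted_proxies' => ['172.18.0.0/16'],
```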
+1 for MTU and persistent keepalive. The keepalive helps if the connection is lost after a certain amount of time and does not recover; the MTU is often the problem when the connection is intermittent or just “weird”.
Setting the MTU requires knowing the MTU of your connection. Many ISPs provide IPv4 encapsulated in IPv6 (Dual Stack Lite, I believe), meaning that from the regular packet size you have to subtract the overhead of the encapsulation and, if I remember correctly, also the packet overhead of WireGuard itself.
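Both settings live in the wg config; the numbers below are just common starting points, not universal values (1420 is WireGuard's usual default on a 1500-byte link, and DS-Lite eats a bit more):

```
[Interface]
MTU = 1412                 # tune to your actual line

[Peer]
PersistentKeepalive = 25   # seconds; keeps NAT mappings alive
```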
Yeah, that made a massive difference for me. Then again, it was unshielded cable, so what did I expect?
Yes, I do lose the origin IP and I'm a little bugged by it. It also means that ALL traffic incoming on a specific port of that VPS can only go to exactly ONE private WireGuard peer. You could avoid both of these issues by having the reverse proxy on the VPS (which is why Cloudflare works the way it does), but I prefer my HTTPS endpoint to be on my own trusted hardware. That's totally my personal preference though.
I trust my VPS provider not to be interested enough in my data to set up special surveillance tooling for each and every possible software combination their customers might have. Cloudflare, on the other hand, only has their own software stack to monitor, and all customers must adhere to it. By design, it's much easier for them to do statistics or snooping.