• 3 Posts
  • 136 Comments
Joined 2 years ago
Cake day: June 12th, 2023

  • I somewhat agree with your comment about documentation and the UI (although once you get used to it, it’s manageable), but just to add my own experience with these things: for me they’ve been rock solid. I’ve used them both at home and professionally (mostly on small-ish networks) for at least 10 years and they just run fine.

    Currently my home router is an RB4011iGS+ and there have been absolutely no problems with it in the 4-5 years it’s been on my network. I’m not saying all their models are as reliable, and I haven’t had my hands on that many of them, but my experience with them has so far been pretty good.


  • You’ll get used to it eventually

    I’ve been earning my living mostly by connecting to remote systems via ssh (and other means) for quite a few years, and I still occasionally mess up and enter commands in the wrong terminal. Less often than I used to, but it still happens. The trick is to teach yourself to pause for a second and confirm the target of any potentially destructive or otherwise harmful command, whether it’s local or a server on the other side of the world.
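
    If you want to force that pause in a script, here’s a toy sketch (not my actual workflow; the host and command are whatever you pass in) that echoes the target before running anything over ssh:

    ```python
    #!/usr/bin/env python3
    """Toy sketch: confirm the target host before running a remote command."""
    import subprocess
    import sys

    def run_remote(host: str, command: str) -> int:
        # Show exactly where the command is about to run.
        answer = input(f"Run '{command}' on {host}? [y/N] ")
        if answer.strip().lower() != "y":
            print("Aborted.")
            return 1
        return subprocess.run(["ssh", host, command]).returncode

    if __name__ == "__main__":
        sys.exit(run_remote(sys.argv[1], " ".join(sys.argv[2:])))
    ```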


  • Do they really care enough to check your info manually if you don’t use your domain name for malicious purposes?

    It depends on the TLD how strict the checks are, but generally you’re at least violating the TOS by doing it and can lose your domain should someone actually check the info. A lot of registrars provide at least WHOIS privacy, so they’ll know your real details but won’t share them openly with anyone who asks. I assume that if you get into something illegal and a court orders them to release the data, they’ll happily comply instead of hurting their own business.

    But if you just want to keep your real name and address off the internet, that would be enough, at least for me.



  • I did self-host Bitwarden, and it’s not that bad to keep updated and running after the initial setup (including backups, obviously), but it still requires some time and effort. And as I was the only user of the service it just wasn’t worth the time spent for me (YMMV), so I switched to their EU servers and I’ve been a happy user ever since.

    What I should do is improve my local backups of that. Currently I just export my data manually every now and then to secured storage, but doing it manually means there’s often too long a gap between exports.
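
    If I wanted to automate it, something along these lines could run from cron or a systemd timer. This is just a sketch assuming the official Bitwarden CLI (bw) is installed and the vault is unlocked (BW_SESSION set); the backup path is a placeholder:

    ```python
    #!/usr/bin/env python3
    """Sketch: automated Bitwarden vault export via the official 'bw' CLI.
    Assumes 'bw' is on PATH and BW_SESSION holds an unlocked session token."""
    import datetime
    import pathlib
    import subprocess

    BACKUP_DIR = pathlib.Path("/secure/storage/bitwarden")  # placeholder path

    def export_vault() -> None:
        BACKUP_DIR.mkdir(parents=True, exist_ok=True)
        stamp = datetime.date.today().isoformat()
        out = BACKUP_DIR / f"vault-{stamp}.json"
        # 'encrypted_json' keeps the export protected at rest.
        subprocess.run(
            ["bw", "export", "--format", "encrypted_json", "--output", str(out)],
            check=True,
        )

    if __name__ == "__main__":
        export_vault()
    ```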



  • You could get by with a normal file share service (assuming you already use one) via TinyURL or a similar redirect. I don’t know how much the free services track you or whether they have other security implications, but I have a couple of domains lying around and it would be pretty trivial to create an HTTP redirect from “class-a.up.mydomain.foo” to my Nextcloud upload link.
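
    Just to illustrate, a minimal sketch of that redirect in Python (the share link is made up); in practice a one-line rule in whatever web server already fronts the domain does the same job:

    ```python
    #!/usr/bin/env python3
    """Minimal sketch: redirect every request to a (hypothetical) upload link."""
    from http.server import BaseHTTPRequestHandler, HTTPServer

    TARGET = "https://cloud.mydomain.foo/s/AbCdEfGh"  # placeholder share link

    class Redirect(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(301)
            self.send_header("Location", TARGET)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), Redirect).serve_forever()
    ```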



  • That’s something along the lines of what I do as well, but your methods are far more in-depth than mine. I just glance through the documentation, check how active the development is, and get a rough idea of whether the thing is a single-person hobby project or something with a bit more momentum.

    And of course it also depends on whether I’m looking for solutions just for myself or for others, and specifically whether it’s work related. But full audits? No. There’s no way my lifetime would be enough to audit everything I use, and even with infinite time I don’t have the skills to do that (which of course wouldn’t be an issue if I had infinite time, but I don’t see that happening).


  • Is my current set up secure, assuming strong passwords were used for everything?

    Network security is a complicated beast to manage. If the general public can access your services over the internet, that’s a threat you need to mitigate. Strong passwords are a good start, but they don’t account for flaws or bugs in the services you’re running. Also, if you have external users, they might reuse their passwords, and leaks of those can be a threat too, especially if there are privilege escalation bugs in the software you’re running.

    And so on; it’s far too wide a field to cover in a short comment here. But when you’re building your stuff, think ahead and prepare for every imaginable scenario where something goes wrong; that habit is maybe the most distinctive difference between a good professional and a not-so-good one. Every time you add a way to access your network, no matter how minuscule, think about what happens if that path gets compromised and what it could mean in the very worst case.

    Maybe you want to add another access point because your terrace isn’t properly covered. That’s nice to have, but now everyone within 100 meters of your house/apartment might get access to your stuff if they can break your wifi security. Maybe you set up a reverse proxy or Tailscale on the stack. Now the whole internet can at least probe your stuff, try to find vulnerabilities, try stolen credentials and even attempt to social-engineer their way in. Or maybe you made a mistake and left something open that shouldn’t be.

    I’m not trying to scare you off of anything. Go ahead and play with your stuff, break things, learn how to fix them, and have fun while doing it. Just remember to think ahead about worst-case scenarios, weigh their risks, and then go on. Learn about DNAT, reverse proxies, VPNs, network layers and whatever else you come across on your adventure, but keep in mind that shit will hit the fan at some point. Learn to accept that, learn from your mistakes, and do better next time.





    1. A VM running on a Proxmox host. Tip: make sure your backups are in a state you can actually restore data from.
    2. Nightly backups via Proxmox to a Hetzner Storage Box with 2-day retention. I’d like a local copy too, but I don’t currently have the hardware for it.
    3. Don’t know. Personally I have a DNAT rule on the firewall and my instance is directly open to the internet. You might not want that, and I wouldn’t necessarily recommend it, but right now, for me, it works. I’d need to look into a VPN solution for Android with which I could replace the current ‘open for all’ situation.


  • How much RAM does your system have? ZFS is pretty hungry for memory, and if you don’t have enough it has quite a significant impact on performance. My Proxmox host had 7x4TB drives in a ZFS pool, and with 32 gigs of RAM there was practically nothing left for the VMs under heavy I/O load.

    I switched the whole setup to software RAID, but it’s not officially supported by Proxmox and thus managing it is not quite trivial.
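
    For what it’s worth, on Linux the ZFS ARC grabs up to about half of the RAM by default, and you can cap it with the zfs_arc_max module parameter. A small sketch that just prints the config line for /etc/modprobe.d/zfs.conf (the 8 GiB cap is an arbitrary example, not a recommendation):

    ```python
    # Emit a modprobe option line capping the ZFS ARC (ZFS on Linux).
    GIB = 1024 ** 3
    arc_max = 8 * GIB  # example cap: 8 GiB, leaving the rest for the VMs
    print(f"options zfs zfs_arc_max={arc_max}")
    # The value can also be set live via /sys/module/zfs/parameters/zfs_arc_max.
    ```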



  • The exchanged mails between the IMAP host and the MTA need a unique identifier to organize the contents of the DB, and this would not be possible or automatic if you switched the upstream MTA.

    It sure is possible. I’ve copied maildirs between different software, different servers, and local copies back to the server, and so on. And if you rely on your own IMAP server, the upstream doesn’t matter anyway, since fetchmail (or whatever you choose to use) handles communication between hosts over their preferred protocols.

    Obviously there’s a tradeoff, since now you’re responsible for your backups and for maintaining your server, but it can sit nicely on your private LAN with access only locally or via VPN, without direct exposure to the internet. And you don’t need an MTA to run an IMAP server in the first place.
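
    To illustrate that mails really do move between servers just fine, here’s a rough standard-library sketch of copying a mailbox from one IMAP server to another (hosts and credentials are placeholders; dedicated tools like imapsync handle the corner cases better):

    ```python
    #!/usr/bin/env python3
    """Rough sketch: copy a mailbox between two IMAP servers with imaplib."""
    import imaplib

    SRC = ("imap.old-provider.example", "user", "password")  # placeholders
    DST = ("imap.lan.example", "user", "password")           # placeholders

    def copy_mailbox(mailbox: str = "INBOX") -> None:
        src = imaplib.IMAP4_SSL(SRC[0])
        src.login(SRC[1], SRC[2])
        dst = imaplib.IMAP4_SSL(DST[0])
        dst.login(DST[1], DST[2])

        src.select(mailbox, readonly=True)
        _, data = src.search(None, "ALL")
        for num in data[0].split():
            # Fetch the raw message and append it as-is; the destination
            # server assigns its own UIDs, which is why the upstream
            # doesn't matter once the mail sits in your own store.
            _, msg = src.fetch(num, "(RFC822)")
            dst.append(mailbox, None, None, msg[0][1])

        src.logout()
        dst.logout()

    if __name__ == "__main__":
        copy_mailbox()
    ```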