• 0 Posts
  • 18 Comments
Joined 2 years ago
Cake day: June 14th, 2023

  • kinda the same reason people suggest something like linux mint over slackware, gentoo, arch, etc… mint is easy to install and is preconfigured to be an easy-to-use desktop environment. you can configure any other distro to behave like that, but they tend to be a bit more "DIY", which is great if you know what you're doing!

    dedicated NAS OSes ship with good software out of the box that makes it easy to configure and manage the common disk-related setups (RAID, SMB, NFS, etc). you can certainly do all of this yourself, but you might not get a pretty, unified user interface, or you might have to deal with software that isn't compatible with some version of a library in your distro of choice… all resolvable things, but they take time to solve: anywhere from installing a package manually to applying a kernel patch and recompiling the kernel to get something working (rough sketch of the DIY route below)
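
    to give a feel for what the "DIY" route looks like, here's a very rough sketch (disk names, mount point, and the samba bits are just illustrative; a dedicated NAS OS does the equivalent behind a web UI):

        # mirror two spare disks into a RAID1 array (assumes /dev/sdb and /dev/sdc are free)
        sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
        sudo mkfs.ext4 /dev/md0
        sudo mkdir -p /srv/nas && sudo mount /dev/md0 /srv/nas

        # share it over SMB (assumes samba is installed; the service name differs between distros)
        printf '[nas]\n  path = /srv/nas\n  read only = no\n' | sudo tee -a /etc/samba/smb.conf
        sudo systemctl restart smbd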









  • so what you ideally want is for people to ONLY be able to access your backend services through caddy, which means caddy should be the only container with publicly accessible ports, yes

    caddy running in the same docker network as your services can talk to those services on their original ports; they don't even need to be mapped to the host! in this case you have 3 containers: caddy, service 1, service 2… caddy is the only one that needs its ports forwarded, so you can just forward caddy:443 and not worry about the rest! caddy can then talk directly to service1:80 or service2:443 (docker containers can reach other containers on the same network by container name! so if you run eg: docker run … --name lemmy, then caddy in the same docker network would be able to connect to http://lemmy:80!) - rough sketch after this comment

    … but if you forward, say, service 1 and 2 on :8443 and :9443 (without a firewall, and even with one it makes me uncomfortable - that's 1 step away from a subtle security problem), someone would be able to access <yourserver>:8443 directly, right? so they don't have to go through caddy to get to the backend service… for some services that can be a big deal in ways that aren't always obvious, so it's best to just not allow it if possible

    an alternative is to make sure your services are firewalled so that nobody on the internet can hit them but caddy still can… i like this less though, because it's less explicit about what's happening, so it's easier to forget about
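
    rough sketch of that setup with plain docker commands (the network name, container names, and images are just examples):

        # one shared network so containers can reach each other by name
        docker network create proxynet

        # backend services: no -p flags, so nothing here is reachable from outside docker
        docker run -d --name lemmy --network proxynet example/lemmy-image
        docker run -d --name nextcloud --network proxynet nextcloud

        # caddy is the only container that publishes ports on the host
        docker run -d --name caddy --network proxynet \
          -p 80:80 -p 443:443 \
          -v "$PWD/Caddyfile":/etc/caddy/Caddyfile \
          caddy

    in the Caddyfile you'd then reverse_proxy to http://lemmy:80 (or nextcloud:80, etc), since container names resolve to container IPs on that shared network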


  • if you’re only going to be using those services through the proxy, it can also be a useful security upgrade to not forward their ports at all, and run caddy inside docker to connect to them directly!

    if you forward the ports (without firewalling them), people can connect to the services directly, which can be a security risk (for example, many services rely on the proxy adding the x-forwarded-for header to show which IP address originally made the request… if users can access the service directly, they can add this header themselves and make it appear as though the request came from anywhere! even 127.0.0.1, which can sometimes bypass things like admin authentication - quick demo below)
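
    quick demo of the x-forwarded-for problem (hostname, port, and endpoint are made up for illustration):

        # through caddy: the proxy sets x-forwarded-for to the real client IP
        curl https://service.example.com/admin/whoami

        # straight to the forwarded backend port: the client can claim any origin it likes
        curl -H 'X-Forwarded-For: 127.0.0.1' http://service.example.com:8443/admin/whoami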



  • you rely on centralised entities every day to use the internet… ICANN, IANA, and a few more right at the top, government agencies that manage IP ranges etc, whoever owns your IP block, whoever provides your network… TBH you rely on cloudflare even if you never pay them because they CDN half the damn internet, and you rely on google and amazon simply because, again, they host services you use

    don't kid yourself, the internet works because of centralised bodies; not despite them! DNS is the least of your concerns: at least domain names are commoditised and get enough scrutiny from those centralised authorities (unless you choose a TLD that doesn't have favourable TOS) that they're pretty untouchable short of legal challenges


  • useful thing to remember about these systems: if you fuck up, there's a high likelihood that literally nobody at the company can do any work because all their files are inaccessible

    that's like… $10,000/hr in lost man-hours alone (say, 100 people at a $100/hr loaded cost), let alone the reputational damage from not being able to respond to customers accurately, and possibly missed SLAs or other contract obligations

    unless your company is all about tech, it’s highly unlikely your IT team has the skills necessary to take on that level of responsibility



  • sure! in the fediverse, the content of users and communities is stored on the servers of the actor you're interacting with. for now i'm just going to refer specifically to microblogging and user<->user interactions, because that's much simpler than the many different ways threadiverse interactions happen

    so, if you send a message to a threads user, or that user interacts with (likes, etc) your content, then that data is stored on meta's servers. heck, technically they don't even need to interact: meta can just suck up all the data and use it for their analytics!

    by contributing content to the fediverse on an instance that doesn't defederate from threads (and if you haven't blocked threads some other way), your content is likely to be ingested by meta and run through their data processing…

    you might not be using the threads UI, but threads is using you, that's for sure (quick illustration below)
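
    quick illustration of how little "interaction" is needed: most fediverse content is published as public activitypub JSON that anyone, meta included, can simply fetch (mastodon-style URL used purely as an example; the exact path varies by software):

        # fetch a user's public outbox as activitypub JSON - no account or federation handshake needed
        curl -H 'Accept: application/activity+json' https://mastodon.example/users/some_user/outbox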


  • it shouldn't pummel your bandwidth, from what i understand: your instance will only receive updates and data for the things you follow, not the entire fediverse!

    think of it kind of like just reading everything posted to every magazine you subscribe to!

    it’s text and a few images: a single youtube video is probably bigger than a day of your fediverse subs

    … assumptions and educated guesses above :)



  • people can choose not to interact with things that are bad for them, and bad for the group (the fediverse as a technology platform), sure

    … just like people can choose to ignore misinformation
    … or vote in their best interests

    it’s definitely a fine line! but let’s not kid ourselves: people aren’t always rational actors, and refusing to admit that is dangerous