

Exactly my thoughts too. Lots of theory about why it won’t work, but not looking at the fact that if people use it, maybe it does work, and when it won’t work, they will stop using it.
But the estimate assumes each NC instance gets half a vCPU and 1GB of memory.
This is a super conservative estimate that doesn’t include anything besides a tiny Fargate deployment and Aurora instances.
Edit: Fargate ($40/month), plus the tiniest Aurora instances at 20% utilization with merely 50GB of storage ($120/month). That’s still missing S3, which will easily cost $50 in storage and transfer (for only a few TB), the ALBs, and network traffic, especially outbound (easily $50-100 depending on volumes).
This basic solution’s real cost is already between $150 and $300/month. I don’t know NC well enough to estimate DB volumes and overall usage, but I assume it’s going to be lots of data in and out (backups, media, etc.). —edit—
For a heavily used NC instance (assuming a company offering it as a service), the cost is going to become massive pretty fast.
Also, as a side note, if a company is offering NC as a service but doesn’t manage a single piece of the NC deployment… what is the company’s product? And most importantly, how are they going to make money when AWS is going to eat a linearly scaling chunk of their revenue forever?
Well yeah, it wouldn’t break the bank, but a conservative cost estimate (without considering network costs, for example, quite relevant for a data-intensive app) would bring this setup to about $40/month. That is about 5 times more expensive than a VPS with 4x the resources.
OP said this is some sort of “enterprise self-hosting” solution, which I guess then kind of makes sense. For a company providing nextcloud as a service I would never vendor lock myself and let AWS take a huge chunk of my revenue forever, but I can imagine folks have different opinions.
In that case, Pulumi’s permissions are too broad IMHO for what it has to do; an enterprise should adhere to least privilege. Likewise, as I wrote in another comment, the egress security groups are unclear to me (why is any outbound traffic needed at all?), and the image consumed should be pinned to a digest. Or better yet, it should come from a private enterprise registry, ideally with an attestation that can be verified at runtime.
I am not sure ECS Fargate makes sense vs an EC2 instance to run the workload. This setup alone will cost about $30/month assuming half a vCPU per replica with Fargate, plus about $12 for the memory (1GB/task). 2x t2.micro could be run for ~$20 without even considering reservation discounts etc. Obviously the gap will only grow at scale, which I suppose might be very interesting for an enterprise.
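A quick back-of-the-envelope sketch of that comparison, using the figures above (the t2.micro hourly rate is an assumed on-demand price and varies by region):

```python
# Rough monthly cost comparison: Fargate vs 2x t2.micro.
# Fargate numbers are the ones stated above; EC2 rate is an assumption.
HOURS_PER_MONTH = 730

# Fargate: 2 replicas, 0.5 vCPU + 1 GB memory each (figures from the comment)
fargate_compute = 30.0   # ~$30/month for 2 x 0.5 vCPU
fargate_memory = 12.0    # ~$12/month for 2 x 1 GB
fargate_monthly = fargate_compute + fargate_memory

# EC2: 2 x t2.micro at an assumed ~$0.0116/hour on-demand,
# before any reservation or savings-plan discounts
ec2_hourly = 0.0116
ec2_monthly = 2 * ec2_hourly * HOURS_PER_MONTH

print(f"Fargate: ~${fargate_monthly:.0f}/month, EC2: ~${ec2_monthly:.0f}/month")
```

The absolute numbers are small either way; the point is that the ratio stays roughly constant as you scale replicas, so the gap compounds.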
Plus, at this point why not use managed Nextcloud (or alternatives) directly… If you are using managed storage, runtime, and database anyway, you are in a vendor lock-in either way…
Oh yeah, I am aware. Mostly I would question the idea of having multi-AZ redundancy and using a managed service for the DB (which indeed is expensive). All of this when a $5 VPS could host the same (maybe still using S3 for storage), accepting the few hours of downtime in the rare event your VPS explodes and you need to restore it from a backup.
So from my PoV this is absolutely overkill, but I concede that it depends a lot on the requirements. For my personal stuff, I can’t ever imagine having requirements so tight that they need such infra (in fact, I think not even most businesses have these requirements; I have written on the topic at https://loudwhisper.me/blog/hating-clouds/).
Everyone is free to pick their poison, but I have to ask… why? What is the target audience here? This is a massively overkill architecture IMHO. Not to mention that you now need at least 3 managed services (Fargate, S3 and Aurora) for a single self-hosted tool, and that is being generous (not counting CloudWatch, ALBs, etc.).
I agree with you on the principle; in this case I disagree with the premise. Years of actions, I think, easily outweigh that tweet. If that’s the only reason to be suspicious, then I don’t think suspicion is warranted.
Thanks, I appreciate it.
Sure, it does. Which depending on what their goal is, may be perfectly fine.
They have always been active almost exclusively on Reddit (where they engage) anyway; I assume they will keep doing so.
The problem is that those arguments are not falsifiable. If not one but two completely reasonable explanations cannot convince you about someone’s motivations, nothing can. However, I don’t care whether Musk did or did not do a Nazi salute. His actions speak much louder (in a bad sense) than the aesthetic he decides to adopt. Proton’s donation pattern, for example, would be a strong indicator by which to measure intentions.
but it was a wildly tone deaf one if so
Maybe. But also maybe people are allowed to have different cultural references, and in a global context (i.e., the internet) we should expect diversity. I - for example - had never heard of this 88 thing, and I would definitely not think about it at all the next time I create a username, and I didn’t think of it when I went to a barber shop that has that number in its name. Likewise, I wouldn’t call anybody writing “Merry Xmas” tone deaf for missing the reference to the X MAS of infamous history (and just recently in the news). For some people it’s apparently impossible to see their own culture as non-universal (at the cost of sounding stereotypical, folks from the US particularly have this problem after decades of cultural hegemony).
for a party that’s steeped in all of the same memetic game playing, you can’t ignore the dog whistles
This all happened before Musk/Bannon salute. Just to specify it.
It’s not a problem of complexity, it’s a deliberate choice of not wanting to do that, because it is synthetic content disconnected from the community.
This comment is a perfect example of why I have written https://loudwhisper.me/blog/proton-fediverse-burnout/
The 88 thing is just the tip of the iceberg for me. I honestly can’t imagine the thought process needed to conclude that a Taiwanese guy (8 is a lucky number there) born in ’88 would use that number as a Nazi dog-whistle (which is not really part of his own cultural landscape) while dealing with a PR issue.
It’s like looking at a crashed car, tire marks on the ground and suggesting it must have been a sharknado and not a car accident.
(Re)posting without engaging with the community is not free publicity, it’s bad publicity. They don’t have the resources (according to them) to do the engaging, and therefore they choose not to do the posting.
In case of proton free means “subsidized by paying users”. No big mystery on how they make money.
They specifically said they don’t want to do automated posting, to avoid broadcasting content without interacting with the community. I see no value in them doing this, considering we can get the same content via RSS, the blog page, or an email newsletter. Presence makes sense if it means actual presence. If it just means a bot reposting content, anybody can do that, and the value is very low.
Comfort is the main reason, I suppose. If I mess up the WireGuard config, even to debug the tunnel I need to go through the KVM console. It also means that if I’m somewhere else and have to SSH into the box, I can’t just plug in my Yubikey and SSH from there. It’s a rare occurrence, but still…
Ultimately I do understand both points of view. The thing is, SSH bots pose no threat once the bare minimum SSH hardening has been done. The resource consumption is negligible, so they have no real impact.
To me the tradeoff is slight inconvenience vs slightly bigger attack surface (in case of CVEs). Ultimately everyone can decide which compromise is acceptable for them, but I would say that the choice is not really a big one.
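For reference, what I’d call the bare minimum hardening is a handful of standard OpenSSH directives (a sketch; adjust to your setup):

```
# /etc/ssh/sshd_config (fragment)
PermitRootLogin no              # no direct root login
PasswordAuthentication no       # keys only, which defeats brute-force bots
KbdInteractiveAuthentication no # no keyboard-interactive fallback
PubkeyAuthentication yes
MaxAuthTries 3
```

With password auth off, the bot scans are reduced to noise in the logs.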
Hey, the short answer is yes, you can.
I would elaborate a little more:
In practice I personally would choose a simple setup where the interesting logs are just forwarded (in syslog format, for example) to a single crowdsec instance. If you have ingress from a single node, I’d go for running crowdsec on that host and banning via its firewall; if you have multiple ingress nodes, then I would run it inside the cluster and ban via a loadBalancer/cloud firewall/whatever you have in front.
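To make the forwarding concrete, a sketch of the syslog acquisition on the central crowdsec instance (the port is an arbitrary choice; check the crowdsec syslog datasource docs for your version):

```
# /etc/crowdsec/acquis.d/remote-syslog.yaml
# Accept syslog messages forwarded by the other nodes
source: syslog
listen_addr: 0.0.0.0
listen_port: 4242
labels:
  type: syslog
```

Each node then ships its auth/proxy logs to that address with its local syslog daemon (e.g. an rsyslog forwarding rule).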
In essence, I would spend some time thinking about your preferences; it might take a little while to make the setup clean, but I think you have plenty of flexibility to do what you prefer. Let me know if you want to bounce around some more ideas!
Yeah I know (I mentioned it myself in the post), but realistically there is not much you can do besides upgrading. Unattended upgrades kick in once a day, so you will install security patches quickly. There are also virtual patches (crowdsec has one for that CVE), but they might not be very effective.
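For completeness, the unattended-upgrades part is just the stock Debian/Ubuntu apt configuration (a sketch of the standard settings):

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

By default the unattended-upgrade run only pulls from the security origins, which is exactly the behavior described above.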
I argue that VPN software is a smaller attack surface, but the problem still exists (CVEs) for everything you expose.
Yes, I was in this situation and I did exactly that. You need a splitter and then MoCA adapters in the rooms (a bit expensive, at least 5-6 years ago where I lived).