

Please elaborate. How does it handle ssh keys? And what is fragile regarding corosync?




Yes, some chips (or rather parts of all chips) are kept spare on enterprise SSDs. You can even see how much is left via SMART data.
The loss occurred even with simple ping commands, but only on 2 out of 5 ports. The vendor confirmed the behavior was faulty and took the switch back.
Maybe it was just a faulty unit? However, I do use multicast in my network (corosync), and a lot of 10G switches seem to have problems with that; maybe that was the case here, too.
The exact model is the TRENDnet 5-Port 10G Switch (5 x 10G RJ-45 ports), and there sure seem to be quite a few people having issues as well…
I tried a 5-port 10G TRENDnet switch some time ago and had weird speed issues and packet loss. No good experience at all :(
Yeah, we pay a lot. We also get one of the lowest downtimes regarding electricity, on average approximately 10 minutes per year… so that's kind of a (small) advantage you get for the premium price.
An average load of 800 W is 0.8 kW × 24 h × 30 d = 576 kWh/month.
Which is over 172 € on a 30 ct/kWh contract.
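The arithmetic above can be double-checked with a quick sketch (the 800 W load and 30 ct/kWh rate are the figures from the comment):

```python
# Monthly electricity cost for a constant load, using the figures above.
load_kw = 0.8            # average load: 800 W
hours_per_month = 24 * 30
rate_eur_per_kwh = 0.30  # 30 ct/kWh

energy_kwh = load_kw * hours_per_month    # 0.8 * 720 = 576 kWh per month
cost_eur = energy_kwh * rate_eur_per_kwh  # 576 * 0.30 ≈ 172.8 EUR per month

print(f"{energy_kwh:.0f} kWh -> {cost_eur:.2f} EUR")
```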
Just my 2 cents:
1. Proxmox. Flexibility for both new services via VM/LXC and backups (just install Proxmox Backup Server alongside and you get incremental backups with nice retention settings, file-restore capabilities, as well as backup consistency checks).
2. If it's in a VM/container you don't need to worry about backups, see 1.
In this case, isn't it sufficient to be able to access the data via the Windows network?
Jitsi Meet is usually p2p for calls between two people. As soon as a third person joins, the meeting gets routed through the server. You can see this from a slight delay when person 3 joins; it won't happen again for every additional person joining.
Very interesting, thanks for sharing!
I know it's just anecdotal evidence, but fail2ban on my one machine that does need SSH on port 22 open to the internet bans a lot of IPs every hour. All the other machines, with SSH on a higher port, do not; their auth logs show no failed attempts either.
The points I made should not be used instead of other security precautions like prohibiting password login, fail2ban, and updates; I thought that was common knowledge. They are additional steps to increase security.
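If you move sshd to a non-standard port, fail2ban has to be told about it, or the sshd jail will watch the wrong port. A minimal jail.local sketch (the port number 22022 and the ban parameters are just example values, not from the comment above):

```ini
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
port     = 22022    ; must match the Port set in sshd_config
maxretry = 5
bantime  = 1h
```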
I disagree that changing the port is just security by obscurity. Scanning IPs on port 22 is a lot easier than probing thousands of ports for every IP.
The reason people run automated exploit attempts on port 22 is that it is fast, cheap, and effective. By changing the port you avoid these automated scans. I agree with you that this does not help if someone knows your IP and is targeting you specifically. But if you're such a valuable target, you hopefully have specialized people protecting your IT infrastructure.
Edit: as soon as your sshd answers on port 22, a potential attacker knows that the IP is currently in use and might try to penetrate it. As stated above, this information would most likely not reach the automated attacks if you used any random port.
I can’t help much regarding the service denial issue.
However, port 22 should never be open to the outside world. Limiting logins to key authentication is a really good first step.
To avoid automated scans you should also change the port to a higher number, maybe something above 10,000.
This saves both traffic and CPU. And if a security bug in sshd exists, this helps, too.
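The two suggestions above translate into a couple of sshd_config directives. A minimal sketch (the port number is just an example, pick your own; remember to restart sshd and adjust your firewall afterwards):

```
# /etc/ssh/sshd_config
Port 22022                 # any high, otherwise unused port above 10,000
PasswordAuthentication no  # key authentication only
PubkeyAuthentication yes
```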
Thanks for your answer.
I have been using Proxmox since version 2.1 in my home lab, and since 2020 in production at work. We have not had issues with the SSH files yet. Corosync is also working fine, although it shares its 10G network with Ceph.
In all that time I was not aware of how the certs are handled, despite having had two official Proxmox trainings. Ouch.