

I don’t think so, now. You’ll have to do those yourself.
Which means my distro-morphing idea should work in theory with OpenStack.
I also don’t recommend doing a manual install though, as it’s extremely complex compared to automated deployment solutions like kolla-ansible (OpenStack in Docker containers), openstack-ansible (host OS/LXC containers), or openstack-helm/genestack/atmosphere (OpenStack on Kubernetes). They make the install much simpler and less time-consuming, while still being intensely configurable.
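To give a rough idea of the shape of it, here’s a heavily abridged all-in-one kolla-ansible run, from memory of their quickstart. Paths and flags shift between releases, so treat this as a sketch and follow the current docs:

```bash
# all-in-one kolla-ansible deployment, heavily abridged
python3 -m venv ~/kolla-venv && source ~/kolla-venv/bin/activate
pip install kolla-ansible            # pin to the release matching your target OpenStack version
kolla-ansible install-deps           # pulls in the matching Ansible collections
sudo mkdir -p /etc/kolla && sudo chown "$USER" /etc/kolla
cp -r ~/kolla-venv/share/kolla-ansible/etc_examples/kolla/* /etc/kolla/
cp ~/kolla-venv/share/kolla-ansible/ansible/inventory/all-in-one .
kolla-genpwd                         # generates service passwords into /etc/kolla/passwords.yml
# edit /etc/kolla/globals.yml (network interface, VIP address, which services to enable), then:
kolla-ansible -i all-in-one bootstrap-servers
kolla-ansible -i all-in-one prechecks
kolla-ansible -i all-in-one deploy
```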
Personally, I think Proxmox is somewhat insecure too.
Proxmox is unique among similar projects in that it’s much more hacky, and much of the stack is custom rather than standard. For example, for networking they maintain a fork of Linux’s older networking stack, called ifupdown2, whereas similar projects, like OpenStack or Incus, use either the standard Linux kernel networking or a project called openvswitch.
I think Proxmox is definitely secure enough, but I don’t know if I would really trust it for higher-value use cases due to some of their stack being custom, rather than standard and maintained by the wider community.
If I end up wanting to run Proxmox, I’ll install Debian, distro-morph it to Kicksecure, and then install Proxmox on top of that.
If you’re interested in deploying a hypervisor on top of an existing operating system, I recommend looking into Incus or OpenStack. They have packages/deployments that can be done on Debian or Red Hat distros, and I would argue that they are designed in a more secure manner (since they include multi-tenancy) than Proxmox. In addition to that, they also use standard tooling for networking; for example, both can use Linux Bridge (in-kernel networking) for networking operations.
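If you want to kick the tires on Incus specifically, the Debian path is short. A minimal sketch, assuming Debian 13 (or Debian 12 with backports/the zabbly repo providing the incus package):

```bash
sudo apt update
sudo apt install incus
sudo adduser "$USER" incus-admin   # full access to the incus daemon; log out/in for the group to apply
sudo incus admin init              # interactive setup: storage pool, network bridge, etc.
# quick smoke test
incus launch images:debian/12 test
incus list
```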
I would trust Openstack the most when it comes to security, because it is designed to be used as a public cloud, like having your own AWS, and it is deployed with components publicly accessible in the real world.
Again, this is distracting from the original argument to make some kind of tertiary argument unrelated to the original one: Is ssh secure to expose to the internet?
You said no. That is the argument being contested.
This is moving the goalposts. You went from “ssh is not fine to expose” to “VPNs add security”. While the second is true, it’s not what was being argued.
Never expose your SSH port on the public web,
Linux was designed as a multi-user system. My college, Cal State Northridge, has an ssh server you can connect to and put your site up on. Many colleges continue to have a similar setup, and by putting stuff in your homedir you can have a website at no cost.
There are plenty of use cases that involve exposing ssh to the public internet.
And when it comes to raw vulnerabilities, ssh has had vastly fewer than stuff like Apache httpd, which powers WordPress sites everywhere but has had so many path traversal and RCE vulns over the years.
Firstly, Xen is considered secure by Qubes, but that’s mainly the security of the hypervisor and virtualization system itself. They make a very compelling argument that escaping a Xen based virtual machine is going to be more difficult than a KVM virtual machine.
But threat model matters a lot. Qubes aims to be the most secure OS ever, for use cases like high profile journalists or other people who absolutely need security, because they will literally get killed without it.
Amazon moved to KVM because, despite the security trade-offs, it’s “good enough” for their use case, and KVM is easier to manage because it’s in the Linux kernel itself, meaning you get it if you install Linux on a machine.
In addition to that, security is about more than just the hypervisor. You noted that Proxmox is Debian, and XCP-NG is CentOS or a RHEL rebuild similar to Rocky/Alma, I think. I’ll get to this later.
Xen (and by extension XCP-NG) was better known for security whilst KVM (and thus Proxmox)
I did some research on this, and was planning to make a blogpost and never got around to making it. But I still have the draft saved.
| Name | Summary | Full Article | Notes |
|---|---|---|---|
| Performance Evaluation and Comparison of Hypervisors in a Multi-Cloud Environment | Compares WSL (kind of Hyper-V), VirtualBox, and VMware Workstation. | springer.com, html | Not an honest comparison, since WSL is likely using inferior drivers for filesystem access to promote integration with the host. |
| Performance Overhead Among Three Hypervisors: An Experimental Study using Hadoop Benchmarks | Compares Xen, KVM, and an unnamed commercial hypervisor, simply referred to as CVM. | | |
| Hypervisors Comparison and Their Performance Testing (2018) | Compares Hyper-V, XenServer, and vSphere. | springer.com, html | |
| Performance comparison between hypervisor- and container-based virtualizations for cloud users (2017) | Compares Xen, native, and Docker. Docker and native have negligible performance differences. | ieee, html | |
| Hypervisors vs. Lightweight Virtualization: A Performance Comparison (2015) | Docker vs LXC vs native vs KVM. Containers have near-identical performance; KVM is only slightly slower. | ieee, html | |
| A component-based performance comparison of four hypervisors (2015) | Hyper-V vs KVM vs vSphere vs Xen. | ieee, html | |
| Virtualization Costs: Benchmarking Containers and Virtual Machines Against Bare-Metal (2021) | VMware Workstation vs KVM vs Xen. | springer, html | Most rigorous and in-depth on the list. Workstation, not ESXi, is tested. |
The short version is: it depends, and they can fluctuate slightly on certain tasks, but they are mostly the same in performance.
default PROXMOX and XCP-NG installations.
What do you mean by hardening? Are you talking about hardening the management operating system (Proxmox’s Debian or XCP-NG’s RHEL-like base), or the hypervisor itself?
I agree with the other poster about CIS hardening and generally hardening the base operating system used. But I will note that XCP-NG is designed more as an “appliance” and you’re not really supposed to touch it. I wouldn’t be surprised if it’s immutable nowadays.
For the hypervisor itself, it depends on how secure you want things, but I’ve heard that at Microsoft Azure datacenters they disable hyperthreading because it becomes a security risk. In fact, Spectre/Meltdown can be mitigated by disabling hyperthreading. Of course, there are other ways to mitigate those two vulnerabilities, but by disabling hyperthreading you can eliminate that entire class of vulnerabilities, at the cost of performance.
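For what it’s worth, turning SMT off on Linux is simple. A sketch, assuming a GRUB-based setup:

```bash
# at runtime (does not survive a reboot)
echo off | sudo tee /sys/devices/system/cpu/smt/control

# permanently: add "nosmt" (or "mitigations=auto,nosmt") to GRUB_CMDLINE_LINUX
# in /etc/default/grub, then regenerate the grub config:
sudo update-grub   # on RHEL-likes: sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```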
Openstack cluster!
If you have an older Nvidia GPU, you can use vgpu_unlock to unlock these features on it.
FreshTomato is not out of date. The last stable release was December of 2024, and the GitHub repos are being actively updated as well.
Perhaps you are confusing FreshTomato with some of its predecessors, like Tomato or AdvancedTomato, which are no longer maintained.
As for using OpenWrt instead, that doesn’t support Broadcom wifi chips, whereas FreshTomato does.
This is like that other recommendation of a linuxserver/kasmvnc docker image as well. It doesn’t allow for collaborative editing like cryptpad or google docs does.
I already replied to your last post, but my reply here is the same. You want kubernetes and gitops. There are many ways to do staging/preprod/prod setups with gitops.
I’m gonna be real: You want kubernetes + gitops (either fluxcd or argocd or the rancher one).
I mean sure, jenkins works, but nothing is going to be as smooth as kubernetes. I originally attempted to use ansible as many people suggested, but I got frustrated because it struggled to manage state in a truly declarative way (e.g. when I would change the ports in the ansible files, the podman containers wouldn’t update; I had to add tasks for destroying and recreating the containers).
I eventually just switched to kubernetes + fluxcd. I push to the git repo. The state of the kubernetes cluster changes accordingly. Beautiful. Simple. Encrypted secrets via sops. It supports the helm package manager as well. Complex af to set up though. But it’s a huge time saver in the long run, which is why so many companies use it.
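For the curious, the initial wiring is roughly this (the owner/repo/path values here are placeholders, not anything you have to use):

```bash
# point flux at a git repo; it installs its controllers into the cluster
# and starts reconciling whatever lives under --path
flux bootstrap github \
  --owner=my-github-user \
  --repository=my-fleet-repo \
  --branch=main \
  --path=clusters/homelab \
  --personal
# from then on, Kustomizations, HelmReleases, and sops-encrypted secrets
# committed under clusters/homelab get applied to the cluster automatically
```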
Not much, probably. For a small-scale use case, like a VPS, AWS is horrifically expensive. For a 4 GB RAM VPS, AWS is 30 USD a month, whereas you can get that for 10 USD a month elsewhere.
AWS does this because of vendor lock-in. For the few times when a consumer of theirs needs a VPS (or some other service cheaper elsewhere), it’s less effort to continue to use AWS than to go someplace else.
But for individuals and small organizations, like the fediverse servers, we can just start out on the cheaper options.
Decentralized in theory, but not in practice, is just centralized.
Also:
So how challenging is it to run those? In July 2024, running a Relay on ATProto already required 1 terabyte of storage. But more alarmingly, just four months later in November 2024, running a relay now requires approximately 5 terabytes of storage.
It could be an old service on that same IP. ZoomEye/Shodan don’t rescan on the spot; they keep records of old scans.
It’s a similar site to Shodan, but a different company. I’d recommend checking there as well.
Debian already has docker packaged. That’s more convenient.
Debian with the docker convenience script.
They seem to be moving away from this, and it’s no longer the first option on their install page.
On their Debian page:
Use a convenience script. Only recommended for testing and development environments
Also, it should be noted that the first option they recommend, Docker Desktop, is proprietary.
I recommend just getting the `docker.io` and `docker-compose` packages from Debian’s repositories.
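That install is just a couple of apt commands, something like this (package names as they are in current Debian stable):

```bash
sudo apt update
sudo apt install docker.io docker-compose
# optional: run docker without sudo (log out/in afterwards)
sudo usermod -aG docker "$USER"
```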
No, I think if you’re using the nextcloud all-in-one image, then the management image connects to the docker socket and deploys nextcloud using that. Then you should be able to update nextcloud via the web UI.
https://github.com/nextcloud/all-in-one?tab=readme-ov-file#how-to-update-the-containers
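For reference, the run command for the mastercontainer in that README is roughly the following, from memory; go by the linked README since the exact flags and volume names matter:

```bash
sudo docker run \
  --init \
  --sig-proxy=false \
  --name nextcloud-aio-mastercontainer \
  --restart always \
  --publish 80:80 \
  --publish 8080:8080 \
  --publish 8443:8443 \
  --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
  --volume /var/run/docker.sock:/var/run/docker.sock:ro \
  nextcloud/all-in-one:latest
# the mastercontainer talks to the docker socket mounted above to pull,
# create, and update the actual nextcloud containers from its web UI on port 8080
```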
I thought you were going to link to this.