NixOS is great for servers
nixos is so the solution to this
surprised it’s so far down the thread
if you don’t need proxmox’s admin tools
try running podman in NixOS on ZFS
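a minimal sketch of what the podman-on-NixOS-on-ZFS base could look like in configuration.nix (the hostId and pool layout are placeholders, adjust for your machine):

```nix
{ ... }:
{
  # podman as the container runtime, with a docker CLI alias
  virtualisation.podman = {
    enable = true;
    dockerCompat = true;
  };
  virtualisation.oci-containers.backend = "podman";

  # ZFS support; hostId is required by ZFS and must be 8 unique hex chars
  boot.supportedFilesystems = [ "zfs" ];
  networking.hostId = "8425e349";
  services.zfs.autoScrub.enable = true;
}
```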


Podman inside NixOS inside LXC inside Proxmox
Auto updates configurable everywhere




glhf


The average Lemming has the nuance of a wrecking ball and the maturity of a junior high edge lord
another lemming that doesn’t want to participate in the far-left hate machine that lemmy has descended into
it’s the same reason that user numbers on lemmy are slowly but surely declining
hauntingly accurate
great, perfect imo


when you’re dealing with fundamentalists or extremists things turn nasty very quickly because you’re questioning their fantasy world
doesn’t matter if they’re good or bad faith, often they will just ban you


use a cheap vlan switch to make an actual vlan DMZ with the services’ router
use non-root containers everywhere. segment services into separate containers
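a sketch of the one-service-per-container, non-root idea in NixOS oci-containers terms (the image, port, and uid here are examples only):

```nix
# example container dropped to a non-root uid inside the container;
# each service gets its own entry like this instead of sharing one
virtualisation.oci-containers.containers.whoami = {
  image = "docker.io/traefik/whoami:latest";
  user = "1000:1000";                # run the service process as uid 1000, not root
  ports = [ "127.0.0.1:8000:80" ];   # bind to localhost only; the reverse proxy fronts it
};
```

pair that with volumes owned by the same uid so the service can actually write to them.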


use nixos! you won’t regret it


indeed, it’s the clickbait and emotional guff
also lemmy is part of the problem


well the safe island instances can all access discussions on the free-for-all instances and we all exchange ideas and learn from each other and get along famously


i just transitioned from a dedicated pfsense machine to an openwrt LXC container in a proxmox machine
the idea is to have 2 or more openwrt instances in different proxmox machines for some HA routing to my self hosted subnet(s)
going well so far and i think i know a lot more about routing (ha). openwrt is pretty great though.
ps. i think i’m having issues with udp port forwarding but not sure


i have found this reference very useful https://mynixos.com/options/


yeah proxmox is not necessary unless you need lots of separate instances to play around with


this is my container config for element/matrix
podman containers do not run as root, so you have to get the file privileges right on the volumes mapped into the containers. i used top to find out which user the services were running as. you can see there are some settings below where you can change the user if you’re having permission problems
{ pkgs, modulesPath, ... }:
{
  imports = [
    (modulesPath + "/virtualisation/proxmox-lxc.nix")
  ];

  security.pki.certificateFiles = [ "/etc/ssl/certs/ca-certificates.crt" ];

  system.stateVersion = "23.11";
  system.autoUpgrade.enable = true;
  system.autoUpgrade.allowReboot = false;

  nix.gc = {
    automatic = true;
    dates = "weekly";
    options = "--delete-older-than 14d";
  };

  services.openssh = {
    enable = true;
    settings.PasswordAuthentication = true;
  };

  users.users.XXXXXX = {
    isNormalUser = true;
    home = "/home/XXXXXX";
    extraGroups = [ "wheel" ];
    shell = pkgs.zsh;
  };
  programs.zsh.enable = true;

  environment.etc = {
    "fail2ban/filter.d/matrix-synapse.local".text = pkgs.lib.mkDefault (pkgs.lib.mkAfter ''
      [Definition]
      failregex = .*POST.* - <HOST> - 8008.*\n.*\n.*Got login request.*\n.*Failed password login.*
                  .*POST.* - <HOST> - 8008.*\n.*\n.*Got login request.*\n.*Attempted to login as.*\n.*Invalid username or password.*
    '');
  };

  services.fail2ban = {
    enable = true;
    maxretry = 3;
    bantime = "10m";
    bantime-increment = {
      enable = true;
      multipliers = "1 2 4 8 16 32 64";
      maxtime = "168h";
      overalljails = true;
    };
    jails = {
      matrix-synapse.settings = {
        filter = "matrix-synapse";
        action = "%(known/action)s";
        logpath = "/srv/logs/synapse.json.log";
        backend = "auto";
        findtime = 600;
        bantime = 600;
        maxretry = 2;
      };
    };
  };

  virtualisation.oci-containers = {
    containers = {
      postgres = {
        autoStart = false;
        environment = {
          POSTGRES_USER = "XXXXXX";
          POSTGRES_PASSWORD = "XXXXXX";
          LANG = "en_US.utf8";
        };
        image = "docker.io/postgres:14";
        ports = [ "5432:5432" ];
        volumes = [
          "/srv/postgres:/var/lib/postgresql/data"
        ];
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
      };
      synapse = {
        autoStart = false;
        environment = {
          LANG = "C.UTF-8";
          # UID="0";
          # GID="0";
        };
        # user = "1001:1000";
        image = "ghcr.io/element-hq/synapse:latest";
        ports = [ "8008:8008" ];
        volumes = [
          "/srv/synapse:/data"
        ];
        log-driver = "json-file";
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--log-opt" "max-size=10m" "--log-opt" "max-file=1" "--log-opt" "path=/srv/logs/synapse.json.log"
          "--pull=newer"
        ];
        dependsOn = [ "postgres" ];
      };
      element = {
        autoStart = true;
        image = "docker.io/vectorim/element-web:latest";
        ports = [ "8009:80" ];
        volumes = [
          "/srv/element/config.json:/app/config.json"
        ];
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
        # dependsOn = [ "synapse" ];
      };
      call = {
        autoStart = true;
        image = "ghcr.io/element-hq/element-call:latest-ci";
        ports = [ "8080:8080" ];
        volumes = [
          "/srv/call/config.json:/app/config.json"
        ];
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
      };
      livekit = {
        autoStart = true;
        image = "docker.io/livekit/livekit-server:latest";
        ports = [ "7880:7880" "7881:7881" "50000-60000:50000-60000/udp" "5349:5349" "3478:3478/udp" ];
        cmd = [ "--config" "/etc/config.yaml" ];
        entrypoint = "/livekit-server";
        volumes = [
          "/srv/livekit:/etc"
        ];
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
      };
      livekitjwt = {
        autoStart = true;
        image = "ghcr.io/element-hq/lk-jwt-service:latest-ci";
        ports = [ "7980:8080" ];
        environment = {
          LK_JWT_PORT = "8080";
          LIVEKIT_URL = "wss://livekit.XXXXXX.dynu.net";
          LIVEKIT_KEY = "XXXXXX";
          LIVEKIT_SECRET = "XXXXXX";
        };
        entrypoint = "/lk-jwt-service";
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
      };
    };
  };
}
my nixos containers and the podman containers inside them update nightly around 03:00
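for reference, a sketch of how that nightly schedule could be declared. system.autoUpgrade handles the NixOS side; the io.containers.autoupdate=registry labels in the config above mark images for podman’s auto-update timer (the virtualisation.podman.autoUpdate options assume a reasonably recent nixpkgs):

```nix
{
  # rebuild the system from the channel every night at 03:00
  system.autoUpgrade = {
    enable = true;
    dates = "03:00";       # systemd calendar expression
    allowReboot = false;
  };

  # pull newer images for containers labelled
  # io.containers.autoupdate=registry on the same schedule
  virtualisation.podman.autoUpdate = {
    enable = true;
    onCalendar = "03:00";
  };
}
```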