

Podman inside NixOS inside LXC inside Proxmox
Auto updates configurable everywhere
glhf
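the "auto updates configurable everywhere" part can be sketched at the NixOS and podman layers (the Proxmox host and the LXC itself update with their own tooling). the option names and the io.containers.autoupdate label are real; the service name and image below are placeholders:

```nix
{
  # NixOS layer: pull the channel and rebuild on a timer
  system.autoUpgrade.enable = true;
  system.autoUpgrade.allowReboot = false;

  # container layer: label containers so podman's auto-update
  # machinery re-pulls images whose registry tag has moved
  virtualisation.oci-containers.containers.myservice = {
    image = "docker.io/library/nginx:latest"; # placeholder image
    extraOptions = [
      "--label" "io.containers.autoupdate=registry"
      "--pull=newer"
    ];
  };
}
```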
The average Lemming has the nuance of a wrecking ball and the maturity of a junior high edge lord
another lemming that doesn’t want to participate in the far-left hate machine that lemmy has descended into
it’s the same reason that user numbers on lemmy are slowly but surely declining
hauntingly accurate
great, perfect imo
when you’re dealing with fundamentalists or extremists, things turn nasty very quickly because you’re questioning their fantasy world
doesn’t matter if they’re acting in good or bad faith, often they will just ban you
use a cheap vlan switch to make an actual vlan DMZ with the services’ router
use non-root containers everywhere. segment services in different containers
use nixos! you won’t regret it
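the segmentation tip above is what the bigger config further down does: one service per container, ordered with dependsOn. a minimal sketch (the names and images here are placeholders, not the author's actual services):

```nix
{
  virtualisation.oci-containers.containers = {
    # each service isolated in its own container
    db = {
      image = "docker.io/library/postgres:16"; # placeholder tag
      volumes = [ "/srv/db:/var/lib/postgresql/data" ];
    };
    app = {
      image = "docker.io/library/nginx:latest"; # placeholder
      dependsOn = [ "db" ];                     # start db first
    };
  };
}
```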
indeed it is the clickbait, emotional guff
also lemmy is part of the problem
well the safe island instances can all access discussions on the free-for-all instances, and we all exchange ideas and learn from each other and get along famously
i just transitioned from a dedicated pfsense machine to an openwrt LXC container on a proxmox machine
the idea is to have 2 or more openwrt instances in different proxmox machines for some HA routing to my self hosted subnet(s)
going well so far and i think i know a lot more about routing (ha). openwrt is pretty great though.
ps. i think i’m having issues with udp port forwarding but not sure
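for reference, a UDP port forward in openwrt is a redirect section in /etc/config/firewall; the port and destination IP below are placeholder values (a hypothetical wireguard peer), not from the setup above:

```
config redirect
	option name 'example-udp-forward'
	option src 'wan'
	option src_dport '51820'
	option dest 'lan'
	option dest_ip '192.168.10.2'
	option dest_port '51820'
	option proto 'udp'
	option target 'DNAT'
```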
i have found this reference very useful https://mynixos.com/options/
yeah proxmox is not necessary unless you need lots of separate instances to play around with
this is my container config for element/matrix
the podman containers don’t run their services as root, so you have to get the file permissions right on the volumes mapped into the containers. i used top to find out which user the services were running as. you can see there are some settings in the config where you can change the user if you are having permission problems
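for example, if top shows synapse running under a particular uid, the mapped host directory has to be owned by that uid, or the container user can be pinned with the user option (the option is real; the uid:gid value here is a placeholder to check with top or ls -ln):

```nix
{
  virtualisation.oci-containers.containers.synapse = {
    image = "ghcr.io/element-hq/synapse:latest";
    # pin the container user to match the owner of /srv/synapse
    user = "991:991"; # placeholder uid:gid
    volumes = [ "/srv/synapse:/data" ];
  };
}
```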
{ pkgs, modulesPath, ... }:
{
  imports = [
    (modulesPath + "/virtualisation/proxmox-lxc.nix")
  ];

  security.pki.certificateFiles = [ "/etc/ssl/certs/ca-certificates.crt" ];

  system.stateVersion = "23.11";
  system.autoUpgrade.enable = true;
  system.autoUpgrade.allowReboot = false;

  nix.gc = {
    automatic = true;
    dates = "weekly";
    options = "--delete-older-than 14d";
  };

  services.openssh = {
    enable = true;
    settings.PasswordAuthentication = true;
  };

  users.users.XXXXXX = {
    isNormalUser = true;
    home = "/home/XXXXXX";
    extraGroups = [ "wheel" ];
    shell = pkgs.zsh;
  };
  programs.zsh.enable = true;

  environment.etc = {
    "fail2ban/filter.d/matrix-synapse.local".text = pkgs.lib.mkDefault (pkgs.lib.mkAfter ''
      [Definition]
      failregex = .*POST.* - <HOST> - 8008.*\n.*\n.*Got login request.*\n.*Failed password login.*
                  .*POST.* - <HOST> - 8008.*\n.*\n.*Got login request.*\n.*Attempted to login as.*\n.*Invalid username or password.*
    '');
  };

  services.fail2ban = {
    enable = true;
    maxretry = 3;
    bantime = "10m";
    bantime-increment = {
      enable = true;
      multipliers = "1 2 4 8 16 32 64";
      maxtime = "168h";
      overalljails = true;
    };
    jails = {
      matrix-synapse.settings = {
        filter = "matrix-synapse";
        action = "%(known/action)s";
        logpath = "/srv/logs/synapse.json.log";
        backend = "auto";
        findtime = 600;
        bantime = 600;
        maxretry = 2;
      };
    };
  };

  virtualisation.oci-containers = {
    containers = {
      postgres = {
        autoStart = false;
        environment = {
          POSTGRES_USER = "XXXXXX";
          POSTGRES_PASSWORD = "XXXXXX";
          LANG = "en_US.utf8";
        };
        image = "docker.io/postgres:14";
        ports = [ "5432:5432" ];
        volumes = [
          "/srv/postgres:/var/lib/postgresql/data"
        ];
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
      };
      synapse = {
        autoStart = false;
        environment = {
          LANG = "C.UTF-8";
          # UID = "0";
          # GID = "0";
        };
        # user = "1001:1000";
        image = "ghcr.io/element-hq/synapse:latest";
        ports = [ "8008:8008" ];
        volumes = [
          "/srv/synapse:/data"
        ];
        log-driver = "json-file";
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--log-opt" "max-size=10m" "--log-opt" "max-file=1" "--log-opt" "path=/srv/logs/synapse.json.log"
          "--pull=newer"
        ];
        dependsOn = [ "postgres" ];
      };
      element = {
        autoStart = true;
        image = "docker.io/vectorim/element-web:latest";
        ports = [ "8009:80" ];
        volumes = [
          "/srv/element/config.json:/app/config.json"
        ];
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
        # dependsOn = [ "synapse" ];
      };
      call = {
        autoStart = true;
        image = "ghcr.io/element-hq/element-call:latest-ci";
        ports = [ "8080:8080" ];
        volumes = [
          "/srv/call/config.json:/app/config.json"
        ];
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
      };
      livekit = {
        autoStart = true;
        image = "docker.io/livekit/livekit-server:latest";
        ports = [ "7880:7880" "7881:7881" "50000-60000:50000-60000/udp" "5349:5349" "3478:3478/udp" ];
        cmd = [ "--config" "/etc/config.yaml" ];
        entrypoint = "/livekit-server";
        volumes = [
          "/srv/livekit:/etc"
        ];
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
      };
      livekitjwt = {
        autoStart = true;
        image = "ghcr.io/element-hq/lk-jwt-service:latest-ci";
        ports = [ "7980:8080" ];
        environment = {
          LK_JWT_PORT = "8080";
          LIVEKIT_URL = "wss://livekit.XXXXXX.dynu.net";
          LIVEKIT_KEY = "XXXXXX";
          LIVEKIT_SECRET = "XXXXXX";
        };
        entrypoint = "/lk-jwt-service";
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
      };
    };
  };
}
this is my nginx config for my element/matrix services
as you can see i am using NixOS in a proxmox LXC with an old 23.11 nix channel, but i’m sure the config can be adapted to other NixOS environments
{ pkgs, modulesPath, ... }:
{
  imports = [
    (modulesPath + "/virtualisation/proxmox-lxc.nix")
  ];

  security.pki.certificateFiles = [ "/etc/ssl/certs/ca-certificates.crt" ];

  system.stateVersion = "23.11";
  system.autoUpgrade.enable = true;
  system.autoUpgrade.allowReboot = true;

  nix.gc = {
    automatic = true;
    dates = "weekly";
    options = "--delete-older-than 14d";
  };

  networking.firewall.allowedTCPPorts = [ 80 443 ];

  services.openssh = {
    enable = true;
    settings.PasswordAuthentication = true;
  };

  users.users.XXXXXX = {
    isNormalUser = true;
    home = "/home/XXXXXX";
    extraGroups = [ "wheel" ];
    shell = pkgs.zsh;
  };
  programs.zsh.enable = true;

  security.acme = {
    acceptTerms = true;
    defaults.email = "XXXXXX@yahoo.com";
  };

  services.nginx = {
    enable = true;
    virtualHosts._ = {
      default = true;
      extraConfig = "return 500; server_tokens off;";
    };
    virtualHosts."XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/_matrix/federation/v1" = {
        proxyPass = "http://192.168.10.131:8008";
        extraConfig = "client_max_body_size 300M;" +
          "proxy_set_header X-Forwarded-For $remote_addr;" +
          "proxy_set_header Host $host;" +
          "proxy_set_header X-Forwarded-Proto $scheme;";
      };
      locations."/" = {
        extraConfig = "return 302 https://element.XXXXXX.dynu.net;";
      };
      extraConfig = "proxy_http_version 1.1;";
    };
    virtualHosts."matrix.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      extraConfig = "proxy_http_version 1.1;";
      locations."/" = {
        proxyPass = "http://192.168.10.131:8008";
        extraConfig = "client_max_body_size 300M;" +
          "proxy_set_header X-Forwarded-For $remote_addr;" +
          "proxy_set_header Host $host;" +
          "proxy_set_header X-Forwarded-Proto $scheme;";
      };
    };
    virtualHosts."element.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:8009/";
        extraConfig = "proxy_set_header X-Forwarded-For $remote_addr;";
      };
    };
    virtualHosts."call.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:8080/";
        extraConfig = "proxy_set_header X-Forwarded-For $remote_addr;";
      };
    };
    virtualHosts."livekit.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/wss" = {
        proxyPass = "http://192.168.10.131:7881/";
        # proxyWebsockets = true;
        extraConfig = "proxy_http_version 1.1;" +
          "proxy_set_header X-Forwarded-For $remote_addr;" +
          "proxy_set_header Host $host;" +
          "proxy_set_header Connection \"upgrade\";" +
          "proxy_set_header Upgrade $http_upgrade;";
      };
      locations."/" = {
        proxyPass = "http://192.168.10.131:7880/";
        # proxyWebsockets = true;
        extraConfig = "proxy_http_version 1.1;" +
          "proxy_set_header X-Forwarded-For $remote_addr;" +
          "proxy_set_header Host $host;" +
          "proxy_set_header Connection \"upgrade\";" +
          "proxy_set_header Upgrade $http_upgrade;";
      };
    };
    virtualHosts."livekit-jwt.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:7980/";
        extraConfig = "proxy_set_header X-Forwarded-For $remote_addr;";
      };
    };
    virtualHosts."turn.XXXXXX.dynu.net" = {
      enableACME = true;
      http2 = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:5349/";
      };
    };
  };
}
you only need to reboot NixOS when something low-level has changed. i honestly don’t know exactly where that line is drawn, so i reboot quite a lot while i’m setting up a Nix server and then hardly reboot it at all from then on, even with auto-updates running
oh and if i make small changes to the services i just run sudo nixos-rebuild switch and don’t reboot
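that workflow is just the standard nixos-rebuild subcommands (shown here as a sketch; run on the server itself):

```shell
# apply configuration changes immediately, no reboot needed
sudo nixos-rebuild switch

# or: build and set as the boot default, activated on next reboot
# (useful when a kernel or other low-level change is involved)
sudo nixos-rebuild boot
```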
i guess you were able to install the os ok? are you using proxmox or regular servers?
i can post an example configuration.nix for the proxy and container servers that might help. i have to admit debugging issues with configurations can be very tricky.
in terms of security i was always worried about getting hacked. my only protections were to make regular backups of data and config so i can restore services, and to create a dmz behind my isp router with a vlan switch and a small router just for my services, to protect the rest of my home network
if you don’t need proxmox’s admin tools, try running podman in NixOS on ZFS