yeah proxmox is not necessary unless you need lots of separate instances to play around with
this is my container config for element/matrix
podman containers do not run as root, so you have to get the file permissions right on the volumes mapped into the containers. i used top to find out which user the services were running as. you can see there are some settings in the config (the commented-out user / UID / GID lines) where you can change the user if you are having permission problems
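one way to sort the volume permissions out up front is to pre-create the mapped directories with the right owner. a minimal sketch, where 991 is only a placeholder uid/gid, use whatever top (or podman inspect) shows for your services:

systemd.tmpfiles.rules = [
  "d /srv/synapse 0750 991 991 -"   # synapse data volume
  "d /srv/logs    0755 991 991 -"   # json log written by the synapse container
];

the other knob is the commented-out user = "1001:1000" line on the synapse container below.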
{ pkgs, modulesPath, ... }:
{
  imports = [
    (modulesPath + "/virtualisation/proxmox-lxc.nix")
  ];

  security.pki.certificateFiles = [ "/etc/ssl/certs/ca-certificates.crt" ];

  system.stateVersion = "23.11";
  system.autoUpgrade.enable = true;
  system.autoUpgrade.allowReboot = false;

  nix.gc = {
    automatic = true;
    dates = "weekly";
    options = "--delete-older-than 14d";
  };

  services.openssh = {
    enable = true;
    settings.PasswordAuthentication = true;
  };

  users.users.XXXXXX = {
    isNormalUser = true;
    home = "/home/XXXXXX";
    extraGroups = [ "wheel" ];
    shell = pkgs.zsh;
  };
  programs.zsh.enable = true;

  # fail2ban filter for failed Synapse logins, matched against the container's JSON log
  environment.etc = {
    "fail2ban/filter.d/matrix-synapse.local".text = pkgs.lib.mkDefault (pkgs.lib.mkAfter ''
      [Definition]
      failregex = .*POST.* - <HOST> - 8008.*\n.*\n.*Got login request.*\n.*Failed password login.*
                  .*POST.* - <HOST> - 8008.*\n.*\n.*Got login request.*\n.*Attempted to login as.*\n.*Invalid username or password.*
    '');
  };

  services.fail2ban = {
    enable = true;
    maxretry = 3;
    bantime = "10m";
    bantime-increment = {
      enable = true;
      multipliers = "1 2 4 8 16 32 64";
      maxtime = "168h";
      overalljails = true;
    };
    jails = {
      matrix-synapse.settings = {
        filter = "matrix-synapse";
        action = "%(known/action)s";
        logpath = "/srv/logs/synapse.json.log";
        backend = "auto";
        findtime = 600;
        bantime = 600;
        maxretry = 2;
      };
    };
  };

  # the element/matrix stack as podman (OCI) containers
  virtualisation.oci-containers = {
    containers = {
      postgres = {
        autoStart = false;
        environment = {
          POSTGRES_USER = "XXXXXX";
          POSTGRES_PASSWORD = "XXXXXX";
          LANG = "en_US.utf8";
        };
        image = "docker.io/postgres:14";
        ports = [ "5432:5432" ];
        volumes = [
          "/srv/postgres:/var/lib/postgresql/data"
        ];
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
      };
      synapse = {
        autoStart = false;
        environment = {
          LANG = "C.UTF-8";
          # UID="0";
          # GID="0";
        };
        # uncomment (and adjust) if you hit volume permission problems:
        # user = "1001:1000";
        image = "ghcr.io/element-hq/synapse:latest";
        ports = [ "8008:8008" ];
        volumes = [
          "/srv/synapse:/data"
        ];
        log-driver = "json-file";
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--log-opt" "max-size=10m" "--log-opt" "max-file=1" "--log-opt" "path=/srv/logs/synapse.json.log"
          "--pull=newer"
        ];
        dependsOn = [ "postgres" ];
      };
      element = {
        autoStart = true;
        image = "docker.io/vectorim/element-web:latest";
        ports = [ "8009:80" ];
        volumes = [
          "/srv/element/config.json:/app/config.json"
        ];
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
        # dependsOn = [ "synapse" ];
      };
      call = {
        autoStart = true;
        image = "ghcr.io/element-hq/element-call:latest-ci";
        ports = [ "8080:8080" ];
        volumes = [
          "/srv/call/config.json:/app/config.json"
        ];
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
      };
      livekit = {
        autoStart = true;
        image = "docker.io/livekit/livekit-server:latest";
        ports = [ "7880:7880" "7881:7881" "50000-60000:50000-60000/udp" "5349:5349" "3478:3478/udp" ];
        cmd = [ "--config" "/etc/config.yaml" ];
        entrypoint = "/livekit-server";
        volumes = [
          "/srv/livekit:/etc"
        ];
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
      };
      livekitjwt = {
        autoStart = true;
        image = "ghcr.io/element-hq/lk-jwt-service:latest-ci";
        ports = [ "7980:8080" ];
        environment = {
          LK_JWT_PORT = "8080";
          LIVEKIT_URL = "wss://livekit.XXXXXX.dynu.net";
          LIVEKIT_KEY = "XXXXXX";
          LIVEKIT_SECRET = "XXXXXX";
        };
        entrypoint = "/lk-jwt-service";
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
      };
    };
  };
}
this is my nginx config for my element/matrix services
as you can see i am running NixOS in a Proxmox LXC with an old 23.11 nix channel, but i’m sure the config can be used in other NixOS environments
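if you’re not on a proxmox LXC you’d presumably just swap the proxmox-lxc import for your own hardware config, roughly like this (the path is a placeholder):

imports = [
  # instead of (modulesPath + "/virtualisation/proxmox-lxc.nix")
  ./hardware-configuration.nix
];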
{ pkgs, modulesPath, ... }:
{
  imports = [
    (modulesPath + "/virtualisation/proxmox-lxc.nix")
  ];

  security.pki.certificateFiles = [ "/etc/ssl/certs/ca-certificates.crt" ];

  system.stateVersion = "23.11";
  system.autoUpgrade.enable = true;
  system.autoUpgrade.allowReboot = true;

  nix.gc = {
    automatic = true;
    dates = "weekly";
    options = "--delete-older-than 14d";
  };

  networking.firewall.allowedTCPPorts = [ 80 443 ];

  services.openssh = {
    enable = true;
    settings.PasswordAuthentication = true;
  };

  users.users.XXXXXX = {
    isNormalUser = true;
    home = "/home/XXXXXX";
    extraGroups = [ "wheel" ];
    shell = pkgs.zsh;
  };
  programs.zsh.enable = true;

  security.acme = {
    acceptTerms = true;
    defaults.email = "XXXXXX@yahoo.com";
  };

  services.nginx = {
    enable = true;
    virtualHosts._ = {
      default = true;
      extraConfig = "return 500; server_tokens off;";
    };
    virtualHosts."XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/_matrix/federation/v1" = {
        proxyPass = "http://192.168.10.131:8008";
        extraConfig = "client_max_body_size 300M;" +
          "proxy_set_header X-Forwarded-For $remote_addr;" +
          "proxy_set_header Host $host;" +
          "proxy_set_header X-Forwarded-Proto $scheme;";
      };
      locations."/" = {
        extraConfig = "return 302 https://element.XXXXXX.dynu.net;";
      };
      extraConfig = "proxy_http_version 1.1;";
    };
    virtualHosts."matrix.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      extraConfig = "proxy_http_version 1.1;";
      locations."/" = {
        proxyPass = "http://192.168.10.131:8008";
        extraConfig = "client_max_body_size 300M;" +
          "proxy_set_header X-Forwarded-For $remote_addr;" +
          "proxy_set_header Host $host;" +
          "proxy_set_header X-Forwarded-Proto $scheme;";
      };
    };
    virtualHosts."element.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:8009/";
        extraConfig = "proxy_set_header X-Forwarded-For $remote_addr;";
      };
    };
    virtualHosts."call.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:8080/";
        extraConfig = "proxy_set_header X-Forwarded-For $remote_addr;";
      };
    };
    virtualHosts."livekit.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/wss" = {
        proxyPass = "http://192.168.10.131:7881/";
        # proxyWebsockets = true;
        extraConfig = "proxy_http_version 1.1;" +
          "proxy_set_header X-Forwarded-For $remote_addr;" +
          "proxy_set_header Host $host;" +
          "proxy_set_header Connection \"upgrade\";" +
          "proxy_set_header Upgrade $http_upgrade;";
      };
      locations."/" = {
        proxyPass = "http://192.168.10.131:7880/";
        # proxyWebsockets = true;
        extraConfig = "proxy_http_version 1.1;" +
          "proxy_set_header X-Forwarded-For $remote_addr;" +
          "proxy_set_header Host $host;" +
          "proxy_set_header Connection \"upgrade\";" +
          "proxy_set_header Upgrade $http_upgrade;";
      };
    };
    virtualHosts."livekit-jwt.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:7980/";
        extraConfig = "proxy_set_header X-Forwarded-For $remote_addr;";
      };
    };
    virtualHosts."turn.XXXXXX.dynu.net" = {
      enableACME = true;
      http2 = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:5349/";
      };
    };
  };
}
you only need to reboot NixOS when something low level (like the kernel) has changed. i honestly don’t know exactly where that line is drawn, so i reboot quite a lot when i’m setting up a Nix server and then hardly reboot it at all from then on, even with auto-updates running
oh and if i make small changes to the services i just run sudo nixos-rebuild switch
and don’t reboot
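fwiw if you ever want the auto-updates staged for the next boot instead of switching live, i think there is an operation option for that. a rough sketch, assuming your channel has it:

system.autoUpgrade = {
  enable = true;
  operation = "boot";   # build and activate at the next reboot instead of switching live
  allowReboot = false;
};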
i guess you were able to install the os ok? are you using proxmox or regular servers?
i can post an example configuration.nix for the proxy and container servers that might help. i have to admit debugging issues with configurations can be very tricky.
in terms of security i was always worried about getting hacked. the only protection for that was to make regular backups of data and config so i can restore services, and to create a DMZ behind my ISP router with a VLAN switch and a small router just for my services, to protect the rest of my home network
NixOS with the nginx service does all the proxying and SSL stuff; fail2ban is in there as well
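the backups can be done however you like; as one sketch using the NixOS restic module, something roughly like this would snapshot the /srv data and the config (repository, password file and schedule are placeholders):

services.restic.backups.srv = {
  paths = [ "/srv" "/etc/nixos" ];
  repository = "/mnt/backup/restic-repo";   # placeholder, could also be an sftp: or s3: remote
  passwordFile = "/root/restic-password";   # placeholder
  initialize = true;                        # create the repo if it does not exist yet
  timerConfig.OnCalendar = "daily";
};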
Leptos is very React-like but it’s Rust so it will be Rusty sometimes. you mentioned you’ve done the back end in Rust anyway.
with LLMs do you mean code suggestions like Copilot or actually integrating with an LLM API?
do the front end in Leptos and host it on ForgeJo
oh right!
usenet was Lemmy without the web UI
oh it did? i missed that scrolling through the results somehow, thanks!
no matrix instances!!??
i wish Lemmy would embrace Mastodon and make it easy for Lemmy users to join that network
yes, i have a few Rust framework based sites for mostly personal use
if your service has to be public i would recommend getting a switch that can do VLANs and putting your server inside its own VLAN DMZ, so if you get hacked the attacker will be trapped inside the VLAN
I mean Pingora outperforms nginx, which is why Cloudflare made it, I believe
Would that lose the performance benefits that Pingora gets from being compiled, without a configuration file?
Yep, it would need to be compiled from the given configuration. I’m vaguely interested in trying. I will look up the Rust builders. Thank you
Yeah, I love that about Nix, and I can imagine a clever package writer could make a Pingora binary that mimics that configurability
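a rough sketch of what that packaging could look like with the standard Rust builder in nixpkgs; the crate name, source path and hash are placeholders for a hypothetical Pingora-based proxy:

{ pkgs, ... }:
{
  environment.systemPackages = [
    (pkgs.rustPlatform.buildRustPackage {
      pname = "my-pingora-proxy";        # hypothetical crate built on pingora
      version = "0.1.0";
      src = /path/to/my-pingora-proxy;   # placeholder source path
      cargoHash = pkgs.lib.fakeHash;     # replace with the real hash after the first build attempt
      # the routing/"config" lives in the Rust source, so changing it means a rebuild
    })
  ];
}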
i have found this reference very useful https://mynixos.com/options/