

Sounds like a job for a pair of secondhand Nanobeams or something similar.
I second the other commenter who suggested using WISP gear. If you have clear Fresnel zones it should work a treat.


I second this. Gluetun makes it so easy; working with docker’s internal networking is such a pain.


Luckily they are on 2.0.1 now, so there have been two stable versions by now.


Is the external libraries feature maybe what you’re looking for?
There’s already an issue open for it: https://github.com/immich-app/immich/issues/1713


If you search for pfsense alias script, you’ll find some examples of updating aliases from a script, so you’ll only need to write the part that gets the hostnames. Since it sounds like the hostnames are unpredictable, that might be hard: the only way to get them on the fly is to listen for what hostnames are being resolved by clients on the LAN, probably by hooking into unbound or whatever. If you can share what the service is, it would be easier to determine whether there’s a shortcut. For example, if one of the hostnames is predictable and the subdomains always sit in the same CIDR as it, the script can just look up that one hostname’s CIDR. Another possibly easier alternative would be to find an API that lets you search the certificate transparency logs for the main domain, which would reveal all subdomains that have SSL certificates. You could then just load all those subdomains into the alias and let pfsense look up the IPs (see the sketch below).
I would investigate whether the IPs of each subdomain follow a pattern, like a particular CIDR or a unique ASN, because reacting to DNS lookups in real time will probably mean some lag between the first request and the routing being updated, compared to a solution that can proactively route all relevant CIDRs or all CIDRs assigned to an ASN.
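For the certificate transparency route, here’s a minimal sketch using crt.sh’s JSON output (the domain is a placeholder); it prints one hostname per line, ready to load into an alias:

```python
# Minimal sketch: enumerate subdomains of a domain via crt.sh's
# certificate transparency search (JSON output).
import requests

DOMAIN = "example.com"  # placeholder: the service's main domain

resp = requests.get(
    "https://crt.sh/",
    params={"q": f"%.{DOMAIN}", "output": "json"},
    timeout=30,
)
resp.raise_for_status()

names = set()
for entry in resp.json():
    # name_value can contain several newline-separated SANs per certificate
    for name in entry["name_value"].splitlines():
        if not name.startswith("*."):  # aliases can't resolve wildcards
            names.add(name.lower())

for name in sorted(names):
    print(name)
```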


I think the way people do it is by making a script that gets the hostnames and updates the alias, then just scheduling it in pfsense (rough sketch below). I’ve also seen ASN-based routing using a script, but that’ll only work on large services that use their own AS. If the service is large enough, they might predictably use IPs from the same CIDR, so if you spend some time collecting the relevant IPs, you might find that even when the hostnames are new and random, they always go to the same pool of IPs. That’s the lazy way I did selective routing to GitHub, since it was always the same subnet.
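As a rough sketch of the alias-updating half: pfSense aliases are backed by pf tables, so `pfctl` can swap their contents in one shot (the table name and hostnames here are placeholders):

```python
# Rough sketch: resolve a list of hostnames and replace the contents of
# the pf table backing a pfSense alias with the resulting addresses.
import socket
import subprocess

HOSTNAMES = ["example-service.com", "cdn.example-service.com"]  # placeholders
TABLE = "selective_vpn"  # placeholder: must match your alias/table name

addrs = set()
for host in HOSTNAMES:
    for info in socket.getaddrinfo(host, None, proto=socket.IPPROTO_TCP):
        addrs.add(info[4][0])  # the resolved IPv4/IPv6 address

# "pfctl -t <table> -T replace" atomically swaps the table contents.
subprocess.run(["pfctl", "-t", TABLE, "-T", "replace", *sorted(addrs)], check=True)
```
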
That’s what I do. 1.6TB currently on rsync.net, only my personal artifacts, excluding all media that can be reacquired, and it’s a reasonable $10/mo. Synced daily at 4am.
If I wanted my backups to include my media collection or anything exceeding several TB, I would build a second NAS and drop it at my parents’.


My homelab has been mostly on autopilot for a while. Synology 6 bay running most lighter weight docker stuff (arrstack, immich, etc) and an Intel nuc running heavy stuff (quicksync transcodes for Plex+jf, ollama). Both connected to digitalocean via WG for reverse proxy due to CGNAT.
My router’s SSD either died or got corrupted this past week; I haven’t looked much at the old SSD besides trying to extract the config off of it. I ended up just fresh installing opnsense because I didn’t have any recent backups (my Synology and NUC back up to rsync.net, but I haven’t gotten around to automated backups for my router, since it’s basically a plain config, or for my cloud reverse proxy, which is just a basic docker compose + small haproxy config). Luckily, my homelab reaching out to the cloud reverse proxy means there’s basically no important config on my router anymore; the boxes just need DHCP and a connection.
Besides that, the arrstack just chugs along on its own.
I recently figured out I can load jellyfin playback URLs into vrchat video players, either direct stream or through the transcoding pipeline as an m3u8 that live transcodes based on the url parameters you set. This is great because the way watch parties in VRChat works is that everyone in an instance loads the same URL pasted into media players and syncs the playback. That means you need to have a publicly accessible url (preferably with a token of some sort) that can be loaded by an arbitrary number of unique IP addresses simultaneously, which I don’t think is doable with Plex.
I’m now working on a little web app to let me log into Jellyfin, search/browse media, and generate the links with arbitrary or pre-set transcode settings for easy copy/pasting into VRChat. The reason it’s needed is that Jellyfin only provides the original file without transcoding when you use the “copy stream” option, so I believe the only way to get a transcoded stream URL currently is to set the web interface to specific settings and grab the URL from the network tab. But that doesn’t let you set arbitrary stuff like codecs, subtitle burn-in, or overriding what it thinks you support. So a simple app to construct the URL will make VRChat watch parties a lot easier.
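For a rough idea of what constructing such a URL looks like, here’s a sketch against Jellyfin’s HLS endpoint. The server URL, API key, item id, and the exact parameter set are my assumptions (they mirror what the web client sends), so double-check against what your Jellyfin actually emits:

```python
# Sketch: build a Jellyfin HLS URL that forces a server-side transcode.
from urllib.parse import urlencode

JELLYFIN_URL = "https://jellyfin.example.com"  # placeholder server URL
API_KEY = "CHANGE_ME"                          # placeholder Jellyfin API key
ITEM_ID = "CHANGE_ME"                          # placeholder movie/episode id

params = {
    "api_key": API_KEY,
    "MediaSourceId": ITEM_ID,      # often the same as the item id
    "VideoCodec": "h264",          # force a codec instead of direct play
    "AudioCodec": "aac",
    "VideoBitrate": 8_000_000,     # cap the live transcode around 8 Mbps
    "AudioBitrate": 192_000,
    "SubtitleMethod": "Encode",    # burn subtitles into the video stream
    "SubtitleStreamIndex": 0,      # placeholder subtitle track index
}
print(f"{JELLYFIN_URL}/Videos/{ITEM_ID}/master.m3u8?{urlencode(params)}")
```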


Fwiw, Anubis is adding a no-JS meta refresh challenge; if it doesn’t have issues, it will soon become the new default challenge.
Imo that’s perfectly fine and not idiotic if you have a static IP, no ISP-blocked ports (or don’t care about using alt ports), and don’t mind people who find your domain knowing your IP.
I did basically that when I had a fiber line, but then I added a local haproxy in front to handle additional subdomains. I feel like people gravitate towards recommending that because it works regardless of the answers to the other questions, even your security tolerance if they recommend access only over VPN.
This is 99% my setup, just with a traefik container attached to my wireguard container.
Can recommend, especially because I can move apartments any time, not care about CGNAT (my current situation, which I predicted would be the case), and easily switch to any backup connection in a pinch by sticking my boxes on any network with DHCP that can reach the internet (like a 4G hotspot or a nanobeam pointed at a public wifi down the road) without reconfiguring anything.


Is there any way to connect the bsky android app to the atproto.africa relay or a third party appview that uses the atproto.africa relay? I wouldn’t mind using bsky more if there was a clone of the android app that doesn’t use the bsky relay/appview. Looking at whtwnd it appears to be just web and not native apps?
I would like to host my own PDS and access bsky through a native app using third party relay+appview, but I haven’t seen a way to do this yet.


What model size did you run on your laptop? I have an Intel Nuc with an i7 and I run various models on CPU (it doesn’t have a dedicated GPU) and while I can’t run stuff larger than ~14b or so, models up to around ~7b aren’t too slow. If I try to run a 32b then I get a similar experience to you. I tend not to go below 4b because that’s when it starts being dumb and not following instructions well, so just depends on how complex your task is.
Immich is pretty good for this if you take pictures at each location. It has a global map that shows all your photos with a heatmap-style display and a drawer that shows a grid of the photos within your viewport as you pan and zoom around. It doesn’t seem like you can view a specific album on the map currently, but you can at least filter the map to favorites or a date range.


I use a .dev and it just works with letsencrypt. I don’t do anything special with wildcards, I just let traefik request a cert for every subdomain I use and it works. I use the TLS challenge, which works on port 443, so I don’t think HSTS or port 80 matters, but I still forward port 80 so I can serve an http->https redirect, since stuff like curl and probably other tools might not know about HSTS.
Gotcha, thanks for the info! It looks like I would be fine with ocis or opencloud, but since my main use case and pain points are document editing, which is Collabora, it probably wouldn’t change much besides simplifying the docker setup (I had to make a gross pile of nginx config pieced together from many forum help posts to get the Nextcloud FPM container to work smoothly). But it already works, so unless it breaks there’s little incentive for me to change.
Ah I see, I guess at least that would help with the main UI, but I’m already using Collabora through the Collabora CODE server in Nextcloud, so it sounds like I’ll probably have the same document editing experience with OCIS/opencloud. I used to use OnlyOffice, but after I tried out their mobile app it started blocking me from editing documents through the Nextcloud app (which seemed to use the OnlyOffice web UI), so I was forced to switch unless I started paying for OnlyOffice.
I use gluetun to connect specific docker containers to a VPN without interfering with other networking, since it’s all self contained. It also has lots of providers built in, which is convenient: you can just set the provider, your password, and your preferred region instead of needing to manually enter connection details or manage lists of servers (it automatically updates its own cached server list from your provider, through the VPN connection itself).
Another nice feature is that it supports scripts for port forwarding, which works out of the box for some providers. So it can automatically get the forwarded port and then execute a custom script to set that port in your torrent client, soulseek, or whatever.
I could just use a wireguard or openvpn container, but this also makes it easy to hop between vpn providers just by swapping the connection details regardless of whether the providers only support wg or openvpn. Just makes it a little more universal.
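As a sketch of that custom port-forwarding script, assuming gluetun’s default forwarded-port status file and qBittorrent’s Web API (the URL and credentials are placeholders):

```python
# Sketch: read the port gluetun forwarded and set it as qBittorrent's
# listening port via the Web API.
import json
import requests

PORT_FILE = "/tmp/gluetun/forwarded_port"  # gluetun's default status file
QBIT_URL = "http://localhost:8080"         # placeholder Web UI address

with open(PORT_FILE) as f:
    port = int(f.read().strip())

session = requests.Session()
# Log in first (skippable if localhost auth bypass is enabled in qBittorrent).
session.post(f"{QBIT_URL}/api/v2/auth/login",
             data={"username": "admin", "password": "CHANGE_ME"})
# setPreferences expects a single form field named "json".
session.post(f"{QBIT_URL}/api/v2/app/setPreferences",
             data={"json": json.dumps({"listen_port": port})})
print(f"qBittorrent listen port set to {port}")
```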