

I think NetworkChuck has a good set of tutorial videos about self-hosting. For the most part you can search for whatever you want info on and he probably has a video on it. E.g. Nginx: https://m.youtube.com/@NetworkChuck/search?query=Nginx


I think if you don’t assign a tag on the Release Profile, it applies to all series.


I have never done RAID over USB, but have done various JBOD setups using SCSI. The general idea is that USB’s easily disconnected connector, plus the latency overhead of translating SATA to USB and back to SATA again, means you have a higher chance of corruption. SCSI setups typically have connectors with locking mechanisms to prevent easy disconnection.
If eSATA is an option it might be better for performance, and it has a latching mechanism to prevent easy disconnection. You can get a 2-port eSATA PCI card for about 50 bucks.
Oh, and if you have a free PCI slot, you could add internal SATA ports and mount the drives internally.


I know Tailscale prefers being installed on every machine, but not all of my machines are even capable of running custom code. I use a single Tailscale subnet router that publishes my internal network to Tailscale, and if the internet is down everything still works fine internally.
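As a rough illustration, the subnet-router side is basically one command on the bridging machine; the subnet below is a placeholder for your own LAN range, and the advertised route still has to be approved in the Tailscale admin console:
tailscale up --advertise-routes=192.168.1.0/24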


With TrueNAS you can do it two ways: iSCSI disks that are mounted to the VMs, or NFS. With iSCSI you won’t have access to the data from the TrueNAS side, as the data is stored as a volume file. With NFS you get the best of both worlds, as you’ll be able to access the files via other TrueNAS services like SMB/SFTP. I have my Jellyfin/Plex running via NFS and have had few issues, though I haven’t tested it with large 4K/8K videos yet. I mostly run 1080p.
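For illustration, mounting a TrueNAS NFS export on the Jellyfin/Plex host is a one-liner; the hostname and dataset/mount paths here are placeholders:
mount -t nfs truenas.local:/mnt/tank/media /mnt/media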


No wildcard support, sigh.
+1 for Backblaze. They have convenient backup software too that works great. I back up my parents’ laptop using it, and use their S3 storage for my NAS backups.
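If you want to script the NAS side yourself, something like rclone can talk to B2 (or its S3-compatible endpoint). A minimal sketch, assuming you’ve already configured a remote named b2; the bucket name is a placeholder:
rclone sync /mnt/tank/backups b2:my-nas-backups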


This would depend on whether the limit is defined as ingress, egress, or both. For example, AWS has free ingress traffic from the internet but charges for egress traffic to the internet.
A better solution would be to find an unmetered service, which means you have a fixed transfer speed (e.g. 500 Mbit/s) but no cap on total data transferred. OVH offers this in their VPS products.


Sadly, most of the ones I’ve found are too complicated, and getting all devices to accept the CA is more hassle than it’s worth for self-hosting. I’ve given up and just buy my wildcard cert for $60/yr and put it on everything.


The DNS-01 challenge can be used to generate a wildcard cert by creating the requested DNS record in your public DNS zone; you can then use that cert for internal servers/DNS. With certain DNS providers it can even be automated.
https://eff-certbot.readthedocs.io/en/stable/using.html#third-party-plugins
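As a hedged example using certbot’s Cloudflare DNS plugin (the domain and credentials path are placeholders; other DNS plugins follow the same pattern):
certbot certonly --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d "*.example.com"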


While this is a great writeup on Lemmy instances, the thread was specifically about Mastodon and its numerous forks. I believe they use the same underlying tech but are vastly different things. The instance I found wasn’t quite Mastodon, apparently; even though it works very similarly, the app designed to connect to a Mastodon instance wouldn’t connect to it.


I’ve been looking for a new instance to join for various reasons. I ended up setting up an account somewhere and spending 2 hours manually copying over various settings, only to find my Moshidon client wouldn’t even connect to that new instance. Normal people are just going to quit when that happens.

You mentioned ping. If you’re using Termux you may need to manually update its DNS settings (these are separate from the system DNS). The file is /data/data/com.termux/files/usr/etc/resolv.conf
To make it roam, you probably want your home DNS first, then some internet resolvers after that.
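Something like this, with placeholder addresses (home router first, then a public resolver as a fallback):
nameserver 192.168.1.1
nameserver 1.1.1.1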

In days past, some drive vendors had different sector layouts for their drives, which could cause issues with RAID. Pretty sure most drives nowadays use the same layout and you won’t run into any issues. I still try to get the same drive model anyway, just to be perfectly sure there are no problems.
Even then you may run into weird issues: one of my 1.2 TB enterprise SSDs was reporting 1.12 TiB rather than the 1.09 TiB the other 7 drives had. TrueNAS refused to build a vdev with that drive and I had to return it to get a new one.


Typically a fiber ISP will run fiber optics only to your demarc (demarcation) point. This is usually where your main cable (before any splits) or DSL line used to come in (in the US they’ve been using orange conduit to indicate this, and it will usually run to a panel in a closet or laundry room). At the demarc they’ll install one of two things: a basic fiber-to-ethernet converter, which will provide you a single ethernet port and a pure tap to the internet, or a Gateway device that converts the fiber to multiple ethernet ports with NAT (usually providing other capabilities like TV, phone, etc.).
If you have the latter, you may not get much say in what you can do with your connection, and would be limited to a DMZ mode that is configured on the Gateway. What you put behind the converter or gateway is up to you.

I’ve got my mom set up on their PC backup service, no complaints so far (on the Backblaze side, that is; she still insists that she doesn’t need continuous backups even though I’ve had to restore multiple times for her).
I switched my backups from Crashplan to B2 as it was significantly cheaper than going to AWS. B2 is more expensive than what I was paying for Crashplan Pro Unlimited (about 8x for the amount of data I have), but I have more peace of mind with it not relying on Crashplan’s terrible Java client.
A reminder that the only good backup is a tested backup.


Yes, ULAs are one of the exceptions I mentioned. It covers fc00::/7, which is fc00 through fdff, though I believe most use just the fd00::/8 half. I use one for an intermediate network between my edge router and my primary firewall so I don’t consume one of my limited /64 networks.
I haven’t played with IPv6 NAT much. I know its use is a bit discouraged, as NAT was always designed as a stopgap measure for IPv4 exhaustion. It might be a good option if you need additional space and your ISP doesn’t support additional prefixes. Just keep in mind that if you use these addresses in DNS, they won’t be accessible externally.


It’s a bit complicated and depends on your ISP’s level of support.
If your ISP supports basic IPv6 they will likely use SLAAC or DHCPv6 to advertise the /64 that any directly connected devices, like your router, can use (/64 being the default size for a single LAN segment, even between point-to-point connections). If you have devices behind that router that want to use IPv6, you will need additional prefixes. The most common method nowadays is Prefix Delegation (DHCPv6-PD), where your router asks the upstream router for an additional routable prefix which you then use on another interface of the router. The RFC for prefix delegation recommends a /48, but many ISPs are not delegating that much. I only get half of a /60 from my ISP’s modem.
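On a Linux router running dhcpcd, a minimal prefix-delegation sketch looks roughly like this (interface names are placeholders, and most router distros expose this through their own UI instead):
# /etc/dhcpcd.conf
interface eth0
  ipv6rs
  ia_na 1
  ia_pd 2 eth1/0/64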
If the ISP just provides you a static routable prefix, then you would assign that to your router’s interface and enable SLAAC/DHCPv6 to give out that prefix. This only needs to be configured on a single device, which is why they don’t recommend hard-coding servers and workstations with IPv6 addresses.
Keep in mind that your router will also need a firewall, as all of these IPv6 prefixes are routable and public. While finding a host in IPv6 space is quite like finding a needle in a haystack, you could still find yourself having a bad day if you treat it like private IPv4 space.
The end result, though, is that you set up DNS so that devices register their IPv6 addresses and it just works. There’s also the mDNS protocol, which supports IPv6 and will do segment-local resolution for device names.


On one hand, you definitely don’t want to be assigning manual/static IPv6 addresses to all your devices, because if your prefix ever changes you’ll have to update it everywhere. IPv6 doesn’t really have a concept of private address space (with a few exceptions). On the other hand, most modern IPv6 stacks support dynamic protocols like SLAAC while also letting you attach a static suffix to the published prefix (e.g. if you want :0:0:1234:1 to go to your server and SLAAC gets the prefix 200x::5678/64, your server would assign itself 200x::5678:0:0:1234:1).
DHCPv6 fixes a lot of these headaches for managed networks by allowing you to reserve a specific IPv6 address for a given DUID.
IMO, your network, do what you want. I have two jump Raspberry Pis with static suffixes so I always know where they are without relying on DNS or whatever. Edit: I apparently misremembered how I had these set up. I use a custom interface-up script to take the SLAAC prefix and append the custom suffix to it as a secondary IP.
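For the curious, that up script is only a couple of lines. A rough sketch, where eth0 and the suffix are placeholders, assuming the only global address on the interface is the SLAAC one and its prefix isn’t zero-compressed in the ip output:
# grab the global /64 prefix on eth0 and add a fixed-suffix secondary address
PREFIX=$(ip -6 addr show dev eth0 scope global | awk '/inet6/ {print $2; exit}' | cut -d: -f1-4)
ip -6 addr add "${PREFIX}::1234:1/64" dev eth0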

Instead of a default gateway, you can configure a route for just your VPN server’s IP address via your gateway. You might also need DNS servers depending on your setup.
Example: ip route add 1.1.1.1/32 via 192.168.1.1 dev eth0
Note that without a script this may be flaky if you’re using DNS to resolve the VPN endpoint. It might be better to have a script that resolves the IP(s) of the VPN and then adds routes, for example:
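A rough sketch of such a script (the VPN hostname and LAN gateway are placeholders):
for ip in $(getent ahostsv4 vpn.example.com | awk '{print $1}' | sort -u); do
  ip route add "${ip}/32" via 192.168.1.1 dev eth0
done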
That being said, your VPN software is usually designed to install routes that take priority, so they get used before the local default route. One such way is adding half-internet routes (0.0.0.0/1 and 128.0.0.0/1), which are more specific than the default route and therefore preferred. If you run ip route once connected you may see those routes present.
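Roughly what those look like if added by hand (tunnel gateway and interface are placeholders):
ip route add 0.0.0.0/1 via 10.8.0.1 dev tun0
ip route add 128.0.0.0/1 via 10.8.0.1 dev tun0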
While I’m not sure if it works in rootless mode, take a look at the binhex/arch-delugevpn project, which has scripts to set up a similar network-isolation environment.