Point of clarification: DAC is copper, AOC is fiber.
A lot of 10G equipment will also support 2.5G/5G SFP+ modules, so it can still be beneficial to go 10G on the core equipment.
There’s Finamp, a music client for Jellyfin with offline playback. I’ve not used it personally yet, but with Spotify ratcheting up prices again I’m in the process of switching to self-hosting my music library. When that’s up and running, it’s at the top of my list for Android clients.
Ok that’s just not true at all.
Core temps ramp up astonishingly fast on RPi!
ducks
You should know that not all clients display your display name; some only show your username@instance.
It’s not apparent to everyone that your name is Onno.
There is no original thought.
A friend of mine had some explaining to do when he screwed up a DHCP config change and started routing his guest wifi through his “personal” pihole instead of the restricted guest one (he had family/children over often and did not want to be the reason nephew Timmy got an eyeful of wet bush or a beheading).
His family-friendly pihole was at holypi.lastname.local
and his private one was creampi.lastname.local
The other poster said it’s about convenience but that’s not really true. The claim to fame for NVMe drives is speed: SATA SSDs top out around 550 MB/s (the practical limit of the SATA III interface), while the latest NVMe drives can hit 7,000+ MB/s because they talk to the system directly over PCIe lanes.
It’s for this reason that you should pay attention to which NVMe drive you choose (if speed is what you’re after). SATA-based M.2 drives exist – and they run at SATA speeds – so if you see a cheap M.2 drive for sale it’s probably SATA and intended for bulk storage on laptops and SFF PCs without room for 2.5" drives. Double-check the specs to be sure what you’re getting.
If you’re practicing 3-2-1 backups then you probably don’t need to bother with RAID.
I can hear the mechanical keyboards clacking; hear me out: If you’re not committed to a regular backup strategy, RAID can be a good way to protect yourself against a sudden hard drive failure, at which point you can do an “oh shit” backup and reconsider your life choices. RAID does nothing else beyond that. If your data gets corrupted, the wrong bits will happily be synced to the mirror drives. If you get ransomwared, congratulations: you now have two copies of your inaccessible encrypted data.
Skip the RAID and set up backups. It can be as simple as an external drive that you plug in once a week and run rsync against, or you can pay for a service like Backblaze that has a client to handle things, or you can set up a NAS that receives a nightly backup from your PC and then pushes a copy up to something like B2 or S3 Glacier.
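To make the simplest option concrete, here’s a minimal sketch of the weekly external-drive approach; the paths and mount point are placeholders, and it assumes the drive is already mounted:

# mirror the home directory to the external drive (example paths)
# -a = archive mode (recursive, preserves permissions and timestamps)
# --delete = remove files from the backup that no longer exist at the source
rsync -a --delete /home/me/ /mnt/backup-drive/home-me/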
Most people set up a reverse proxy, yes, but it’s not strictly necessary. You could certainly change the port mapping to 8080:443 and expose the application port directly that way, but then you’d obviously have to jump through some extra hoops for certificates, etc.
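In compose terms that’s just the ports entry; a minimal sketch, where the service name and image are hypothetical stand-ins for whatever you’re running:

services:
  myapp:
    image: example/myapp:latest   # hypothetical image that serves HTTPS on 443
    ports:
      - 8080:443   # host port 8080 -> container port 443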
Caddy is a great solution (and there’s even a container image for it 😉)
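For reference, a Caddy service in compose might look roughly like this; the Caddyfile (which holds your reverse_proxy rules) and the volume paths here are assumptions, not a drop-in config:

services:
  caddy:
    image: caddy:latest
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile   # your site/proxy config
      - ./caddy-data:/data                 # cert storage so renewals survive restarts
    restart: unless-stopped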
The great thing about containers is that you don’t have to understand the full scope of how they work in order to use them.
You can start by learning how to use docker-compose to get a set of applications running; once you understand that (which is relatively easy), go a layer deeper and learn how to customize a container, then how to build your own container from the ground up and/or containerize an application that doesn’t ship its own images.
But you don’t need to understand that stuff to make full use of them, just like you don’t need to understand how your distribution builds an rpm or deb package. You can stop whenever your curiosity runs out.
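And when you do reach the “build your own” stage, it’s a one-line change in compose: point the service at a local Dockerfile instead of a published image. A sketch, with a hypothetical service name and path:

services:
  myapp:
    build: ./myapp   # compose builds the image from ./myapp/Dockerfile
    ports:
      - 8080:8080
    restart: unless-stopped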
You don’t actually have to care about defining IP addresses, CPU/RAM reservations, etc. Your docker-compose file just defines the applications you want and a port mapping or two, and that’s it.
Example:
---
version: "2.1"
services:
  adguardhome-sync:
    image: lscr.io/linuxserver/adguardhome-sync:latest
    container_name: adguardhome-sync
    environment:
      - CONFIGFILE=/config/adguardhome-sync.yaml
    volumes:
      - /path/to/my/configs/adguardhome-sync:/config
    ports:
      - 8080:8080
    restart: unless-stopped
That’s it: you run docker-compose up and the container starts, reads your config from your config folder, and exposes port 8080 to the rest of your network.
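In practice you’ll usually want it running in the background, which is just one flag:

# start (or update) the stack detached, then tail the logs if needed
docker-compose up -d
docker-compose logs -f adguardhome-sync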
If anything, containers are less resource-intensive than VMs.
Ultimately it’s a matter of personal choice and risk tolerance.
The Z1 will be simpler and have larger usable capacity, but if you have a drive fail you’ll need to get it replaced quickly or risk having to rebuild/restore if a second drive follows the first one to the grave.
Your Z2 setup right now can have two drives fail and still be online, and having a wider spread of power-on hours is usually a good thing in terms of failure probability.
I manage a large number (roughly 14,000) of on-site RAID1 arrays in various environments, and there is definitely a trend for drives shipped at the same time to fail at roughly the same time. It’s common enough that we often intentionally swap drives out before shipping a new unit to the customer site, so an array doesn’t start life with two drives from the same batch.
In my homelab I’m much more tolerant of risk, since I trust my 3-2-1 backup solution and a downed NAS isn’t going to substantially affect anything while I wait for a replacement drive.