

I’m not scrutinizing it much.
Same. I just run a Minecraft server for my kid and his friends and a static HTML blog, so I’m ok with it.
I’m fairly sure it’s a background migration task, and I have a feeling it depends on your region.


I haven’t had my instances deleted, but they do some kind of maintenance blip every day that my monitoring sees as 3 seconds of downtime, so maybe keep that in mind.


Not enough info, but it sounds almost like you’re creating the snapshots locally and sending those over instead of snapshotting to the destination directly.
Sanoid and syncoid are Jim Salter’s creations. Check out his blog at mercenarysysadmin.com for some examples of sanoid and syncoid; Klara Systems also has a number of deep dives into those utilities.
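For what it’s worth, here’s a rough sketch of how I’d drive syncoid from a script so the snapshot streams go straight to the backup box. The pool, dataset, and host names are made up, and it assumes sanoid is already taking the snapshots on the source; adjust to taste.

```
#!/usr/bin/env python3
"""Sketch: push ZFS datasets to a backup host with syncoid.

Assumes sanoid already takes the snapshots locally and syncoid is
installed; pool, dataset, and host names below are hypothetical.
"""
import subprocess
import sys

DATASETS = ["tank/media", "tank/documents"]     # made-up datasets
TARGET_HOST = "backup@nas.example.lan"          # made-up backup host
TARGET_POOL = "backuppool"                      # made-up destination pool

def replicate(dataset: str) -> None:
    """Stream the dataset's existing snapshots straight to the destination."""
    target = f"{TARGET_HOST}:{TARGET_POOL}/{dataset.split('/', 1)[1]}"
    cmd = [
        "syncoid",
        "--no-sync-snap",   # reuse sanoid's snapshots instead of cutting extras
        dataset,
        target,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    for ds in DATASETS:
        try:
            replicate(ds)
        except subprocess.CalledProcessError as exc:
            print(f"replication of {ds} failed: {exc}", file=sys.stderr)
            sys.exit(1)
```

Cron that nightly and syncoid works out the incremental sends from the last common snapshot, so nothing gets dumped to local files first.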


Love the enthusiasm, but let’s stop casting this as an end-user-only problem. The real issue is, once again, large corporations using and taking advantage of OSS while putting ZERO money or work back into it. It’s victim blaming with extra steps, and us blaming each other is exactly what the real culprits want.
If it makes us feel better that we can pay for these things on a regular basis, great. But massive OSS projects can’t thrive on a few of us donating.
You should check out the NASCompares review of the pre-release: it’s insanely expensive, and he questions who exactly the target audience is.
Beyond that, he reviews the specs quite nicely (as usual).
Frigate is popular.
I used to use ZoneMinder and it worked well, but you have to be very familiar with ONVIF, primary/secondary channels, and key frames to get the best out of it.
I only switched to Frigate because of the person/animal detection. It’s OK, but it needs polish in a few areas like event retention, and it could stand some more approachable documentation.


Pangolin is a reverse proxy implementation, so it doesn’t really achieve the same thing as VPN software.


Kodak said “we don’t believe digital photography will take over” and iRobot is like “we’ve tried nothing and we’re all out of ideas”
I didn’t know Jellyfin could search torrents. Do you mean Radarr/Sonarr?


Sorry, I bungled that by not adding context.
You’re right, the Subsonic server and its API are the source of all this. However, that API is completely open, which means there are many, many client and server applications that use it successfully.
Navidrome is a good server option, there’s Tempus, and there are a ton more!


Close, it’s this: https://subsonic.org/pages/api.jsp
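If anyone wants to see how simple it is to talk to, here’s a minimal sketch of the token-auth ping call in Python. The server URL and credentials are placeholders; any Subsonic-compatible server (Navidrome, Airsonic, etc.) should answer it the same way.

```
#!/usr/bin/env python3
"""Minimal sketch of the Subsonic API's token authentication.
Server URL and credentials are made up; needs `pip install requests`."""
import hashlib
import secrets
import requests

SERVER = "https://music.example.lan"   # hypothetical server
USER = "alice"                         # hypothetical account
PASSWORD = "correct-horse"

# Per the API docs: token = md5(password + salt), sent along with the salt.
salt = secrets.token_hex(8)
token = hashlib.md5((PASSWORD + salt).encode()).hexdigest()

resp = requests.get(
    f"{SERVER}/rest/ping.view",
    params={
        "u": USER,
        "t": token,
        "s": salt,
        "v": "1.16.1",        # API version the client claims to speak
        "c": "demo-client",   # arbitrary client name
        "f": "json",          # ask for JSON instead of XML
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["subsonic-response"]["status"])  # "ok" on success
```

Every other endpoint (getArtists, stream, and so on) takes the same auth parameters, which is a big part of why so many clients and servers interoperate.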


When I ran Nextcloud, it broke every other update, mostly because NC didn’t seem to care that anyone had a 7-year-old install being migrated along.


I wish someone would jailbreak the Google Home and Chromecast devices so we don’t have to throw them away in a year when Google abandons them.


Portability is not really an aspect one needs to consider when it comes to a NAS
Hard disagree, and it is one of the best things about ZFS. You can plunk a ZFS pool on another system and be almost certain it will import. Systems die. Having been through several data-loss incidents, I find it much preferable to be able to pull one disk than to have to drag out two or three to transplant a ZFS pool.
Regarding the scrubs, I was trying to indicate that ZFS is more than just a RAID manager; there are advantages to ZFS even on a single disk.
for a home NAS, the goal is maximising data storage capacity without a major hit on performance
If that were entirely true, striping would be the most popular ZFS pool arrangement, since you get both performance and maximum storage.
Edit: this wasn’t to say “you’re wrong”, just that there are different approaches to storage.


A dual disk setup for ZFS (or any other kind of RAID) is super wasteful.
Based on what? I’ve been running ZFS since it was Solaris-only and raidz1/raidz2 are OK, but they come with complexity and performance penalties, and they’re somewhat less portable than a mirror. There are many advantages to simple mirrors: first-response reads, block correction, scrubs, etc.


Scripting it isn’t that tricky.
I’m old and use rsync, with mtime and not (!) operands, and I’m able to keep 7 daily, 4 weekly, and 4 monthly backups. It runs every day, and anything that falls outside that rolling window gets pruned.
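Not my exact script, but here’s the rolling-window idea sketched in Python in case it’s easier to read than a pile of rsync/find flags. It assumes one rsync’d directory per day named YYYY-MM-DD under a made-up path: keep the newest 7 dailies plus the newest backup from each of the last 4 weeks and 4 months, delete the rest.

```
#!/usr/bin/env python3
"""Sketch of a 7-daily / 4-weekly / 4-monthly rolling prune.
Assumes one directory per day named YYYY-MM-DD under BACKUP_ROOT;
the path and naming scheme are assumptions, the retention logic is the point."""
import shutil
from datetime import datetime
from pathlib import Path

BACKUP_ROOT = Path("/srv/backups/daily")   # hypothetical location

def prune(root: Path) -> None:
    dirs = sorted(
        (d for d in root.iterdir() if d.is_dir()),
        key=lambda d: d.name,
        reverse=True,                       # newest first
    )
    keep: set[Path] = set(dirs[:7])         # 7 most recent dailies

    weekly_seen: set[tuple[int, int]] = set()
    monthly_seen: set[tuple[int, int]] = set()
    for d in dirs:
        when = datetime.strptime(d.name, "%Y-%m-%d")
        week = tuple(when.isocalendar()[:2])    # (year, ISO week)
        month = (when.year, when.month)
        if len(weekly_seen) < 4 and week not in weekly_seen:
            weekly_seen.add(week)
            keep.add(d)                     # newest backup of that week
        if len(monthly_seen) < 4 and month not in monthly_seen:
            monthly_seen.add(month)
            keep.add(d)                     # newest backup of that month

    for d in dirs:
        if d not in keep:
            shutil.rmtree(d)                # everything outside the window goes

if __name__ == "__main__":
    prune(BACKUP_ROOT)
```

rsync does the copy, this (or the equivalent find one-liners) does the pruning, and cron glues it together.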


Gate-keeping is a strong word… It also implies that people on the other side of the gate learned something to get there.
20 years ago we were doing what we could manually, and learning the hard way. The tools have improved and by now do most of the heavy lifting for us. And better tools will come along to make things even easier/better. That’s just the way it works.
Compare self-hosting to doing your own mechanic work on a vehicle: there are a lot of tasks most people would benefit from learning to do themselves, but there are dangers to car repair that will never go away, like properly supporting the car on jacks and securing the wheels correctly.
It would be neglectful for the community to say nothing and send people off to get pwned.
My range to the next node is 7 miles.
Lucky you. I get about 900 m max (roughly half a mile), and that’s with a node up on the mountain near me; even then it was intermittent, and message transmissions were delayed by 5 to 10 minutes.
Like I said, great if it works. There aren’t a whole lot of good use cases, and OP’s is not one of them.
AirTags work because there is a huge network of Apple devices registering BT beacons. Meshtastic isn’t really viable unless there are other nodes around on the same channel, as you mentioned.
I have tried to use two LilyGo T-Echos to GPS-track my dog. Range is really poor in the mountains, so I basically couldn’t see the collared device unless it was within 100 to 150 m, which isn’t really helpful.
In a bigger urban area, more nodes didn’t help unless I was on the default channel, so same problem again, this time with extra EMF pollution.
Meshtastic is a great idea, but use cases are really limited.
APIs. Or the same ends are achieved by sharing data between apps through common data storage. But I prefer to be a tourist in my infrastructure; I no longer hand-bomb changes onto systems.
My design pattern is essentially to fold more and more of the container creation into config. Right now I’m using Ansible and it’s nice; more automation means troubleshooting has fewer variables. I’ll sketch the idea below.
I had issues yesterday with a package upgrade across several containers, and it ended up being two config changes. I cycled the apps and was done. That’s it.
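I do this with Ansible, but the pattern translates to anything. Here’s a rough sketch of the same idea using the Docker SDK for Python instead, just to show the shape of it; the service definitions are illustrative, not my actual stack. Everything about a container lives in one config structure, and “deploying” is just re-applying that config.

```
#!/usr/bin/env python3
"""Sketch: container definitions live in config, the script just applies it.
I actually do this with Ansible; this uses the Docker SDK for Python
(`pip install docker`) to show the same pattern. Services here are made up."""
import docker
from docker.errors import NotFound

SERVICES = {
    "blog": {
        "image": "nginx:alpine",
        "ports": {"80/tcp": 8080},
        "volumes": {"/srv/blog": {"bind": "/usr/share/nginx/html", "mode": "ro"}},
    },
    "minecraft": {
        "image": "itzg/minecraft-server",
        "ports": {"25565/tcp": 25565},
        "environment": {"EULA": "TRUE"},
    },
}

def apply(client: docker.DockerClient) -> None:
    for name, spec in SERVICES.items():
        # Recreate from config: remove whatever is running, then start fresh.
        try:
            client.containers.get(name).remove(force=True)
        except NotFound:
            pass
        client.containers.run(
            spec["image"],
            name=name,
            detach=True,
            ports=spec.get("ports"),
            volumes=spec.get("volumes"),
            environment=spec.get("environment"),
            restart_policy={"Name": "unless-stopped"},
        )

if __name__ == "__main__":
    apply(docker.from_env())
```

With Ansible it’s the same thing, just declared in a playbook instead of a dict, which is what makes those “two config changes and cycle the apps” fixes possible.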