Can you give us your config file?
I found the whole experience tremendously frustrating and as you can see from some of the other responses and votes, the community does not consider that to be a reasonable reaction.
Hence why I bailed on the whole thing. I don’t need the grief.
I was told a tool was a resilient approach to drive management. It wasn’t, outside of a very specific set of circumstances.
Your analogy not only makes no sense but is exactly why I’m hostile about this. I’m not an expert in the specific limitations of a niche hard disk technology, so I must be a fucking moron or something, and ridicule is clearly an appropriate reaction.
My idea of a useful tool for dealing with hard disks is not one that loses its shit when a hard disk is temporarily disconnected. That is not a ridiculous assumption. If that’s an issue then that should be made abundantly clear.
I assigned drives based on serial number and passed them through to TrueNAS and it couldn’t handle that reliably. I do not think I was asking for the moon on a stick.
The USB interface is a temporary measure, I was going to move the disks to an internal setup after testing but if it can’t handle something that basic then like fuck am I trusting it with something like migrating from USB SATA to internal SATA.
If I need both disks to access mirrored data then it’s as useful as a chocolate teapot.
I was trying to use it for a mirrored setup with TrueNAS and found it to be flaky to the point of uselessness. I was essentially told that I was using it wrong because I had USB disks. It allowed me to set it up and provided no warnings, but after losing my test data for the fifth time (brand new disks - that wasn’t the issue) I gave up and set up a simple rsync job to mirror data between the two ext4 disks.
If losing power effectively wipes my data then it’s no damn use to me. I’m sure it’s great in a hermetically sealed data centre or something but if I can’t pull one of the mirrored disks and plug it into another machine for data recovery then it’s no damn good to me.
Looks like I angered people by not loving ZFS. I don’t feel like being bagged on further for using it wrong or whatever.
Edit: elaborated, got bagged on. Shocked Pikachu face.
I have given up on ZFS entirely because of how much of a pig it was.


I wish government organisations would host their own Mastodon servers. Get off Twitter.


Unless they’ve changed it in the last month then it’s 50 GB for zip.


I have five users, max, and barely any files. I don’t know which one Nextcloud AIO uses and I don’t care. There’s no wrong answer for such a small deployment. It uses whatever database Nextcloud felt was sensible as the default. They know more about picking the right tool for their requirements than I do.
If I’m building something for myself, then I care.


What’s so WTF about it? I’m repurposing old hardware and testing out the concept. I’m not shelling out a pile of cash on something that might not work for me.


These are internal drives connected to a desktop PSU wired to a USB interface to connect to the laptop.


Haha, yeah. It does make me wonder whether I should bin the whole TrueNAS approach entirely. It seems like a tremendous faff when I could just have the files mirrored to another disk as a backup.


The hard disks are on a separate power supply. The TrueNAS software is running on an old laptop so it effectively has UPS protection.
Yeah, another vote for Caddy. I’ve run nginx as a reverse proxy before and it wasn’t too bad, but Caddy is even easier. Needs naff-all resources too. My Proxmox VM for it has 256 MB of RAM!
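For anyone wondering why it’s easier: a whole reverse proxy in a Caddyfile is a few lines (hostname and upstream port below are made up, adjust for your service):

```
nas.example.com {
    reverse_proxy localhost:8080
}
```

Caddy handles the HTTPS certificate for that hostname automatically, which is most of the faff with nginx.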


Which logs specifically should I be checking?
zpool doesn’t see any pools to import. The system does see the disks but I’m not sure why the disks aren’t being checked for pools.
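Roughly what I’ve been running, for reference (the pool name at the end is just a placeholder):

```shell
# Scan the default device nodes for importable pools - comes back empty for me
zpool import

# Point the scan at stable device names instead; labels can be missed
# when device paths change (e.g. disks moving between USB and SATA)
zpool import -d /dev/disk/by-id

# If a pool does show up in that listing, import it by name
zpool import -d /dev/disk/by-id tank
```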


I’ll give it a shot. I was asking here in case it was a common thing that everyone else knows about (i.e. “Oh, you’re running TrueNAS without a UPS? That’s a non-starter, everyone knows that”).


It seems to either be completely fine and a power cycle makes no difference, or it loses the whole structure. I don’t know how I’m supposed to pull the disks back in. It doesn’t seem to detect that they’re already set up as part of a pool.
The pool I’ve created doesn’t vanish, but my only option for it seems to be “manage devices”, which takes me to the “Add VDEVs to the Pool” menu where my three disks show up as unassigned. The only option presented seems to be to wipe them in order to add them back to the pool.
Trying to search for this stuff doesn’t give me anything useful. I don’t know what the intended behaviour is or what I’m doing wrong. I would expect the disks to come back online and get automatically added back to the pool, but apparently not?


What are your requirements?


I thought it was just the lads in my flat that called them eeeeeeeeeeeepeecees!
At least for me it’s
/etc/caddy/Caddyfile