Contrary to a lot of posts I have seen, I would say ZFS isn't pointless with a single drive. Even if you can't repair corruption with a single drive, knowing something is corrupt in the first place is arguably even more important (you have backups to restore it from, right?).
And ZFS still has a lot of features that are useful regardless, like snapshots, compression, reflinks, and send/receive, and copy-on-write means a crash won't leave the filesystem in an inconsistent state.
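For example (pool/dataset names here are just placeholders), detecting corruption on a single-drive pool is just a scrub away, and the quality-of-life features don't care how many drives you have:

```
# Verify every block against its checksum; on a single drive ZFS can't
# repair what it finds, but "zpool status -v" will name the damaged files
# so you know exactly what to restore from backup.
zpool scrub tank
zpool status -v tank

# Compression and snapshots work the same as on a multi-drive pool.
zfs set compression=lz4 tank
zfs snapshot tank/data@before-upgrade
```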
BTRFS can do all of this too, and I believe it handles low-memory systems better, but since you already have ZFS on your NAS, keeping both machines on ZFS unlocks a lot of possibilities.
For example, if you keep your T110ii running ZFS, you can use tools like syncoid to periodically push snapshots from the Optiplex to the T110.
That way your Optiplex can be the workhorse, and your NAS can keep backups plus periodic snapshots of the important data.
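As a rough sketch (hostnames and dataset names are made up, and this assumes SSH access to the NAS), a cron job on the Optiplex could look like:

```
# Recursively send tank/important and its snapshots to the TrueNAS box.
# Pair this with sanoid or another snapshot tool so there are regular
# snapshots to send in the first place.
syncoid -r tank/important root@truenas.local:backuppool/optiplex/important
```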
I don't have any experience with TrueNAS in particular, but it looks like syncoid works with it. You might need to make sure the pool versions/feature flags match on both ends for send/receive to work.
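If you want to check, something like this on each machine lists which feature flags are enabled/active (pool name is just an example); a feature that is active on the sender but unsupported on the receiver can make the receive fail:

```
zpool get all tank | grep feature@
```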
Alternatively, keep that data on an NFS mount. The SSD in the Optiplex would then only hold the base OS and wouldn't have any data that can't be thrown away. The disadvantage is that your Optiplex now depends on a lot more to keep running (the network and the NAS have to be online all the time).
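A minimal /etc/fstab sketch for that setup (host and paths are made up):

```
# Bulk data lives on the NAS; the local SSD only holds the OS.
# _netdev delays the mount until the network is up.
truenas.local:/mnt/tank/vmdata  /srv/vmdata  nfs  defaults,_netdev  0  0
```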
If you need HA for the VMs, you likely need distributed storage for them to run on. There's no point in building an HA VM solution if it just moves the single point of failure to your NAS.
Personally I like Harvester, but the minimum requirements are probably beyond what your hardware can handle.
Since you are already on TrueNAS Scale, have you looked at running TrueNAS Scale on the Optiplex as well and using replication tasks for backups?
Like most have said, it is best to stay away from ZFS deduplication. Especially if your data set is media, the chances of an entire ZFS block being identical to any other are small unless you somehow have multiple copies of the same content.
Imagine two mp3s with exactly the same audio but slightly different artist metadata. If that makes the file a single byte longer or shorter at the beginning, ZFS won't be able to deduplicate anything, even if the file spans many blocks: shifting the rest of the file by even one byte is enough to change the checksum of every block in it.
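You can see the effect with a quick throwaway experiment: prepend a single byte to a copy of a file, chop both into fixed 128K "blocks", and compare the checksums; none of them match anymore.

```
head -c 1M /dev/urandom > a          # stand-in for the original file
{ printf 'X'; cat a; } > b           # same content, shifted by one byte
split -b 128K a a_blk_; split -b 128K b b_blk_
sha256sum a_blk_* b_blk_*            # no block hash from a matches any from b
```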
To contrast with ZFS, enterprise backup/NAS appliances with deduplication usually do a lot more than fixed-block checks. They typically search for duplicate data using sliding window sizes/offsets, which finds far more of it.
There are still some use cases where ZFS dedup can help, like multiple full backups of VMs. A VM image has a fixed size, so the offset issue above doesn't apply. But beware that enabling deduplication on even a single ZFS filesystem affects the entire pool, including filesystems that have deduplication disabled. The deduplication table is global to the pool, and once you have turned it on you can't really get rid of it. If you get into a situation where there isn't enough memory to keep the deduplication table in RAM, ZFS will grind to a halt, and the only way to completely remove deduplication is to copy all of your data to a new pool.
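For reference, dedup is toggled per dataset but tracked pool-wide (names below are examples only):

```
zfs set dedup=on tank/vm-backups    # only new writes to this dataset are deduped
zpool status -D tank                # shows dedup table (DDT) stats for the whole pool
zfs set dedup=off tank/vm-backups   # stops deduping new writes, but existing DDT entries remain
```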
If you think the feature would still be useful for you, you might want to wait for the OpenZFS 2.3 release (which isn't far off) and its new fast dedup feature, which fixes, or at least mitigates, a lot of the major issues with ZFS dedup.
More info on the fast dedup feature here https://github.com/openzfs/zfs/discussions/15896