

Thanks. Didn’t know these existed.
Mail servers?
How are you finding that these days? I thought all the anti-spam stuff meant self-hosted email just wasn’t worth the hassle any more?
Hmm, I set up a Proxmox machine a while back because, well, all the cool kids seemed to do it - and there was plenty of “support” on youtube
I found Incus and it just seemed better, but it was harder to find info on (back then) and seemed a little unready
Now, I regret not sticking with my gut instinct as I’ve got to basically rip out Proxmox to get Incus in, which means all my VMs are prisoners (and us: 1 VM is Home Assistant!)
So, do you know if it’s possible to migrate my VMs across to Incus, or is it literally wipe drive, start again?
(Obviously the data in each VM can be backed up & restored into new VMs)
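I’ve not done that exact migration myself, but Incus ships an incus-migrate tool that can import an existing disk image as a VM, so you shouldn’t need to wipe and start again. A rough sketch, assuming Proxmox directory storage (the paths, VM ID and hostname are placeholders):

```bash
# On the Proxmox host: export the VM's disk to a portable image.
# Paths and the VM ID are examples - with directory storage the
# disks live under /var/lib/vz/images/<vmid>/; ZFS/LVM backends
# need converting from the zvol/LV device instead.
qemu-img convert -f raw -O qcow2 \
  /var/lib/vz/images/100/vm-100-disk-0.raw \
  /tmp/homeassistant.qcow2

# Copy the image to the Incus host, then run the interactive
# migration tool there - it can import the image as a new VM:
scp /tmp/homeassistant.qcow2 incus-host:/tmp/
ssh -t incus-host incus-migrate
```

Worst case, your data-level backup & restore plan is still the fallback.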
Ah, ok, good to know, thanks
I think you’ve misunderstood
Ok, OMV needs a separate (small) boot drive to install on (ie consider an M.2 / SSD on a USB adapter)
But, then all your (large) storage is used for the NAS.
OMV will run Docker containers, but their data would also be pointed to the large NAS storage.
| Small drive | Large storage |
|-------------|---------------|
| OMV         | Your files    |
| Docker      | Data, etc.    |
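To make the table concrete, here’s a minimal sketch of what “pointing container data at the large storage” looks like (the mount path and folder names are just examples - OMV generates its own /srv/dev-disk-… paths):

```bash
# OMV itself lives on the small boot drive; everything persistent
# goes on the big data disk. The mount point below is an example -
# OMV mounts drives under /srv/dev-disk-by-uuid-<...> (or by-label).
DATA=/srv/dev-disk-by-label-data

# Container state and the files it serves both land on the NAS pool:
docker run -d --name syncthing \
  -p 8384:8384 -p 22000:22000 \
  -v "$DATA/appdata/syncthing:/var/syncthing" \
  -v "$DATA/files:/data" \
  syncthing/syncthing
```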
Why the MTU change?
I always prefer bare metal for the core NAS functionality. There’s no benefit in adding a hypervisor layer just to create an NFS / SMB / iSCSI share
OMV comes with its own bare metal installer, based on Debian, so it’s as stable as a rock.
If you’ve used it before, you’re probably aware that it needs its own drive to install on, then everything else is the bulk storage pool… I’ve used various USB / mSATA / M.2 drives over the years and found it’s a really good way to segregate things.
I stopped using OMV when - IMO - “core” functions I was using (ie syncthing) became containers, because I have no use for that level of abstraction (but it’s less work for the OMV dev to maintain addons, so fair enough)
So you don’t have to install docker yourself - OMV automatically handles it for you.
How much OMV’s moved on, I don’t know, but I thought it would simplify your setup.
You should have all your data stored separately - it shouldn’t be locked inside containers - and using a VM hosted on the device to serve the data is a little convoluted
I personally don’t like TrueNAS - I’m not a hater, it just doesn’t float my boat (but I suspect someone will rage-downvote me 😉)
So, as an alternative approach, have a look at OpenMediaVault
It’s basically a Debian-based NAS designed for DIY systems: it serves the local drives but also runs Docker, so it feels like it might be a better fit for you.
Definitely suspect.
You should be able to let memtest run for days with no problems, so a reboot points to either a faulty stick or possibly a faulty motherboard slot.
Swap the RAM between slots to isolate the root cause
GeoIP blocking
You mention a firewall, but for any open ports, still restrict the source IPs to limited ranges rather than “all”.
Personally, at my home’s edge firewall I have pfSense with pfBlocker and that uses a GeoIP database, so I can just pick the countries I want to allow in… you want to block as early as possible (ie at the VPS?), so you might have to look at options
If your family are in the same region, then it should be relatively easy to limit to a few ranges on the VPS
Here’s a quick search result: https://lite.ip2location.com/ip-address-ranges-by-country
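If the VPS is plain Linux rather than something with a pfSense-style UI, ipset + iptables is one lightweight way to do the same thing. A minimal sketch, assuming a downloaded country list and 443 as the open port (both placeholders):

```bash
# Build an allow-list from a country CIDR file (one range per
# line, e.g. downloaded from ip2location), then only accept
# traffic to your open port from those ranges:
ipset create allowed hash:net
while read -r cidr; do
  ipset add allowed "$cidr"
done < country-ranges.txt

iptables -A INPUT -p tcp --dport 443 \
  -m set --match-set allowed src -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j DROP
```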
I have basically the same setup, but with Radicale.
Radicale is really lightweight, but quite basic - which is fine for my needs.
Out of curiosity, what pulled you to use Baikal?
Ruckus … R500 I think (can’t check atm) from ebay.
MIMO, multiple SSIDs, etc, so they work really well with the load of 2.4GHz wifi home automation gadgets I have around the house, with 2 of us working from home on Zoom / Teams calls.
Reflash them with the “unleashed” firmware and you don’t need their controller.
You’ll probably need 2 devices: one actually connected to the external line (ie the modem part) and then your actual router / wifi access point(s).
Personally, I have a Fritzbox router configured into bridge mode so it just deals with the line signal and passes all the PPPoE / internet comms to a pfSense box I built (ie anything… an old thin client, new microATX, etc…)
I then have separate POE WAPs for wifi around the house, but pfSense can deal with radio drivers too if separate WAPs are too much for now.
This way, if something goes wrong I can always go back to a single domestic router, keep the family happy, download anything I need to fix my setup and then move forwards again.
I like having separate components with an up/downgrade path
I’ve not looked into it properly yet, but - considering this is still free software - I don’t believe that level of granularity exists.
So, if I wanted to share my holiday photos from last week with 1 friend, and the photos from someone’s party to different friends… nope.
That’s an interesting point…
I’d like to share some (holiday) photos with my friends & family, so I can put those onto Pixelfed / Friendica / etc… I don’t necessarily want to share all the photos…
And that’s using the cloud.
Job Done. The self-hosting + federated cloud future is here!
Rejoice.
Have a look at Patrick Kennedy’s reviews on yoochoob under ServeTheHome - there’s some fantastic hardware available now
I ended up buying something from AliExpress, which I was initially reluctant to do - but Patrick’s reviews convinced me
For detailed reviews his site’s got the details from the videos: https://www.servethehome.com/
It depends on the sync / backup software
Syncthing uses a stored list of hashes (which is why it takes a long time for the initial scan), then it can monitor filesystem activity for changes to know what to sync.
Rsync compares all source and destination files on each run - by default a quick check on size and modification time (optionally full checksums) - then its delta-transfer algorithm only sends the blocks that changed.
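For example, the difference between rsync’s default quick check and full checksumming:

```bash
# Default quick check: compare file size + modification time only
rsync -a /source/ /backup/

# Force full-file checksums - much slower, but catches files whose
# contents changed without the size or mtime changing
rsync -a --checksum /source/ /backup/
```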
Then, backup software does… whatever.
Back in the day on FAT filesystems they used the archive bit on each file’s metadata, which was (IIRC) set during a backup and reset with any writes to that file. The next backup could then just backup those files.
Your current strategy is ok - just doing an offline backup after a bulk update - maybe it’s just a case of making that more robust by automating it…?
I suspect you have quite a large archive as photos don’t compress well, and +2TBs won’t disappear with dedupe… so, it’s mostly about long term archival rather than highly dynamic data changes.
So that +2TB… do you drop those files in amongst everything else, or do you have 2 separate locations ie, “My Photos” + “To Be Organised”?
Maybe only backup “MyPhotos” once a year / quarter (for example), but fully sync “To Be Organised”… then you’ve reduced risk, and volume of backup data…?
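As a rough sketch of that split in cron (all paths and schedules are just examples):

```bash
# Hypothetical paths - crontab entries splitting the two folders:

# Nightly at 02:00: mirror the churny "To Be Organised" folder
0 2 * * * rsync -a --delete /data/to-be-organised/ /mnt/backup/to-be-organised/

# Quarterly (1st of Jan/Apr/Jul/Oct at 03:00): refresh the archive
0 3 1 1,4,7,10 * rsync -a /data/my-photos/ /mnt/backup/my-photos/
```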
Got an example in BASH?
Edit: someone else has a link