• 3 Posts
  • 98 Comments
Joined 2 years ago
Cake day: June 12th, 2023

  • If you can’t access the hardware physically and you don’t have someone on site who can work on it, just drop the idea and get a VPS or something else cloud-based, no matter what hardware you plan to use. Anything and everything can happen: a broken memory module, an odd power surge, rodents or bugs messing with the system, moisture or a straight-up water leak corroding something, a fan failure overheating the thing and so on.

    There’s one fact about this business that I’ve learned over the 20-something years I’ve worked in IT: all hardware fails. No exceptions. The only question is ‘when’. And when the time comes, you need someone with physical access to the stuff.

    I mean, sure, your laptop might run just fine for several years without problems, or it might pick up shipping damage over those 3000 km and break in a week. In either case, unless you have someone hands-on with the machine, it’s not going to do much.


  • It would be difficult to recommend Immich as a gallery app to someone who doesn’t have experience in selfhosting.

    You already have plenty of responses, but Immich is not just a gallery app. I’m in the process of migrating my photo libraries to Immich, and that’s 20+ years of memories. Some were originally taken on film and then scanned, others are old enough that camera phones just didn’t exist and we had “compact” digital cameras. Then there are photos taken with a DSLR and a drone, and obviously all of the devices have changed multiple times over the years, so relying on just a single device is simply not going to work over time.

    All of those require some system other than the device itself to store, organize, back up and enjoy them. And, as I have a family, storing them on just my desktop would mean that no one else around would have easy access to them. With Immich I can easily share photos around when I carry my DSLR to a family gathering or whatever.

    And then there’s the obvious matter of having enough storage. Even my desktop doesn’t have a spare terabyte right now to store everything, so I need the hardware anyway, and it just makes sense to keep the photos separate from my workstation, which I can now do whatever I want with without worrying I’d lose any of those precious memories. As for the server part, I have one around anyway for Pi-hole, Home Assistant, Nextcloud to store/back up other data and so on, so for me it’s the most convenient approach to run the Immich server on there too.

    And then there’s the backup side of things. I’ve tried manual backups with various tools over the years, and it’s just not going to work for me. I either forget, or life gets in the way, or something else happens, and then I’m several days or weeks behind ‘schedule’. With a dedicated server I don’t have to do anything; everything runs automatically in the background while I’m sleeping or doing something more interesting than copying over a bunch of files.




  • You are absolutely correct. I don’t mind the few GBs’ worth of data for the operating system; a single video from my drone is likely more than that, and it’s not something you can deduplicate or compress very well. If I really wanted to, I think it should be possible to squeeze the operating system below 2GB, but it’s just not worth the effort. I just want the 20+ years of memories I have on the thing to remain.






  • Well, that’s an interesting approach.

    First, you would need either shared storage, like a NAS, for all your devices, or for them all to have an equal amount of storage for your files so you can just copy everything everywhere locally. Personally I would go with a NAS, but storage in general has quite a few considerations, so depending on the size of your data, bandwidth, hardware and everything else, something different might suit your needs better.

    For the operating system, you would of course need to have the same OS installed on each device, and they would all need to run the same architecture (x86 most likely). With Linux you can just copy your home directory over via shared storage and that takes care of most things, like app settings and preferences. But keeping the installed software in sync and updated is a bit more tricky. You could just enable automatic updates and create a script to match installed packages between systems, as sketched below (Debian-based distros can use dpkg --get-selections and --set-selections, others have similar tools), so you would have pretty closely matching environments everywhere.
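
    As a rough sketch (assuming Debian/Ubuntu on both ends and SSH access between them; the hostname “otherhost” is a placeholder), the package-matching script could look something like this:

        #!/bin/sh
        # Sketch: mirror this machine's package selections to another
        # Debian/Ubuntu host. "otherhost" is a placeholder hostname.
        set -e
        dpkg --get-selections > /tmp/selections.txt
        scp /tmp/selections.txt otherhost:/tmp/selections.txt
        ssh otherhost 'sudo dpkg --set-selections < /tmp/selections.txt \
          && sudo apt-get -y dselect-upgrade'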

    Or, if you really want to keep everything exactly the same, you could use Puppet or similar to force your machines into the same mold and manage software installations, configuration, updates and everything else via that. It has a pretty steep learning curve, but it’s possible.

    But if you want to match x86 workstations with handheld ARM devices, it’s not going to work very well. Usage patterns are wildly different, software availability is hit or miss, and the hardware in general differs enough that you can’t use the same configs for everything.

    Maybe the closest thing would be to host web-based applications for everything and use only those, but that heavily limits what you can actually do and doesn’t give you much flexibility with hardware requirements, meaning either that your slower devices crawl to a halt or that your powerful workstation sits idle no matter what you do.

    Maybe a better approach would be to set up a remote desktop environment on your desktop and just hop onto that remotely whenever needed. That way you would have the power on demand but could still get the benefits of portable devices.


  • Better internet connection - a lot of hosts have 40Gbps connections now, and it’s a data center grade connection with a lower contention ratio.

    And also better infrastructure in general. VPSs run in a data center with (most likely) failsafes for everything: multiple internet connections, a pretty beefy setup for power redundancy with big battery banks and generators, multiple servers to take your stuff over in case a single unit fails, climate control with multiple units and so on.

    I could get a 10Gbps connection (or theoretically even more) to my home, but if I wanted all the toys the big players are working with, that would mean investing at least several tens of thousands of euros to get anywhere, and more likely a hundred or two hundred thousand to build anything even near the same level. And that doesn’t include things like mechanics to maintain the generators, security staff to guarantee physical safety and so on, so even if I had a few million to throw at a project like this, it wouldn’t last too long.

    So, instead of all that, I have a VPS from Hetzner (I’ve been a happy customer of theirs for a long time) for less than a hamburger and fries per month, and that keeps my stuff running just fine. Obviously there are caveats to watch for, like backups in case Hetzner suddenly ceases to exist for whatever reason, but the alternative might as well be setting up a server farm on the Moon, as that’s about as feasible as matching the reliability I get from them for ~100€/year.


  • Well, sure, I could leave just the Z-Wave endpoint at home and move the server to the cloud, but that would mean that none of my automations would work if the network happened to be down. My ISP is pretty damn good at keeping me online, but this is the one thing about my home automation I’m not willing to compromise on: everything has to be local and not dependent on any kind of connectivity to the outside.

    Sure, things rely on the infrastructure (networking very much included) I have in place in my house, which is not perfect by any stretch, and my HA server in itself would most likely be ‘safer’ in the cloud, but it is still my home automation and I want to keep it local to avoid connectivity issues, latency and other stuff beyond my control.

    And sure, should my server PSU die tomorrow, it would bring the whole system down. As I mentioned, the setup is far from perfect, but it’s built the way I like it and, for me, this is the best approach. You may weigh pros/cons differently, and that’s perfectly fine. I have my reasons and you have yours, both equally valid.

    But I’d still rather not mess with hardware; I just need at least one physical server and some other stuff around to keep things running the way I like them.


  • Without a doubt a lot do, but I personally couldn’t care less. I have a server at home, but that’s just a necessary evil. If I could, I’d just rent hardware for everything, but there are technical and obviously financial limitations to that.

    And hosting pretty much anything is practically identical regardless of the platform. Sure, there are exceptions, like my Home Assistant server with Z-Wave, which needs to be physically near my other stuff, but things like Fediverse instances and other browser-based stuff are exactly the same to maintain regardless of the underlying platform.


  • My personal opinions, not facts:

    For hdd’s to be used as long term storage, what is usually the rule of thumb? Are there any recommendations on what drives are usually better for this?

    Anything with a long history, like HGST or WD (the Red series, preferably). Backblaze, among others, publishes data on drive longevity, so look at what they report. On eBay (and elsewhere) there are refurbished drives available which look pretty promising, but I have no personal experience with those.

    Considering this is going to store personal documents and photos, is RAID a must in your opinion? And if so, which configuration?

    Depends heavily on your backup scheme, the amount of data and your available bandwidth (among other things). RAID protects you against a single point of failure in storage. Without RAID, you need to replace the drive and pull data back from backups, and while that’s happening you don’t have access to the things you stored on the failed disk. With RAID you can keep using the environment without interruption while waiting a day or two for a replacement. If you have a fast connection that can download your backups in less than 24 hours, it might be worth the money to skip RAID, but if it takes a week or two to pull the data back, the additional cost of RAID might be worth it. Also, if you change a lot of data during the day, it’s possible that a drive failure happens before a backup has finished, and in that case some data is potentially lost.

    As for which RAID level to use, it’s a balancing act. Personally I like to run things with RAID5 or 6, even though I have a pretty decent uplink. You also need to consider what the acceptable downtime for your services is. If you can’t access all of your photos for 48 hours, it’s not the end of the world, but if your home automation is offline, it can at least increase your electric bill by some amount and maybe cause some inconvenience, depending on how your setup is built.

    And in case RAID would be required, is ubuntu server good enough for this? or using something such as unraid is a must?

    Ubuntu Server is plenty good enough. You can use software RAID (mdadm) or LVM for a traditional RAID setup, or opt for a more modern approach like ZFS. A minimal mdadm sketch follows.
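
    As a sketch (the device names are placeholders and will differ on your system), a three-disk software RAID5 array with mdadm looks something like this:

        # Sketch: create a 3-disk RAID5 array on Ubuntu Server.
        # /dev/sdb, /dev/sdc and /dev/sdd are placeholder device names.
        sudo apt-get install mdadm
        sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 \
          /dev/sdb /dev/sdc /dev/sdd
        sudo mkfs.ext4 /dev/md0
        sudo mount /dev/md0 /mnt/storage
        # Persist the array config so it assembles on boot
        sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
        sudo update-initramfs -u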

    I was thinking of probably trying to sell the 1660 super while it has some market value. However, I was never able to have the server completely headless. Is there a way to make this happen with a msi tomahawk b450? Or is only possible with an APU (such as 5600g)?

    No idea. My server has on-board graphics, but I haven’t used it for years; it’s a nice option to have in case something goes really wrong, though. You can still sell your 1660 and replace it with the cheapest GPU you can find on eBay or wherever; as long as you’re comfortable with the console, you can fix things with anything that can output plain text. If your motherboard has separate remote management (generally not available in consumer-grade stuff), it might be enough to skip any kind of GPU, but personally I would not run that kind of setup even if remote management/console were available.

    If you guys find any glaring issues with my setup

    I don’t know about actual issues, but I have spinning hard drives a lot older than my kids which still run just fine. Spinning rust is pretty robust (at least in sub-4TB capacities), so unless you really need the speed, traditional hard drives still have their place. Sure, a ton more spinning drives have failed on me than SSDs, but I have working hard drives older than SSDs (at least in the sense of what we have now) have existed as a technology, so claiming that SSDs are more robust is, at least in my experience, just misunderstood statistics.


  • this will limit ZFS ARC to 16GiB.

    But if I have 32GB to start with, that’s still quite a lot, and, as mentioned, my current usage pattern doesn’t really benefit from ZFS over any other common filesystem.
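
    For reference (on Linux with OpenZFS; 17179869184 is just 16GiB expressed in bytes), that ARC cap is set via a module parameter, something like:

        # Sketch: cap the ZFS ARC at 16 GiB (17179869184 bytes) on Linux.
        echo "options zfs zfs_arc_max=17179869184" | \
          sudo tee /etc/modprobe.d/zfs.conf
        sudo update-initramfs -u   # takes effect on next boot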

    As for using a simple fs on LVM, do you not care about data integrity?

    Where did you get that from? LVM has options to create RAID volumes and, again as mentioned, I can mix and match those with software RAID however I like (see the sketch below). Also, a single host, no matter how sophisticated its filesystems and RAID setups, doesn’t really matter when talking about keeping data safe; that’s what backups are for, and that’s a whole other discussion.
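
    As a sketch (the volume group name “vg0” and the size are placeholders for your own setup), a mirrored LVM volume is one lvcreate away:

        # Sketch: create a mirrored (RAID1) logical volume with LVM.
        # "vg0" and the 100G size are placeholders.
        sudo lvcreate --type raid1 -m 1 -L 100G -n mirrored vg0
        sudo mkfs.ext4 /dev/vg0/mirrored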


  • ZFS in general is pretty memory hungry. I set up my Proxmox server with ZFS pools a while ago and now I kind of regret it. ZFS in itself is very nice and has a ton of useful features, but I just don’t have the hardware or the usage pattern to benefit from it that much on my server. I’d rather have that thing running on LVM and/or software RAID to have more usable memory for my VMs. That’s one of the projects I’ve been planning for the server: replace the ZFS pools with something which suits my usage patterns better. But that’s a whole other story and requires some spare money and spare time, neither of which I really have at hand right now.


  • Steps 1, 2, 4, 5 and 7 just need some time. I have the stuff pretty much thought out and it’s just a matter of actually doing the things. I was sick for the majority of November; had it not been for that, those would already have been completed. The rest need either planning or money. The Immich setup would ideally need 2x2TB SSDs (in a RAID1 setup), but that’s about 500€ out of pocket, and the Home Assistant setup needs time to actually work with it and to plan things forward. Additionally, the HA setup could use a floor thermostat or two, some ESPHome gadgets and so on, so it needs some money as well.

    The majority of the stuff should be taken care of by February; the rest is more or less open.


  • A ton.

    1. Set up email and website hosting on a VPS to replace current setup
    2. Get more solid state storage for my home server and finish the Immich setup (import photos and all that)
    3. Set up proper backups for the home server
    4. Migrate current Unifi controller to home server
    5. Local VPN server to access Home Assistant and other services even when travelling (a rough sketch follows after this list)
    6. Spend some time with my Home Assistant server: fine-tune automations, add some more, add sensors and more controls, maybe add a wall-mounted tablet for managing the thing and so on. It’ll never end, and it’ll need a visit or two from an electrician too
    7. Better isolation for IoT things on my network. I already have a separate VLAN for them without internet access, but the project is a bit incomplete
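
    For item 5, a minimal sketch of what the VPN server could look like (assuming WireGuard on the home server; the keys, addresses and port below are placeholders, not real values):

        # Sketch: bring up a minimal WireGuard server on the home box.
        sudo apt-get install wireguard
        # Generate a server keypair
        wg genkey | sudo tee /etc/wireguard/server.key | wg pubkey \
          | sudo tee /etc/wireguard/server.pub
        sudo tee /etc/wireguard/wg0.conf >/dev/null <<'EOF'
        [Interface]
        Address = 10.8.0.1/24
        ListenPort = 51820
        PrivateKey = <contents of server.key>

        [Peer]
        # Phone/laptop on the road
        PublicKey = <client public key>
        AllowedIPs = 10.8.0.2/32
        EOF
        sudo systemctl enable --now wg-quick@wg0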

    And then “would be nice” stuff:

    1. Switch the Dahua NVR to something else. The current one works in the sense that it stores video, but the movement tracking isn’t really perfect, and the whole standalone NVR box is a bit lacking both in speed and in features
    2. Replace the whole home server (currently running Proxmox, which in itself is fine). It’s an old server I got from work, and it does work, but it’s not redundant and it’s getting old, so something less power hungry and less noisy would be nice. It just takes some money and time, neither of which I have in surplus, so we’ll see.
    3. Move Home Assistant from a Raspberry Pi to the home server. Maybe add Zigbee capabilities next to Z-Wave and WiFi.

    And likely a ton more which I don’t remember right now. Money and especially spare time to tinker are just lacking.


  • Use the friend’s network as a VPN/proxy/whatever to obscure my home IP address

    And then your friend is responsible for your actions on the internet. The end goal you described is so vague that I, at least, wouldn’t let your Raspberry Pi connect to my network.

    There are a ton of VPN services which give you the end result you want without potential liability or other issues for your friend. If you just want to tinker, this thread has quite a bit of information to get you started.


  • So, you want the traffic to go the other way around: traffic from the HomeNet should go to the internet via the FriendNet, right? In that case, if you want the raspberry box to act as a proxy (or VPN) server, you need to forward the relevant ports on the FriendNet to your Raspberry Pi so that your HomeComputer can connect to the raspberry box.

    Or you can set up a VPN and route traffic through that in the other direction. Tunnels work both ways, so it’s possible to set up a route/HTTP proxy/whatever through the VPN tunnel to the internet, even if the raspberry box is the client from the VPN server’s point of view.

    I don’t immediately see the benefit of tunneling your traffic through the FriendNet to the internet, unless you’re trying to bypass some IP block or do something else potentially malicious, or at least in the gray area. But in any case, you need a method for your proxy client to connect to the proxy server, and in generic consumer space that means firewall rules and/or port forwarding (although both are firewall rules, strictly speaking) so that your proxy server on the raspberry box is visible to the internet in the first place.

    Once your proxy server is visible to the internet, it’s just a matter of writing a few scripts for the server box to tell the client end “my public IP is <a.b.c.d>” and to change the proxy client configuration accordingly, but you still need some kind of setup on the HomeNet to receive that, likely a dyndns service and maybe some port forwarding. A rough sketch of such a script follows.
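
    Something like this (a sketch only; the “home-box” hostname, the file path and the proxy-client service name are hypothetical, and it assumes SSH access from the raspberry box back to the HomeNet):

        #!/bin/sh
        # Sketch: tell the client end what our current public IP is.
        # "home-box", the path and the "proxy-client" service are all
        # hypothetical placeholders for your own setup.
        IP=$(curl -s https://ifconfig.me)
        ssh home-box "echo $IP | sudo tee /etc/proxy-client/server-ip \
          >/dev/null && sudo systemctl restart proxy-client"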

    Again, I personally would set something like that up with a VPN tunnel from the raspberry box to the HomeServer, but as I don’t really understand what you’re going after with a setup like this, it’s impossible to suggest anything else.


  • So, you want a box which you can connect to any network around, and then use some other device to connect to your raspberry box, which redirects your traffic through your home connection to the internet?

    The easiest way (at least for me) would be to create a VPN server on your home network. Have a dyndns setup on your home network so you can reach it in the first place, open/redirect a port for OpenVPN (or whatever you like) and have a client running on the raspberry. After that you can connect your other device to the raspberry box (via WiFi or ethernet) and create IP forwarding/NAT rules so that all of its traffic goes to the raspberry box, then to your home server via the VPN tunnel and from there to the internet. A sketch of the forwarding part is below.
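
    The forwarding/NAT part on the raspberry could look roughly like this (assuming the VPN tunnel interface is tun0 and the client-facing side is eth0; interface names vary per setup):

        # Sketch: route clients connected to the Pi out through the VPN.
        # tun0 = VPN tunnel, eth0 = client-facing LAN; adjust to taste.
        sudo sysctl -w net.ipv4.ip_forward=1
        sudo iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
        sudo iptables -A FORWARD -i eth0 -o tun0 -j ACCEPT
        sudo iptables -A FORWARD -i tun0 -o eth0 \
          -m state --state RELATED,ESTABLISHED -j ACCEPT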

    You can use any HTTP proxy with this, or just let the network do its thing and tunnel everything via your home connection; in either case the internet only sees your encrypted VPN traffic to your home network, and everything else originates from your home connection.

    You can replace the VPN with just an HTTP proxy, but both cost about the same in practice, so your network latency, bandwidth and other stuff don’t really change regardless of the approach. And if you just want the HTTP proxy, you can forward a port on your home network for the proxy and use that on your devices directly, achieving the very same end result without the raspberry box or any extra hardware.

    And obviously, if you go with VPN tunneling for everything, you don’t need the raspberry for that either; just a VPN client which connects to your home network, and that’s it. The case where you have devices which can’t use a VPN directly would benefit from the raspberry box, but if you can already set up an HTTP proxy for the thing you’re actually using, I don’t see the benefit of running separate hardware for any of this.

    Some port forwarding or opening of firewall ports is needed in any scenario, but there are a ton of options to limit who can access your stuff. However, this goes way beyond the scope of your question, and more details are necessary on what you’re actually trying to achieve with a setup like this.