If you don’t need to host but can run locally, GPT4ALL is nice: it has several models to download and play with, each with its own purpose and description, and it doesn’t require a GPU.
I self-host services as much as possible for multiple reasons: learning, staying up to date with so many technologies through hands-on experience, and security / peace of mind. Knowing my 3-2-1 backup solution covers my entire infrastructure means I feel far less pressure to hand my data to unknown entities, no matter how trustworthy, and it gives me peace of mind knowing I control every step of the process and how to troubleshoot and fix problems. I’m not an expert and rely heavily on online resources to get me to a comfortable spot, but I also don’t feel helpless when something breaks.
If the choice is trusting an encrypted backup of all my sensitive passwords, passkeys, and recovery information on someone else’s server, or having to restore a machine, container, VM, etc. from a backup after a critical failure, I’ll choose the second, because no matter how strong the encryption, someone somewhere will be able to break it given enough time. I don’t care if accelerated hardware or quantum attacks would still take millennia. Not having that payload out in the wild at all is the only way to prevent it being cracked.
Wait, this sounds awesome! I haven’t had time to dig into it more yet but does this mean I could host my own “pod” allowing my data to stay where I want it and be backed up how I want, while allowing my fediverse identity to be used on multiple different federated services?
I’d prefer GNU’s ddrescue, just because I find it more robust and it has better progress output. It’s functionally the same interface, but it lets you use a mapfile to resume sessions should anything interrupt the copy.
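For reference, a minimal sketch of how I’d use it (device names are placeholders, adjust for your source and destination disks):

# first pass: grab everything readable quickly, skipping the slow scraping phase
ddrescue -f -n /dev/sdX /dev/sdY rescue.map
# second pass: retry the bad areas a few times, reusing the same mapfile
ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map

If the copy gets interrupted, rerunning the same command with the same mapfile picks up where it left off instead of starting over.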
I’d argue against this, because you never know what’s going to happen, and the conventional wisdom for appliances like this is to just back up any important configs, back up your containers and VMs, then do a fresh install from the latest install media on the new disk, followed by a restore of the backups. It might take a little more time, but it’s negligible, and it gives you an opportunity to review your current configs, make necessary changes, and ensure your backups are working as intended.
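The “back up the configs” step can be as simple as something like this before the swap (paths are placeholders, and most appliances ship their own export tooling that’s worth preferring):

# stash the configs somewhere off the appliance before the fresh install
tar czf /mnt/backup/configs-$(date +%F).tar.gz /etc
# containers and VMs get backed up with the platform's own tooling,
# then restored onto the fresh install on the new disk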
I have the same model, powering 3 machines with an average load of ~125 W when it switches to battery power. I have a NUT host on one of the servers that broadcasts the outage to the other machines; the whole stack shuts down after 30 seconds, and the UPS switches off at the very end. It’s gone through about 4 or 5 true power events now, and double that in testing (overzealous, I know), but the UPS is 2.5 years old now and doing just fine. I have a spare battery because I heard ~3 years is normal, but so far there’s no indication it’s reached replacement yet.
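For anyone wanting to replicate that, here’s the rough shape of the NUT side (names, passwords, and paths are illustrative, not my actual config):

# /etc/nut/upsmon.conf on the NUT host (illustrative values)
MONITOR myups@localhost 1 upsmon secretpass primary
SHUTDOWNCMD "/sbin/shutdown -h +0"
NOTIFYCMD /usr/sbin/upssched
NOTIFYFLAG ONBATT SYSLOG+EXEC
NOTIFYFLAG ONLINE SYSLOG+EXEC

# /etc/nut/upssched.conf: start a 30s timer on battery, cancel it if power returns
CMDSCRIPT /etc/nut/upssched-cmd
PIPEFN /run/nut/upssched.pipe
LOCKFN /run/nut/upssched.lock
AT ONBATT * START-TIMER onbatt 30
AT ONLINE * CANCEL-TIMER onbatt

The upssched-cmd script then calls upsmon -c fsd when the onbatt timer fires, which forces the shutdown on every machine monitoring that UPS; the other machines just run upsmon in secondary mode pointed at the host.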
I think the important thing with these is to not run them down to 0. They’re only good for one event at a time and shouldn’t be switching over constantly without basically a full day of recharging in between (more like 16 hours, realistically).
I can see consistent brownouts and events being a problem for these little machines. I’m planning on upgrading to a rack solution soon and relegating this one to my desktop in the other room (with a fresh battery of course).
I agree, except for boosts. Those should die, and up/downvotes should just be the thing driving aggregation. Nobody boosts enough to make a difference anyway, and some apps just tie the boost button to the upvote button so the feature actually gets used as expected (if enabled). It’s already hard enough to get regular people on board here, with all the instance and account confusion, hit-or-miss syncing options, and instances sometimes disappearing.
Maybe nobody keeps a complete file? If content were split into encrypted chunks spread across machines, no single host would hold a complete copy of anything, let alone be able to read it. There’s already so much risk for hosts here; I’m not sure there’s a way to be safer without invasive technologies.
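Purely to illustrate the idea with standard tools (this is not how any real network does it, and key handling is hand-waved):

# encrypt first, then split into 1 MiB chunks that could live on different hosts
openssl enc -aes-256-cbc -pbkdf2 -in video.mp4 -out video.enc -pass file:key.txt
split -b 1M video.enc chunk_
# any single chunk is useless; reassembly needs every chunk plus the key
cat chunk_* > rejoined.enc
openssl enc -d -aes-256-cbc -pbkdf2 -in rejoined.enc -out video.mp4 -pass file:key.txt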
What are the features you need from your host? If it’s just remote syncing, why not set up a small Debian system and install git on it? You can manage security on the box itself. Do you need the overhead of GitLab at all?
I say this because I did try hosting my own GitLab, Gitea, Gogs, etc., and I found I never needed any of the features. The whole point was to have a single remote that could be backed up and redeployed easily in disaster situations; otherwise all my local work just needed simple tracking. I wrote a couple of scripts so my local machine can create new repos remotely, and I also set up an SSH key on the remote machine.
I don’t have a complicated setup (maybe you do, not sure), but I didn’t need the integrated features and overhead for solo self-hosting.
For example, one of my local machine scripts just executes a couple commands on the remote to create a new folder, cd into it, and then run
git init --bare
then I can just clone the new project folder on the local machine and get started.
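The script is roughly this (the host and repo root are placeholders for whatever you use):

#!/bin/sh
# newrepo.sh: create a bare repo on the remote, then clone it locally (illustrative)
REPO="$1"
REMOTE="git@my-git-box"   # placeholder: the box with the SSH key set up
ROOT="/srv/git"           # placeholder: where the bare repos live
ssh "$REMOTE" "mkdir -p $ROOT/$REPO.git && cd $ROOT/$REPO.git && git init --bare"
git clone "$REMOTE:$ROOT/$REPO.git"

Running ./newrepo.sh myproject leaves a fresh clone in ./myproject, ready to work in.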