

I realize you’re looking for new toys, but ‘anywhere in the flat’ includes ‘under a pile of pillows.’ Otherwise, for personal photo-sized storage, just put a couple of 2.5-inch SSDs in the QNAP.


Depending on the board in your mini-server, you may have enough SATA ports to plug in directly. I have a system similar to what you’re describing (N100 with 4x 2 TB HDDs holding about 1.5 TB of data): 2 of those drives are set up in RAID1 (mirror), and once a month, I plug in one of the spares, rsync the array to it, and unplug it. Every 3 months or so, I swap the offline drive with an offsite drive. I used to use a USB dock for the offline drive, but I got a 3-bay hot-swap enclosure to make the whole process faster and easier.
The server shares the array via NFS and SMB, and it is absolutely a NAS for all my other systems.
If you expect to exceed 2 TB of data within 2 years, then you’ll need to replace all 4 of those 2 TB drives in 2 years. You might, today, get a pair of 4 TB drives and one 2 TB, use the 4 TB pair as your main storage and the 2 TB drives as rotating backups, and wait until you actually outgrow 2 TB to upgrade the backups.
I see you’re getting lots of advice just to use c/selfhosted as a free consultant. That’s good advice if you’re self-motivated and focused.
If you want someone to be a coach through the process, to keep you focused and moving, that’s a) a slightly different skillset and b) worth putting in the description. I mention this only because I have a bunch of aspirational projects on my to-do list that have sat there for literally years because of perfectionism, anxiety, and maybe some undiagnosed ADHD. I’ll also counter by noting that a lot of people, this time of year, buy a gym membership on the theory that spending the money will somehow actually force them to go to the gym, only to find that spent money is not actually a motivator.


Great project. I like the 1-star reviews complaining about the lack of advertising and tracking.


If you want it to be an actual community service, then you want it to be something that outlives your residence, your tenure as event coordinator, and your interest in being the neighborhood IT guy. It’ll be much easier to transfer control of a VPS to your successor than to give them hardware that also hosts a bunch of your personal services.
You can start with a very small, nearly free VPS while you recruit users & scale up as (if) anyone bites. Probably even get the HOA to pay for it.
I got my Pi4 to be a media player - LibreELEC/Kodi - for my old, not-smart TV. It plays my library of CDs & DVDs, acts as a frontend for OTA TV, and runs a variety of streaming services. Fanless, so it doesn’t distract from audio; low power, so I don’t mind leaving it on 24/7. You can configure it to listen to a USB IR receiver, but I control mine from my phone via the web interface. The actual media library/NAS and tvheadend run on an old desktop in another room.
My favorite thing is all the sensors you can hook up. Adafruit & SparkFun have a wide array of sensors with breakout boards for simplicity and well-documented Python libraries. I started just logging temperature and humidity, then air quality and CO2, to my own database and web page, but eventually expanded to a full Home Assistant system.
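The log-to-your-own-database part can be as small as this sketch. The sensor read here is a dummy stand-in (real code would call the matching Adafruit/SparkFun CircuitPython driver), and the table layout is my own placeholder:

```python
import sqlite3
import time

def read_sensor():
    """Stand-in for a real sensor read; on actual hardware this would
    come from the matching Adafruit/SparkFun driver library."""
    return {"temp_c": 21.4, "humidity": 48.0}  # dummy values

def log_reading(db_path="sensors.db"):
    """Append one timestamped reading to a SQLite table."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS readings"
        " (ts REAL, temp_c REAL, humidity REAL)"
    )
    r = read_sensor()
    conn.execute(
        "INSERT INTO readings VALUES (?, ?, ?)",
        (time.time(), r["temp_c"], r["humidity"]),
    )
    conn.commit()
    conn.close()
```

Run it from cron every few minutes and point a tiny web page at the table; that’s the whole pre-Home-Assistant setup.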
Pihole.


Tandoor: I ended up there because it has an API that I can access and cross-reference with my grocer’s (Kroger also has an API) to get current pricing, calculate recipe costs and nutrient costs, or find what’s on special this week. It’s theoretically possible, but I haven’t sorted out how to integrate that directly into Tandoor and its shopping lists.
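The cross-referencing idea boils down to a join between a recipe’s ingredient list and a price list. In the real version the ingredients would come from the Tandoor API and the prices from the grocer’s API; everything hard-coded below is a made-up placeholder:

```python
# Placeholder price list; a real version would fetch this from the
# grocer's API instead of hard-coding it.
PRICES = {
    "flour": 0.50,  # per cup
    "eggs": 0.25,   # each
    "milk": 0.30,   # per cup
}

def recipe_cost(ingredients):
    """Cost out a recipe. ingredients: list of (name, quantity) pairs,
    quantities in the same units the price list uses."""
    return sum(PRICES[name] * qty for name, qty in ingredients)

# Example with made-up data:
pancakes = [("flour", 2), ("eggs", 2), ("milk", 1.5)]
cost = recipe_cost(pancakes)  # about $1.95 with the placeholder prices
```

The fiddly part in practice is unit matching (recipes in cups, store prices per pound or per package), which is probably why the full integration hasn’t happened yet.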
A lot depends on how many users you expect and how much media you expect. For one or two users with that stack, transcoding media is really the only CPU load. If most of your media is already in your desired format, then that’s not a big deal.
My stack is pretty similar (no *arr, plus tvheadend, Home Assistant, and a Kodi frontend) for two users, and it sits near idle all day long. It runs on an N100 NAS system off AliExpress with 16 GB and will transcode 1080p to x264 at just about playback speed. The system runs from a 100 GB NVMe, with a couple of half-full 4 TB WD Reds for data. 35-ish watts, maybe an extra 5 when actively transcoding. It used to be ~150 USD.
If you want a lot of 4k content, then I’d definitely go with the GTX 1660.
I made a self-hosted Forgejo repository of /etc. Commit messages aren’t always informative, and I’ve never actually gone back to the repository to figure something out, but it’s there, just in case. Me cosplaying a sysadmin.
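The core of it is just a git repo over a config directory; this sketch uses a helper function of my own invention (the etckeeper tool automates the same thing for /etc, including permission tracking):

```shell
#!/bin/sh
# Sketch of putting a config directory under git, as described above.
# Run as root when the target is /etc itself.
track_config_dir() {
    dir=$1
    cd "$dir"
    [ -d .git ] || git init -q .
    git add -A
    # Commit only when something actually changed.
    git diff --cached --quiet || \
        git -c user.name=snapshot -c user.email=snapshot@localhost \
            commit -q -m "snapshot $(date -u +%F)"
}

# e.g.  track_config_dir /etc   # then push to the Forgejo remote
```

A daily cron job calling this gives you the “just in case” history even when the commit messages say nothing.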


It looks like he’s split out the individual USB wires, run the power to the USB port, and the signal wires to different places on the exposed board, maybe to force fast mode in the charger. Then just buried everything in silicone for insulation and to keep wires from pulling loose.


Looks like California, USA


From the power draw, it looks like lemmy federation got hold of it around 16:30. As of 17:20, it’s still holding up.
I understand the Mastodon federation system can be very DDoS-y on websites, if you’re tempted to post it there.
Cool project.
It’s still a logical argument, especially for smaller shops. I mean, you can (as self-hosters know) set up automatic backups, failover systems, and all that, but it takes significant time & resources. Redundant internet connectivity? Redundant power delivery? Spare capacity to handle a 10x demand spike? Those are big expenses for small, even mid-sized businesses. No one really cares if your dentist’s office is offline for a day, even if they have to cancel appointments because they can’t process payments or records.
Meanwhile, theoretically, reliability is such a core function of cloud providers that they should pay for experts’ experts and platinum-standard infrastructure. It makes any problem they do have newsworthy.
I mean, it seems silly for orgs as big and internet-centric as Fortnite, Zoom, or a Fortune 500 bank to outsource their internet, and maybe this will be a lesson for them.
I’m not a systemd guru, but it turned out pretty easy. https://dev.mysql.com/doc/refman/5.7/en/using-systemd.html#systemd-multiple-mysql-instances Basically, just make [mysqld@copy]-style sections in my.cnf, then systemctl start mysqld@copy, and systemd is smart enough to pass the instance name into mysqld.
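Following that MySQL doc, a minimal my.cnf sketch for a second instance looks something like this (the instance name, paths, and port are placeholders):

```ini
# Base settings for the default instance stay in [mysqld].
[mysqld]
port = 3306

# Settings for the instance started as mysqld@copy; systemd passes
# "copy" in as the instance name. Paths and port here are placeholders.
[mysqld@copy]
datadir = /var/lib/mysql-copy
socket  = /var/lib/mysql-copy/mysql.sock
port    = 3307
```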
I did it slightly differently, using systemctl edit mysql@.service to define a different defaults file for each instance, then [mysqld] sections in each of those files. It seems like the port option for each has to go in a [mysqld] section, but otherwise it’s OK.
Replication because I want to put some live data, read-only, on the VPS, exposed to the world while the ‘real’ database stays safely hidden in my intranet. SSH tunnel so the replica can talk to the real database.
I’m hung up on unrecognized charset #255 (MySQL 8’s default utf8mb4_0900_ai_ci collation, which MariaDB doesn’t recognize). Tried rolling everything back to utf8mb3; I suppose I could go all the way to latin1. I imagine there’s a lot of depth I could learn, but dropping MariaDB for MySQL seems like the path of least resistance right now.
eta: got the character set sorted. Had to make a new dump, confirm that everything in the dump was utf8mb3, then re-prime the replica with that data. Wasn’t enough just to change the character sets internally.
I’ve been trying to convince a VPS to run two instances of mariadb - one for local databases, one to replicate the homelab. Got mariadb@server and mariadb@replica sorted out through systemd, but now stuck on replication from mysql to mariadb. Looks like I’ll be ripping out mariadb and putting everything on mysql.


I’ve got all my internet infrastructure on one power monitor - 50 W for the N100, the cable modem, an Ooma VoIP device, and the UPS. I’d guess the server, with its WAP, 4x GbE ports, 2x spinning disks, and USB TV tuner, is 35-ish of those watts.


If you have the spare cash, I found the N100 NAS motherboard to be a great source of occasional weekend projects, and now it very definitely looks like I’ve gone overboard.
I started out just wanting a file server to store backups. Then…
It didn’t feel like a lot, because it took years. Among the amazing things have been all the times I’ve been able to upgrade the motherboard by just plugging the HDs into the new board. Started out just using old desktop boards; the N100 was the first purpose-bought board, and also the most complicated upgrade, because it added UEFI. There definitely are projects out there that don’t have an ARM option, so something x86 is more flexible.


A Pi 4 should be plenty to run Jellyfin, Home Assistant, Pi-hole, and OctoPrint. Docker setup is pretty straightforward, and I can vouch that the HA & Pi-hole containers work great on an RPi, if you want to leave the Jellyfin setup as-is and put the others alongside.
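If you go the Docker route for those two, a minimal compose sketch looks roughly like this (image tags, ports, paths, and env values are assumptions to adapt; Pi-hole in particular needs port 53 free on the host):

```yaml
# Minimal sketch; volumes and environment values are placeholders.
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    network_mode: host          # simplest for device discovery
    volumes:
      - ./ha-config:/config
    restart: unless-stopped
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"           # web UI on host port 8080
    environment:
      TZ: "America/Chicago"     # placeholder timezone
    restart: unless-stopped
```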
If you’re looking for an excuse to expand, my vote is for an N100-type system. I got one with 4 Ethernet ports, PCIe for a wifi card, a couple of NVMe slots, and a half dozen SATA ports for $100-150. That’s a huge step up in potential without much increase in power draw. With the right wifi card, you can even use it to replace your WAP/router.
Good discovery tools are essential on a federated platform. An important part of Twitter, Facebook, and Reddit’s success is/was that they were the place for their particular style of content. You had a pretty good chance of being able to discover your old high school friends, because they were on the one platform. Then the (early) algorithm started discovering for you all the obscure content similar to your history.
Discovery has to work differently in a federated system. You can search for communities on Lemmy, but if your instance doesn’t already have someone subscribed to a community, then you’re not going to find it.