

I’ve always geeked out about fan curves and monitoring, though I readily admit for most PCs leaving the default BIOS curve works fine enough.


I just used the free chat with Claude, it created and tracked the files in its own webchat thingy. Being a kernel module, I was happy to manually check, copy/paste, compile, then run the code for each iteration.
Porting postmarketOS to a phone sounds like it may require some amount of manual running and explaining results back to the chat. Ultimately the output only starts to get functional when it hits reality and needs to keep adapting to feedback.
I wrote a blog post on the process that focuses more on the journey and the technical details of the controller chip.


I used a very similar method in a similar situation to albb0920. They describe it as vibe coding too.
The exact chip that handles everything is undocumented, but similar ones in the same series have datasheets. A maintained version of the Linux driver handily collated all of the available datasheets and the configurations used by different motherboards. Between that and my microcontroller/hardware experience, that side of things wasn’t too bad.
What I didn’t know anything about was writing an Illumos driver. I used the chatbot with a free Claude account, compiling and running the code manually myself. I was impressed that it was able to build out the boilerplate and get something going at all. Of course it took a few tries to get something that compiled and worked somewhat correctly. At some points I needed to look through the generated code and point out exactly what was wrong, but at least it would address it.
Code running in the context of the kernel is definitely not something I would have autonomously executed by an LLM. The end result is absolutely not something I would want put into the official Illumos source.
Looks like it has an ARM CPU, a RK3588. Similar ballpark to a Pi 5 in CPU performance.
Installing another OS would be technically possible but not easy, you’d need a Linux kernel with the RK3588 drivers already in it. Then there are differences between it and other RK3588 SBCs that could cause problems.
Much like you wouldn’t want to install anything other than Raspbian on a Pi, you’d be best off with Ugreen’s OS even if others are technically possible.


Sorry I might have misunderstood, you mentioned giving others access externally and it working fine. Normally, if you’ve set up the service to be publicly accessible on the internet, you can just visit the same site through the public DNS record and your public IP. At home or elsewhere, it’s all the same internet.
So either you’ve done something odd, or you’re talking about different, more private, internal only services?


Can you live with the services routing out and back into your public IP? If it all works for external users on the internet, doing nothing special should mean it works for you too?


I fucking love copyparty. It starts simple enough but then the millions of options and configs let you twist it into exactly what you need.
As someone that runs a server OS that doesn’t support Docker, it is very refreshing to see a single-binary project. It has a focus on being administrator friendly that’s really fallen out of fashion these days.


I think it would be fine. A friend of mine runs Immich on an N100; like you mentioned, the initial ML tasks on a big library take over 24 hours, but once they’re done it doesn’t need much. I don’t have experience running Nextcloud, but the others you mentioned don’t need much RAM/CPU.
ZFS doesn’t need much RAM, especially for a two-disk 4 TB mirror. It soaks up free RAM to use as a cache, which makes people think it needs a lot. If the cache is tiny you just end up hitting the actual speed of the HDDs more often, which sounds within your expectations. I dare say you could get by with 8 GB, but 16 GB would be plenty.
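If you’d rather pin the cache than let it float, the ARC maximum can be capped with a module option. A minimal sketch, assuming ZFS on Linux; the 4 GiB figure is just an example (the value is in bytes):

```
# /etc/modprobe.d/zfs.conf — cap the ZFS ARC at 4 GiB (4 * 1024^3 bytes)
options zfs zfs_arc_max=4294967296
```

It takes effect on the next module load/reboot; on a running system you can write the same value to /sys/module/zfs/parameters/zfs_arc_max.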
I’d only point out that if you’re looking for it to last 10 years, a neat package like the Ugreen might bite you. A more standard DIY PC will have more replaceable parts. It would be bigger and more power hungry though.


That is fricking sick dude!


Ah kay, definitely not a RAM size problem then.
iostat -x 5
Will print out per-drive stats every 5 seconds. The first output is an average since boot. Check that all of the drives have similar values while performing a write. It might be that one drive is having problems and slowing everything down; hopefully that’s unlikely if they are brand new drives.
zpool iostat -w
Will print out a latency histogram. Check whether much of it sits above 1s, and whether that’s in the disk or sync queues. Here’s mine with 4 HDDs in raidz1 working fairly happily for comparison:

The init_on_alloc=0 kernel flag I mentioned below might still be worth trying.


After some googling:
Some Linux distributions (at least Debian, Ubuntu) enable init_on_alloc option as security precaution by default. This option can help to prevent possible information leaks and make control-flow bugs that depend on uninitialized values more deterministic.
Unfortunately, it can lower ARC throughput considerably (see bug).
If you’re ready to cope with these security risks, you may disable it by setting init_on_alloc=0 in the GRUB kernel boot parameters.
I think it’s set to 1 on Raspberry Pi OS; there you set it in /boot/cmdline.txt, I think.
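For illustration only, the flag just gets appended to the end of the single line in /boot/cmdline.txt; everything here other than init_on_alloc=0 is a placeholder, not your real parameters:

```
console=serial0,115200 root=PARTUUID=xxxx-xx rootfstype=ext4 fsck.repair=yes rootwait init_on_alloc=0
```

You can confirm what the running kernel actually booted with via cat /proc/cmdline.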


sync=disabled will make ZFS write to disk every 5 seconds instead of when software demands it, which maybe explains your LED behavior.
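To be clear on what that setting looks like, it’s a per-dataset ZFS property; a sketch, where the pool name tank is an assumption:

```shell
# Disable synchronous writes pool-wide (risks losing in-flight data on power cut)
zfs set sync=disabled tank

# Check the current value
zfs get sync tank

# Revert to the default behaviour
zfs set sync=standard tank
```
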
Jeff Geerling found that writes with raidz1 were 74 MB/sec using the Radxa Penta SATA HAT with SSDs. Any HDD should be that fast; the SATA HAT is likely the bottleneck.
Are you performing writes locally, or over smb?
Can try iostat or zpool iostat to monitor drive writes and latencies, might give a clue.
How much RAM does the Pi 5 have?


My understanding is that it’s technically against their TOS but loosely enforced. They don’t specify precise limits since they probably change over time and region. Once you get noticed, they’ll block your traffic until you pay. Hence you can find people online that have been using it for years no problem, while other folks have been less lucky.
Basically their business strategy is to offer too-good-to-be-true free services that people start using and relying on, then charging once the bandwidth gets bigger.
It used to be worse, and all of Cloudflare’s services were technically limited to HTML files, but selectively enforced. They’ve since changed and clarified their policy a bit. As far as I’ve ever heard, they don’t give a toss about the legality of your content, unless you’re a neo-Nazi.


I’m guessing the cloudflared daemon isn’t connecting to Jellyfin. You want to use http:// rather than https:// for the local service address. Also, is jellyfin the hostname of the VM? Using localhost or 127.0.0.1 might be a better way to point at the same VM without relying on DNS for anything.
Personal opinion, but I wouldn’t bother with fail2ban, it’s a bit of effort to get it to work with cloudflare tunnel and easy to lock yourself out. Cloudflare’s own zero trust feature would be more secure and only need fiddling around cloudflare’s dashboard.
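For reference, the tunnel’s ingress mapping lives in cloudflared’s config.yml. A sketch, where the hostname, tunnel ID, and port 8096 (Jellyfin’s default) are assumptions to adapt:

```yaml
tunnel: <your-tunnel-id>
credentials-file: /root/.cloudflared/<your-tunnel-id>.json

ingress:
  - hostname: jellyfin.example.com
    service: http://localhost:8096   # plain http; localhost avoids any DNS lookup
  - service: http_status:404         # catch-all rule, must come last
```
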


Consider something like the Aoostar R1 with an Intel N100. Small and low power like a commercial consumer NAS, but cheaper, and you can chuck whatever OS you want on it.


Would you consider making the LLM/GPU monster server your gaming desktop too? Depending on how you plan to use it, you could have a beast gaming PC that can do LLM/Stable Diffusion stuff when not gaming. You can install loads of AI stuff on Windows, arguably more easily.


I’ve been using pcloud. They do one time upfront payments for ‘lifetime’ cloud storage. Catch a sale and it’s ~$160/TB. For something long term like backups it seems unbeatable. To the point I sort of don’t expect them to actually last forever, but if they last 2-3 years it’s a decent deal still.
I use rclone to upload my files, though honestly it’s not ideal since it’s meant for file synchronisation, not backups. Also they are dog slow: downloading my 4 TB takes ~10 days.
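For what it’s worth, the upload itself is a one-liner; a sketch, assuming a remote already set up via rclone config and named pcloud, with hypothetical paths:

```shell
# Mirror a local folder to pcloud; --transfers raises parallel uploads
rclone sync /tank/photos pcloud:backups/photos --transfers 8 --progress
```

Note that sync makes the remote match the source, so local deletions propagate; rclone copy is the non-destructive alternative if that worries you.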


My 10-year-old ITX NAS build with 4 HDDs used 40 W at idle. I just upgraded to an Aoostar WTR Pro with the same 4 HDDs; it uses 28 W at idle. My power bill currently averages around US$0.13/kWh.
Gotta preach for the cult of ZFS. Its checksumming, copy-on-write, and raidz features are all exactly what you want for data resilience. Plus you get transparent compression, and snapshots that can provide a bit of a stop-gap for your lack of backups.
It will normally soak up any and all memory for buffers and caches, but is meant to quickly free up when it’s needed by an app. Linux already does this on any filesystem with its page cache.
Oh and mounting a ZFS pool on a new machine is super quick and easy. It stores its config on the drives themselves, so you can plug them into a new box, run
zpool import -af
and boom, it’s mounted and ready to go.