You do realize that you lose quality with each encode, right?
It’s not AS bad when bitrates are high, but it’s still there.


Because you don’t train your self-hosted LLM.
As a result, you only pay for the electricity of computing your tokens (your request). This can be especially reasonable if the same machine also does local game streaming and/or transcoding, and thus already meets the requirements to host an LLM.
Unless you have rather unreasonable means, your local LLM will just be much more limited in parameters (size), and will not be as good as other, much larger models.
Privacy, ethics, and personal interest are usually the largest drivers, from what I can tell.


The hardware is fine. If you’re not experienced, the 3000 km will fuck you over, though. Stuff will come up where you need physical access to it.
I’ve been using two laptops as “servers” for years.
Well, the first one died after about six years of use.
But I can get at them reasonably easily.


Why can I not install it?
The first time I just get the share button on Droid-ify.
/e: Installing directly from GitHub was no issue.


Curious if this will fix the one issue I have with Finamp.
I have some quite large playlists I’d like to listen to on shuffle, and Finamp doesn’t handle that well at all. (It seems to only shuffle what it has cached or something, as it appears to shuffle “only” the first 100 or so songs out of 3000+.)
Gameyfin exists as well.