

I don’t see why not. Again, the resource footprint is so tiny that you can just throw Mumble in anywhere. You can make it tinier still if you restrict image messages in the text chat and cap the per-client bandwidth in the config.
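For reference, both of those limits live in the server’s config file (option names from the stock murmur.ini; the values here are just illustrative):

```ini
; /etc/mumble-server.ini on Debian-style installs
bandwidth=72000          ; max voice bitrate per client, in bits per second
textmessagelength=5000   ; cap on plain text messages, in characters
imagemessagelength=1     ; effectively blocks image messages (0 would mean no limit)
```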



If pi zero, you’re serving 12 users low latency over wifi? Does it route the actual audio?
Yes, it’s sufficient. I wouldn’t advise it due to the extra overhead from wireless packet loss, but it’s absolutely technically possible. Don’t overestimate how much bandwidth voice chat really needs. It’s like 10-50 kB/s per person, and you’re unlikely to ever have more than 2 or 3 people talking at a time.
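Back-of-envelope, taking the 50 kB/s upper end and assuming the server forwards each active speaker’s stream to every other connected client:

```shell
# Worst-case server egress: 3 simultaneous speakers, 12 clients total,
# each stream forwarded to the 11 other clients, at 50 kB/s per stream.
speakers=3
clients=12
kbytes_per_stream=50
echo "$(( speakers * (clients - 1) * kbytes_per_stream )) kB/s"
```

That comes out to about 1.65 MB/s in the absolute worst case, and typically far less, which is well within what even a Pi Zero’s WiFi can push.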


So, I’ve been having issues with voice chat on Discord and I’m looking for alternatives. In my search I came across Mumble. Does anyone here have experience with or information about Mumble, or know a better alternative to Discord with lower latency? Is it relatively easy to set up? Is it safe? Any advice and help is greatly appreciated.
Been running a server for my friends for over a decade now. Can recommend. It’s just one apt-get to set up, runs on a Pi Zero for a dozen people, has clients for pretty much any platform and doesn’t really require any maintenance. Latency will depend on the routing between your ISP and your friends’ ISPs, of course, but the whole purpose of the software is to provide a low-latency voice chat server for gaming.
But: that’s it. You don’t get anything else. It’s a barebones voice chat server. You can set up rooms and have basic text chat, but you don’t get any fancy user management, no full-fledged chatrooms, no persistence beyond the room setup and only limited backend options. Keep that in mind.
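For the record, here’s the “one apt-get” on Debian/Ubuntu (package name from the Debian archive). This sketch only prints the commands; run them with sudo on the actual box:

```shell
# The whole install is two commands; printed here rather than executed.
install="apt install mumble-server"          # pulls in the server daemon
configure="dpkg-reconfigure mumble-server"   # re-runs the autostart/password prompts
printf '%s\n%s\n' "$install" "$configure"
```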


I do a presentation on the Fediverse for my college students and will soon be giving short workshops to organizations as well. A viable, decentralized alternative to Facebook is IMO the biggest missing piece of the puzzle. We need something that offers some kind of central platform for networking, events and groups.
Well, if you want decentralised solutions, there’s Mattermost and there’s just a plain old Matrix server. Both are better suited to collaboration projects than Facebook ever was. I’d argue the only reason Facebook ever morphed into that role in the first place was that everyone was already on there; it had little to do with features.


Basically what the title says. I know online providers like GPTZero exist, but when dealing with sensitive documents I would prefer to keep it in-house. A lot of people like to talk big about open source models for generating stuff, but I feel the detection side isn’t discussed nearly as much.
I wonder if this kind of local capability can be stitched into a browser plugin. Hell, doesn’t even need to be a locally hosted service on my home network. Local app on-machine should be fine. But being able to host it as a service to use from other machines would be interesting.
I’m currently not able to give it a proper search, but the first-glance results are either from people trying to evade these detectors or from people trying to locally host language models.
In general it’s a fool’s errand, I’m afraid. What’s the specific context in which you’re trying to apply this?
I read about Ollama, but it’s all unclear to me.
There’s really nothing more to it than the initial instructions tell you. Literally just a “curl -fsSL https://ollama.com/install.sh | sh”. Then you’re just an “ollama run qwen3:14b” away from having a chat with the model in your terminal.
That’s the “chat with it”-part done.
After that you can make it more involved: serving the model via an API, manually adding .gguf quantizations (usually smaller or special-purpose modified bootleg versions of big published models) to your Ollama library with a modelfile, ditching Ollama altogether for a different environment or, the big upgrade, giving your chats a shiny frontend in the form of Open WebUI.
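The “serving via an API” part is just HTTP; Ollama listens on port 11434 by default. A sketch (endpoint and fields per Ollama’s API; the curl is commented out since it needs a running server):

```shell
# Request body for Ollama's generate endpoint; "stream": false asks for a
# single JSON response instead of a token stream.
body='{"model": "qwen3:14b", "prompt": "Why is the sky blue?", "stream": false}'
# With the server running, the call would be:
# curl http://localhost:11434/api/generate -d "$body"
echo "$body"
```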


Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?
If it ain’t broke, don’t fix it 🤷
Apples and oranges.
Package managers only install a package with defaults. These helper scripts are designed to take the user through a final config that isn’t provided by the package defaults.
Whether there’s a setup wizard has nothing to do with whether the tool comes from a package manager or not. Run “apt install ddclient”, for example, and it’ll immediately guide you through all the configuration steps for the program instead of just dumping a binary and some config text files in /etc/.
So that’s not the bottleneck or the contradiction here. It’s just very unfortunate that setup wizards aren’t very popular as soon as you leave the Windows and macOS ecosystems.
There’s literally no good reason to replace it with a shell script on a website.
I fully agree that a package manager repository with all those tools would be preferable, but it doesn’t exist, does it? I mean… content is king. If the only way to get a certain program or functionality is a shell script on a website, then of course that’s what is going to be used.


I know a lot of people like buying used drives but the ones for sale are usually loud enterprise edition drives which won’t work for me. Should I buy the drives now or wait until BF for a possibly better sale?
HDD prices haven’t really moved in any meaningful way over the past few years, and I don’t recall them ever moving significantly even during special promotions (short of pricing errors). I strongly suggest treating high-capacity hard drives as the luxury consumables that they are and just buying them as needed. Unless you particularly enjoy bargain hunting as a pastime, I really don’t think it’s worth the effort and opportunity costs in this particular context.


I stopped retagging and renaming music files. Right now I prefer them to be as original as possible, for preservation reasons.
Aren’t you then just preserving some random music ripper’s organizational preferences or default settings?
Either way, I don’t see any issue with adding more tagged information. More information always more good 😁


IPMI and ECC are not on your wishlist, correct?


Yes, it hurts discoverability. How can you have a community without people?
I’d like to ask it the other way around: how many people would it take until you’d say “Yep, that’s a community alright”?


I was an idiot and bought a high-end TP-Link router; I can’t even use VLANs without signing up for their backdoor service.
Hm, at least with their enterprise equipment you can completely disable Omada.


Enshittification is inevitable for all free services (services as in with a server component).
No, it’s not that bleak. It’s only inevitable when there’s an active push for short-term maximization of user-base monetization (which is very much in the nature of VC). It can usually be avoided with products that are wholly owned by their users (such as a cooperative or a government-provided service) or, if one is lucky, with products of financially independent private enterprises under vaguely benevolent and unhurried leadership (such as Steam, to some extent).


Monetary needs and all that. If it’s a startup with VC money, then either there aren’t enough people paying, or there aren’t enough private users supporting it by other means like bug fixes and support. Or it’s greed on the VC’s part.
Well, VC is greedy by design. A VC-funded business will never be optimized for longevity, a good product or happy customers. They may achieve those things en passant, but they’re never the objective.
For example: any case of “there are not enough people paying” can also be rendered as “the scale and moving speed of the business are way off”.


Kind of offtopic: Can we call something offline if you need a server to run it?
Sure, you could run it on your own PC and that’s it, but I don’t think that method fits well with this community.
Er… maybe I am misunderstanding your post but this community is literally built around hosting your own local infrastructure.
So, uh, what’s wrong with that?
… with damaging infrastructure? Well, presumably the infrastructure will no longer be as good at serving its original purpose once it is damaged.


Why do people host LLMs at home when processing the same amount of data from the internet to train their LLM will never be even a little bit as efficient as sending a paid prompt to some high-quality official model?
inb4 privacy concerns or proof of concept: those are out of the discussion. I want someone to prove their LLM can be as insightful and accurate as a paid one. I don’t care about anything other than the quality of the generated answers.
If you ask other people for their reasoning and opinions, it doesn’t really make any sense to put something “out of the discussion”, does it? :P
But no, if you have no qualms about sharing your innermost feelings, sexual preferences or illegal plans with those who have an explicit desire to exploit that information, then there is little reason to attempt something as complicated and wasteful as self-hosting your own LLMs.
Thank you for sharing this!