

https://www.deadmansswitch.net/ < this looks like it fits the bill
Actually (and I wasn't aware of this until you mentioned it, so thank you), it does support serverless connections:
So I think between the cloud server, a self-hosted server, and direct IP, OP should be covered.
RustDesk (rustdesk.com) is open source and similar to TeamViewer, and it has paid plans, including a paid self-hosted option.
Just to lightly temper your expectations, the OCR isn't perfect and you may need to add your own tags/text, but it's still an awesome system.
At least for Paperless, one of the selling points is OCR plus text search. So you can dump in all your receipts as photos, and then 3 years later search “lawnmower” and find the receipt for it. (I don't know if this applies to this software, but it's very nice in Paperless.)
Is this a fork of paperless?
I didn't say you can't have a GPU, but to me it's wasteful. I keep my Jellyfin server off when not in use, and use WoL to start it when it's needed.
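For anyone who hasn't set that up, waking the box is a one-liner from any other machine on the LAN, assuming something like the `wakeonlan` package is installed and WoL is enabled in the BIOS/NIC (the MAC address here is just a placeholder):

```sh
# send a magic packet to the server's NIC (replace with your server's MAC address)
wakeonlan AA:BB:CC:DD:EE:FF
```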
I have played with local LLMs, and the models I used were unimpressive, but without knowing what the OP has in mind, we can't know how much power it will use. If it just spins up the GPU once a day for 20 minutes, that's probably okay; you won't even notice it. But anyone like me who doesn't already have a GPU in their lab will probably notice it quite clearly on their power bill.
A megacorp's server farm is huge, but it's also amortised over millions of users; they probably don't need a 1:1 ratio of GPUs to customers, so the efficiency isn't necessarily bad. (Although at the moment, given megacorps are tripping over themselves to throw compute at LLM training, this may not be true.)
Idle is low power, not zero power. And it won't be idle when it's scraping and parsing the sites, so depending on how much scraping it's doing, it could be significant non-idle energy usage.
Yeah, absolutely. And running a GPU 24/7 to occasionally search is just a waste of power. I'm not convinced that Google and Bing's AI search makes financial sense either. Google dropped live search (where the results updated in real time as you typed) because it was too expensive, so how does LLM search end up cheaper than live search?!
Edit: This is the live search thing: https://searchengineland.com/test-google-updating-search-results-as-you-type-49116 ~~Annoyingly hard to find, and I can't find the articles on its cancellation, but from memory it was related to expense.~~
Edit 2: It was Google Instant, and its death was officially blamed on mobile and on wanting to unify the mobile/desktop experience. I do vaguely remember expense being an unofficial/rumored reason, but I can't back that up.
I personally have zero interest in AI search, if you mean LLMs. The fact that it can make stuff up also means it can miss stuff. Neither is acceptable for a search engine.
If you mean some kind of deterministic algorithm for indexing and searching, then maybe.
Also, attempting to crawl sites locally sounds like a great way to get banned from those sites for looking like a bot.
What is openwebzine? Can’t find any info on it.
256 GB of RAM seems well beyond standard self-hosting, what are you planning on running?!
I did create a fork and MR, and neither used your runner (sorry if that is what spooked you).
Developing locally and pushing remote also lets you sanitize what is public and what isn't. Keep your half-baked personal projects local, and push the good stuff to GitHub for job opportunities.
I think it was when you created a merge request back that the original repo would then run the forked branch on the original repo's runners.
From what I can tell, it's now been much more locked down, so it's better, but still worth being careful about.
More discussion: https://www.reddit.com/r/github/comments/1eslk2d/forks_and_selfhosted_action_runners/
The other potential risk is that the GitHub Action's author maliciously modifies their code in a later version, but that is solved by version-pinning the actions.
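For anyone unfamiliar, pinning just means referencing the action by a full commit SHA instead of a mutable tag; roughly like this (the SHA is a placeholder you'd copy from the action's repo):

```yaml
steps:
  # pinned to an exact commit, so a later malicious release of the tag can't reach your runner
  - uses: actions/checkout@<full 40-character commit SHA>   # instead of actions/checkout@v4
```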
I can't find it right now, but there used to be a warning about not self-hosting runners for public repos. Anyone could fork your repo, the fork would inherit your runners, and then they could modify the pipeline to get remote code execution (RCE) on your runner.
Has that been fixed?
I went to a completely private GitLab instead, with mirroring up to GitHub for anything that needed to be public.
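GitLab has built-in push mirroring for this, but the manual equivalent is just a second remote on the private copy (the repo names here are made up):

```sh
# add GitHub as an extra remote on the private repo
git remote add github git@github.com:yourname/yourproject.git

# push all branches and tags whenever you want the public mirror refreshed
git push --mirror github
```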
Edit: it seems to maybe not be an issue anymore; at the very least it doesn't seem to affect that repo. Still, for anyone else, make sure forks and MRs can't cause actions to run automatically on your runner, because that would be very bad.
This is my personal opinion, but you should add:
Unless there is a really good reason, don't rename your project. It only adds confusion, and users will get lost during the transition. It also makes them hesitant to try the new one: “What if they do it again and I get left behind?”
Pi-hole isn't Pi-specific either, and it still kept the name.
Container overhead is near zero. They are not virtualized or anything like that; they are just processes on your host system that are isolated. It's functionally not much different from chroot.
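You can see the "just processes" part for yourself on a Linux host (the container name and image here are only an example):

```sh
# start a throwaway container that sleeps for 10 minutes
docker run -d --rm --name overhead-demo alpine sleep 600

# from the host, that same process shows up like any other process
ps -ef | grep "sleep 600"
```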
The networking is a bit hard to tweak, but I also don't find I need to most of the time. And when I do, it's usually just setting the network to host and calling it done.
Are you using docker compose scripts? Backup should be easy: you have your compose scripts to configure the containers, and the scripts can easily be committed somewhere or backed up.
Data should be volume-mounted into the container, and then the host disk can be backed up.
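As a rough sketch of what that looks like (the image name and paths are made up), the compose file plus the mounted directory is everything you need to back up; the commented-out `network_mode: host` line is the "set it to host" shortcut mentioned above:

```yaml
services:
  app:
    image: ghcr.io/example/app:latest   # hypothetical image
    # network_mode: host                # the lazy fix when container networking gets fiddly
    volumes:
      - ./data:/data                    # app state lives on the host, so backing up ./data (and this file) covers it
```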
The only app that I’ve had to fight docker on is Seafile, and even that works quite well now.
And a self-hosted option: https://github.com/storopoli/dead-man-switch