Oh for sure, it made sense that they wanted to make sure it was fixed. It was just super alarming how quickly it got advertised that the relay was there!
Just your normal everyday casual software dev. Nothing to see here.
People can share differing opinions without immediately being on the opposing side. Avoid looking at things as black and white. You can like both waffles and pancakes, just like you can hate both waffles and pancakes.
Reminds me of my first mail server: I accidentally set up an open relay and got a lot of abuse reports from mail providers saying they had blocked my server because of it. Took forever to get that fixed.
Pika@sh.itjust.works to Selfhosted@lemmy.world • My Favorite Self-Hosted Apps Launched in 2025 (English)
1 · 28 days ago
Fully agree. Mine isn't accessible to the outside world either, but you never know if something gets missed or somehow a path gets made. I would rather not open up that risk.
Pika@sh.itjust.works to Selfhosted@lemmy.world • My Favorite Self-Hosted Apps Launched in 2025 (English)
2 · 29 days ago
Sadly no recommendations, I still use Portainer myself.
Pika@sh.itjust.works to Selfhosted@lemmy.world • My Favorite Self-Hosted Apps Launched in 2025 (English)
4 · 29 days ago
While Docker does have a non-root installer, the default install runs Docker as root and containers as non-root. Since managing Docker as a whole requires access to the Docker socket, if Docker has root then a container with socket access effectively has root by extension.
Even if Docker were installed rootless, a compromised manager container would still compromise everything on that Docker instance, because a core requirement for these kinds of containers is access to the Docker socket. That still isn't great, but it is better than full root access.
To answer the question: no, it doesn't require root to function, but the default configuration is root, and even in a rootless environment a compromise of a management container that is meant to control other containers will result in full compromise of the Docker environment.
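To make the socket point concrete, here's a minimal sketch of my own (not from the original comment) of what any process that can reach /var/run/docker.sock can do, written with the Docker Python SDK; the image name and bind mount are purely illustrative.

```python
# Minimal sketch, assuming the docker Python SDK ("pip install docker") and a
# process/container that can reach the host's /var/run/docker.sock.
import docker

# from_env() connects to whatever DOCKER_HOST points at; inside a management
# container that is usually the host's bind-mounted /var/run/docker.sock.
client = docker.from_env()

# Anything with socket access can enumerate and control every container on
# that Docker instance...
for container in client.containers.list(all=True):
    print(container.name, container.status)

# ...and can also start a brand-new container with the host filesystem mounted,
# which on a rootful install is effectively root on the host. Illustrative only.
output = client.containers.run(
    "alpine",
    "ls /host/etc",
    volumes={"/": {"bind": "/host", "mode": "ro"}},
    remove=True,
)
print(output.decode())
```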
Pika@sh.itjust.works to Selfhosted@lemmy.world • My Favorite Self-Hosted Apps Launched in 2025 (English)
7 · 29 days ago
Man, Arcane looks amazing. I ended up deciding against it though, as their pull requests look like they use Copilot for a lot of the code for new features. Not that I personally have an issue with that, but I've seen enough cases where Copilot or various AI agents add security vulnerabilities by mistake that aren't caught, so I would rather steer away from those types of projects, at least until that issue becomes less common.
For something as critical as a management console for a program that runs as root on most systems, and that would provide access to potentially highly sensitive locations, I would not want such a program to have security vulnerabilities.
Pika@sh.itjust.works to Selfhosted@lemmy.world • Bad experience on selfhosting nextcloud (English)
3 · 2 months ago
You aren't the only one. I had such a painful onboarding process with Nextcloud, from the Docker setup to the speed of it to the UI, that I just gave up and decided to use a combination of Immich and Syncthing instead.
Pika@sh.itjust.works to Selfhosted@lemmy.world • PSA syncthing-fork has changed owners (English)
261 · 2 months ago
This entire thing has made me really rethink whether I want to swap to the new repo or not.
Why was there no communication about it? The gplay repo maintainer wasn't informed of anything, no public notice was given to anyone, just a transfer of the repo and a status issue here explaining it.
Obviously the act is genuine, since they were able to keep the original keys, but this entire process seemed really sketchy.
I'm also not happy that it seems the first thing they did was remove the checksums, but that might be a temporary thing.
I also just noticed that it looks like they removed the entire public key. If they have the original private keys, continuing to use the existing public keys shouldn't be an issue, right?
Pika@sh.itjust.works to Selfhosted@lemmy.world • Self hosting Sunday! What's up, selfhosters? (English)
2 · 2 months ago
One of my drives crippled itself a few days back, not sure what caused it. It couldn't be resolved without a host restart, which was unfortunate. SMART isn't showing failures and the drive has been working fine since, so I'm chalking it up to a weird Proxmox bug or something.
I fully expected I was going to need to roll back the entire drive after that restart, though. I still may have to if it reoccurs.
I have Proxmox Backup Server backing up to an external drive nightly, and then about every 2 or 3 weeks I also back up to cold storage, which I keep offsite. (This is bad practice, I know, but I have enough redundancy in place for personal data that I'm OK with it.)
For critical info like my personal data I have Syncthing syncing to 3 devices, so for personal info I have roughly 4 copies (across different devices), plus the PBS backup, plus a potentially dated offsite copy.
Pika@sh.itjust.works to Selfhosted@lemmy.world • Proxmox Backup Server: Bare Metal vs. Privileged LXC vs. VM? (English)
3 · 2 months ago
Despite recommendations, I run PBS bare metal alongside the standard server. I don't store the backups on the same system; they go to an external drive (which gets an offline copy every once in a while). But I don't like the idea of having PBS in a virtual environment; it's just another layer that could go wrong in a restore process.
The implication of that is weird to me. I'm not saying the horse is wrong, but that's such a non-standard solution. That's implementing a CGNAT-style restriction without the benefits of CGNAT: they would need to allow only internal-to-external connections unless the connection was already established. How would standard communication still function if it were set up that way? I know it would break protocols like basic UDP, since that uses fire-and-forget without internal prompting.
This might be my next project. I need uptime monitoring for my services; my VPN likes to randomly kill itself.
Pika@sh.itjust.works to Selfhosted@lemmy.world • I am attempting to get into Selfhosting after a shockingly frightening experience, but I am very lost. (English)
2 · 3 months ago
I haven't used a guide aside from the official Syncthing getting-started page.
It should be similar to these steps, though. I'll use your desktop as the origin device.
- Install Syncthing on all devices you want to be syncing with.
- On your desktop's Syncthing page, click "Add Remote Device" and add the device ID of your phone (found in your phone's Syncthing app). You can also add any other device you want to communicate with. (You will need to approve this action on the phone as well, so be on the lookout for a notification.)
- Make a backup of your current KeePass file, just in case. These steps shouldn't cause files to change, but since the end goal is syncing two devices that you've said have different files with the same name, better safe than sorry.
- Create a KeePass share on one of the devices. The folder path of this share should be wherever your KeePass file is stored on that device. If the file sits in a folder with a bunch of other files, you may want to move it to its own subfolder, or you will end up sharing everything in that path.
- Under file versioning, choose what type of file version control you want. I prefer staggered, since when a remote device changes the file it moves the old copy to a versions folder and then deletes old versions according to the settings.
- At this point, double-check the name of your mobile device's KeePass file. If it's the same as the name of the database on the desktop, rename it before continuing. KeePass should be able to detect a file conflict and rename it on its own, but better safe than sorry.
- Share the folder with the device you want to sync it to (your phone in this case).
- Your phone should get a notification that a device wants to share something with it. Approve it, and be careful not to clear it, because it's a pain to get that notification back if you accidentally deny or swipe it away; the mobile app isn't /amazing/ with its UI (but it has gotten better).
- Once approved, configure where you want the file to live on your mobile device.
- You should be done at this point. Syncthing should automatically sync the KeePass files between the two devices.
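For anyone who would rather script these steps than click through the web UI, below is a rough sketch of the same flow against Syncthing's local REST API. This is my own illustration, not part of the walkthrough above: it assumes a default instance on port 8384 with an API key from the GUI settings, the device ID, path, and folder ID are placeholders, and field names can vary a little between Syncthing versions.

```python
# Hedged sketch: register the phone as a remote device and create the KeePass
# share with staggered versioning via Syncthing's REST config API.
import requests

API = "http://127.0.0.1:8384/rest"
HEADERS = {"X-API-Key": "your-api-key-here"}  # GUI -> Settings -> API Key

PHONE_ID = "REPLACE-WITH-THE-PHONE-DEVICE-ID"

# "Add remote device": the phone still has to accept this on its end.
requests.post(f"{API}/config/devices", headers=HEADERS, json={
    "deviceID": PHONE_ID,
    "name": "phone",
}).raise_for_status()

# Create the KeePass share and offer it to the phone. Depending on version,
# Syncthing may add the local device to the folder's device list automatically.
requests.post(f"{API}/config/folders", headers=HEADERS, json={
    "id": "keepass",                      # folder ID must match on every device
    "label": "KeePass",
    "path": "/home/me/keepass",           # a subfolder holding only the .kdbx file
    "devices": [{"deviceID": PHONE_ID}],
    "versioning": {"type": "staggered", "params": {"maxAge": "2592000"}},  # keep old versions ~30 days
}).raise_for_status()
```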
Some things to keep in mind: Syncthing only syncs when two or more devices are online at the same time. If you're getting into self-hosting a server, I would recommend having the server be the middle man. If you go that route, these steps stay more or less the same; instead of sharing with the phone, you share with the server, then go to the server's Syncthing page and share with the phone. That way both devices sync with the server instead of trying to connect to each other directly.
Additionally, if you do go that route, I recommend setting your remote devices on the server's Syncthing instance to "auto accept". That way, when you share a folder to the server from one of your devices, it automatically approves it and creates a share, named after the shared folder, in the server's data directory. (For example, if your folder was named "documents" and you shared it to the server, it would create a share named "documents" wherever you've configured the server to store data.) You would still need to log in to the server instance if you want to share those files to /another/ device, but if your intent was only to back up a folder to the server, it removes a step.
Another benefit of the server-middleman approach is that if you ever change a device later on down the road, you only have to add one remote device to the server instance, instead of adding your new device to every Syncthing instance that needs access to it.
Additionally, if you already have this set up but it doesn't seem to be working, some standard troubleshooting steps I've found helpful:
- If you're trying to share between devices, make sure at least two devices that are added as remote devices are online at the same time; syncing needs both ends up.
- If that's true, make sure the folder IDs are the same on both devices; that is how Syncthing detects folders that should be synced.
- If that's also true, make sure the devices show as connected under remote devices. If a device isn't showing as connected, the connection is being blocked somewhere; verify you don't have a firewall or router blocking it.
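If you'd rather check those last two points from a script than from the web UI, here is a rough sketch against the same local REST API (same caveats as the earlier sketch: default port, API key, and endpoint or field names may differ slightly between versions).

```python
# Hedged sketch: list folder IDs and see which remote devices are actually connected.
import requests

API = "http://127.0.0.1:8384/rest"
HEADERS = {"X-API-Key": "your-api-key-here"}

# Folder IDs have to match on every device that is supposed to sync that folder.
for folder in requests.get(f"{API}/config/folders", headers=HEADERS).json():
    print(f"folder id={folder['id']!r} label={folder.get('label', '')!r}")

# If a remote device never shows up as connected here, the connection is being
# blocked somewhere between the two machines (firewall, router, relay settings).
status = requests.get(f"{API}/system/connections", headers=HEADERS).json()
for device_id, info in status["connections"].items():
    print(device_id, "connected" if info.get("connected") else "not connected")
```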
Pika@sh.itjust.works to Selfhosted@lemmy.world • I am attempting to get into Selfhosting after a shockingly frightening experience, but I am very lost. (English)
2 · 3 months ago
KeePass is a great way to manage passwords; I use KeePass as well. I also use Syncthing to sync my password database across all my devices, with the server acting as the "always on" device, so I have access to all my passwords at all times. It works amazingly well, because Syncthing can also be set up so that when a file is modified by another device, it makes a backup of the original and moves it to a dedicated folder (with retention settings so old versions get cleaned up every so often). Life is so much easier.
For photo access you can look into Immich. It's a slightly more advanced setup, but I have Immich pointed at my photos folder in Syncthing on the server and using that location as its source. This lets me use one directory for both photo hosting and backup/sync.
Pika@sh.itjust.works to Selfhosted@lemmy.world • I am attempting to get into Selfhosting after a shockingly frightening experience, but I am very lost. (English)
3 · 3 months ago
I hard agree with this. I would NEVER have wanted to start with a containerized setup. I know how I am; I would have given up before I made it past the second LXC. Starting with one generalized server that does everything and then learning as you go is so much better for beginners. Worst case, they can run Docker as the containerized setup later on and migrate to it. Or they can do what I did: start with a single-server setup, move everything onto a few drives a few years later once comfortable with how it all works, nuke the main server, install Proxmox, and hate life for 2 or 3 weeks while learning how it works.
Do I regret that change? No way in hell. But there's also no way I would recommend a fully compartmentalized or containerized setup to someone just starting out. It adds so many layers of complexity.
Pika@sh.itjust.works to Selfhosted@lemmy.world • Logitech will brick its $100 Pop smart home buttons on October 15 - Ars Technica (English)
70 · 3 months ago
15% off a Logitech device purchase in exchange for the complete removal of a $100 smart switch? That's a slap in the face: "Thanks for being a customer, here's a coupon you can only use if you continue being a customer."
Pika@sh.itjust.works to Fediverse@lemmy.world • I made a Firefox fork with Fediverse integration - Now with standalone FF extension. (English)
2 · 5 months ago
Woah, you separated it already? That's insane. Defo checking it out! Cheers!
Pika@sh.itjust.works to Fediverse@lemmy.world • I made a Firefox fork with Fediverse integration - Now with standalone FF extension. (English)
7 · 5 months ago
Honestly, this is a really innovative project. I wish it came as an extension, because I feel that's likely your biggest bottleneck for getting people to try it. I don't think many people are going to build a browser from source and then port all their stuff over strictly for the integration. Plus, the integration looks like the primary selling point, but the fork also disables a lot of the QoL features FF has that some people have no problem with. The fact that Sync is removed entirely is a major dealbreaker for me, as I do like the feature and I'm not concerned about the privacy aspects of having it on.
If an extension version of the Lemmy integration ever releases, though, I would for sure be looking at that!

I've never rebuilt a container, but I also don't have any containers in deprecated status. I swap to alternatives when a project hits deprecation or abandonware status.
My only deprecated container right now is Filebrowser. I'm still seeking alternatives, and have been for a while now, but strangely enough there don't seem to be many web-UI file management containers.
That said, ever since I learned that the project was abandoned, or rather on life support (the maintainer has said they are doing security patches only, and while they are doing more on the project currently, that could change), the container stays off; I only activate it when I need to use it.