this might be my next project. I need uptime management for my services, my VPN likes to randomly kill itself.
Just your normal everyday casual software dev. Nothing to see here.
People can share differing opinions without immediately being on opposite sides. Avoid looking at things as black and white. You can like both waffles and pancakes, just like you can hate both waffles and pancakes.


I haven’t used a guide aside from the official getting started with syncthing page.
It should be similar to these steps though, I’ll use your desktop as the origin device.
Some things you may want to keep in mind: Syncthing only syncs when two or more devices are online at the same time. If you're getting into self-hosting a server, I'd recommend having the server be the middleman. If you end up going that route, these steps stay more or less the same; it's just that instead of sharing with the phone, you share with the server, then go to the server's Syncthing page and share with the mobile device. That way both devices use the server instead of trying to connect to each other.

Additionally, if you do go that route, I recommend setting your remote devices on the server's Syncthing instance to "auto accept". That way, when you share a folder to the server from one of your devices, it automatically accepts it and creates a share, using the name of the shared folder, in Syncthing's data directory. (For example, if your folder was named "documents" and you shared it to the server, it would create a share named "documents" wherever you have it configured to store data.) You'd still need to log in to the server instance if you wanted to share those files on to *another* device, but if your intent was only to back a folder up to the server, it removes a step.
Another benefit of the server-middleman approach is that if you ever have to change a device later on down the road, you only have to add one remote device to the server instance, instead of having to add your new device to every Syncthing instance that needs access to it.
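For reference, the auto-accept setting mentioned above is per remote device. In the Web GUI it's a checkbox on the device's settings, and in config.xml it looks roughly like this (the device ID and name are placeholders; it's usually easier to just toggle it in the GUI):

```xml
<!-- Sketch of a remote-device entry in Syncthing's config.xml.
     ID and name here are made up; toggle "Auto Accept" in the Web GUI
     (Edit Device) rather than hand-editing if possible. -->
<device id="DEVICE-ID-HERE" name="my-laptop">
    <autoAcceptFolders>true</autoAcceptFolders>
</device>
```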
Additionally, if you already have this set up but it doesn't seem to be working, some standard troubleshooting steps I've found helpful:


KeePass is great for password management; I use it as well. I also use Syncthing to sync my password database across all devices, with the server acting as the "always on" device so I have access to all passwords at all times. It works amazingly well because Syncthing can also be set up so that when a file is modified by another device, it makes a backup of the original file and moves it to a dedicated folder (with retention settings so old copies get cleaned up every so often). Life is so much easier.
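The backup-on-modify behavior I mean is Syncthing's File Versioning, set per folder. The "trash can" flavor with a retention period looks roughly like this in config.xml (folder id, path, and the 30-day value are just examples; it's normally set in the Web GUI under the folder's File Versioning tab):

```xml
<!-- Sketch: "trash can" file versioning on a Syncthing folder, keeping
     replaced/deleted files in .stversions for 30 days. Placeholder values. -->
<folder id="passwords" path="/data/passwords">
    <versioning type="trashcan">
        <param key="cleanoutDays" val="30"/>
    </versioning>
</folder>
```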
For photo access you can look into Immich. It's a little more of an advanced setup, but I have Immich looking at my photos folder in Syncthing on the server and using that location as the source. This lets me use one directory for both photo hosting and backup/sync.
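If it helps, the way I'd sketch that wiring: mount the Syncthing photo directory into the Immich server container, then add that path as an External Library in the Immich admin UI. Paths here are made up for illustration:

```yaml
# docker-compose sketch; the host path is hypothetical. Mounted read-only
# so Immich indexes the photos without touching the synced files.
services:
  immich-server:
    volumes:
      - /srv/syncthing/photos:/mnt/media/photos:ro
```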


I hard agree with this. I would NEVER have wanted to start with a containerized setup. I know how I am; I would have given up before I made it past the second LXC. Starting with one generalized server that does everything and learning as you go is so much better for beginners. Worst case, they can adopt Docker later as the containerized setup and migrate to it. Or they can do what I did: start with a single-server setup, move everything onto a few drives a few years later once comfortable with it, nuke the main server, install Proxmox, and hate life for 2 or 3 weeks while learning how it works.
Do I regret that change? No way in hell, but there's also no way I would recommend a fully compartmentalized or containerized setup to someone just starting out. It adds so many layers of complexity.


15% off a Logitech device purchase as compensation for the complete removal of a $100 smart switch? That's a slap in the face: "Thanks for being a customer, here's a coupon you can only use if you continue being a customer."


Woah, you separated it already? That's insane. Defo checking it out! Cheers!


Honestly, this is a really innovative project. I wish it came as an extension, because I feel that's likely your biggest bottleneck for getting people to try it. I don't think many are going to build a browser from source and then port all their stuff over strictly for the integration. Plus it looks like the primary selling point is that integration, but it also disables a lot of the QoL features of Firefox that some of us have no problem with. The fact that Sync is removed entirely is a major dealbreaker for me, as I like the feature and am not concerned about the privacy aspects of having it on.
If an extension version ever releases for the lemmy integration though, I would for sure be looking at that!
I think my only real complaint about the deployment of this is from a security standpoint. The password for the GitLab Runner container is hardcoded as "changeme", and when it's run from an automated script like this, the script itself never makes the user aware of that. The script mentions that you should move credentials.txt, but it never mentions the hardcoded password.
It would be nice if it prompted for a password, or used a randomly generated one instead of that hardcoded value.
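Something along these lines would fix it; the variable name and the credentials.txt handling are just assumptions about how the script might be structured:

```shell
# Sketch: generate a random password instead of the hardcoded "changeme".
# GITLAB_RUNNER_PASSWORD is a made-up variable name, not from the actual script.
GITLAB_RUNNER_PASSWORD="$(openssl rand -base64 24)"
# Record it alongside the other generated credentials the script already writes out.
echo "gitlab-runner password: ${GITLAB_RUNNER_PASSWORD}" >> credentials.txt
```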


Sadly, federation over the Tor network doesn't currently seem to be supported, at least for the Lemmy project. The instance itself is reachable over the network, but actual updates/responses go over the open web.


I synced Immich to Authentik post-deployment with no issue, but I believe my email matched. I don't recall if I had to configure my user account on top of the OAuth settings or not; I believe it was smart enough to link the account via the matching email.
If you're using a VM-style deployment, you could take a snapshot of the Immich server ahead of time and just roll back if it fails. That's what I do for all services when changing stuff.
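On Proxmox, for example, that snapshot/rollback cycle is just two commands; the VMID and snapshot name here are made up:

```shell
# Hypothetical VMID 105 for the Immich VM; adjust to yours.
qm snapshot 105 pre-authentik
# ...try the Authentik/OAuth change; if it breaks:
qm rollback 105 pre-authentik
```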
I'm in this same boat as well. As someone who ran an XMPP server in the past, then stopped and eventually moved on to Matrix, I have to hard agree: in my experience, XMPP was so much better on the administration side than Matrix, and it's quite a bit more fleshed out (not to mention the sheer number of clients available). Being able to just log into a management panel and have it handle all the administration for me was super nice, instead of having to ask "is this only available via the API, or via a client, or is this config-only?" Those kinds of tools, from what I've seen, don't really exist for Matrix.


I defo agree. Keep the domain for a few years with the email server still up, but flag any emails coming through it so you can go through and unsubscribe/change emails on anything still using the old address.


I agree. I set my grandparents' doors up on a timer: if they're still open at 11 PM, it auto-closes both doors. I've gotten the ping a few times now saying "emergency door schedule activated", meaning they were open and hadn't been closed beforehand.
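Something like that is straightforward to express in Home Assistant, if that's the stack being used; all entity IDs and the notify service here are placeholders:

```yaml
# Sketch of a Home Assistant automation: at 23:00, close any door still
# open and send the "emergency door schedule" ping. Entity IDs are made up.
automation:
  - alias: "Emergency door schedule"
    trigger:
      - platform: time
        at: "23:00:00"
    condition:
      - condition: state
        entity_id: cover.garage_door
        state: "open"
    action:
      - service: cover.close_cover
        target:
          entity_id: cover.garage_door
      - service: notify.mobile_app
        data:
          message: "Emergency door schedule activated"
```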


I saw this the other day as well when I was looking at Filebrowser's GitHub to see if it had SSO support. It's a shame really.


Chiming in: even setting self-hosting aside, I wouldn't recommend Wix. Their sites are so bloated they take forever for me to load, and I've had Firefox straight up refuse to load Wix-run pages before.


I vet lesser-known projects, but yeah, I do end up taking credibility for granted with larger projects. I assume that on those projects, the maintainer team with merge access is doing that vetting before they accept a pull request.


I'm fully okay with them doing so, but they have to disclose that they're doing so. The fact that the review didn't disclose that the author was an employee is very sketchy to me.
I was curious, but good old autosuggest scared me away. It's concerning when "Hestia exploits" is one of the suggestions. I looked into it and saw a few hits, but I didn't investigate them; I stopped looking there.
Fully agree, but after an event of the magnitude of what that CEO did, it's going to be held over their head for years to come. The easiest way to clear the air is to stop the constant engagement that's encouraging it, and Mastodon was a pretty large source of that.
The implication of that is weird to me. I'm not saying the horse is wrong, but that's such a non-standard solution: it's implementing a CGNAT-style restriction without the benefits of CGNAT. They would need to only allow internal-to-external connections unless the connection was already established. How does standard communication still function if it works that way? I'd expect it to break protocols like basic UDP, since that's fire-and-forget without internal prompting.
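For what it's worth, that described behavior is essentially just a stateful firewall, and it doesn't actually break UDP: connection tracking creates a pseudo-connection when a UDP packet goes out, so the reply coming back counts as "established" and is let through. An nftables sketch of the idea (table/chain names are arbitrary):

```
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        # replies to traffic we initiated (TCP, or conntrack'd UDP flows) are allowed
        ct state established,related accept
        ct state invalid drop
    }
}
```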