Pet detection is sorta on the roadmap for 2025… I couldn’t be happier.
+1 for Immich. If I didn’t already know I’d be doing photo backups, it would have been my entry for “things I didn’t know I needed”.
Raspberry Pi 4, Docker: gluetun (qBittorrent, Prowlarr, FlareSolverr), Tailscale (Jellyfin, Jellyseerr, Mealie), Radarr/Readarr/Sonarr, Pi-hole, Unbound, Portainer, Watchtower.
Raspberry Pi 3, Docker: Pi-hole, Unbound, Portainer.
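For the curious: the bracketed services don’t get their own network, they ride gluetun’s. A rough sketch of the pattern with the docker CLI; names are placeholders and the VPN provider settings are omitted:

    # gluetun owns the network stack and publishes everyone's ports
    docker run -d --name gluetun --cap-add NET_ADMIN \
      -p 8080:8080 ghcr.io/qdm12/gluetun
    # qBittorrent joins gluetun's network namespace instead of its own
    docker run -d --name qbittorrent \
      --network container:gluetun \
      lscr.io/linuxserver/qbittorrent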
My server is full of bind mounts. Too many bind mounts. They cause a host of permissions issues, if I’m honest. There wasn’t a storage problem I didn’t solve with bind mounts. Except this one; this one I decided had to interact over SMB or some shit.
I remember trying to solve it with bind mounts before. I couldn’t figure out why it wasn’t saving to /mnt/important/paperless/… I think when I get to /originals it’s going to look like ./originals/mnt/important/paperless/… somewhere it’s going to look like that. Urgh.
Thank you. With that problem solved Paperless is, currently, perfect for my needs.
Thank you. Setting it up seems less daunting now. I’m going to try setting up emails.
The Android app is fairly feature-complete, and I only interact through my phone or tablet. In fact, for desktop tasks I have a Linux Mint VM I just console into from my tablet, a sort of pseudo-laptop.
In any case, for manually uploading files my phone is probably easier. But your advice is good for everybody that’s not me, i.e. sensible people.
Your comment about bind mounts might have solved my biggest problem with Paperless: it doesn’t write to my 3-2-1 backup folder directly, so I end up 3-2-1ing the whole machine. Which is fine, but I keep multiple snapshots of my LXCs, so it’s multiples of multiples.
/zpool/important/paperless:/usr/src/paperless/originals
Specific file paths aside, would [path to zpool]:[path to originals] have Paperless saving the originals to my zpool, so I would only have 3 copies instead of 3 × the number of snapshots?
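Not your exact setup (you’re in an LXC via the helper script, not Docker), but that’s the shape of it: with a bind mount, the left-hand host path is where the files actually live. A sketch against the paperless-ngx Docker image, assuming its default media directory; Redis, the database, and the rest are omitted because this only illustrates the volume mapping:

    # USERMAP_* should match whoever owns the zpool path on the host,
    # which is where bind-mount permission headaches usually start
    docker run -d --name paperless \
      -e USERMAP_UID=1000 -e USERMAP_GID=1000 \
      -v /zpool/important/paperless:/usr/src/paperless/media \
      ghcr.io/paperless-ngx/paperless-ngx

If I have the layout right, originals then land under documents/originals inside that host path, and your 3-2-1 job only ever sees that one copy instead of every snapshot.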
You’re right, I don’t take advantage of any of these features. I should.
Partly because of a lack of know-how on my part: I don’t trust myself to successfully have it log into my email, grab what it needs, and leave everything else untouched. My main uploads, payslips and bank statements, are behind their own apps too.
Partly because Paperless is isolated in its own little container (in my setup at least), so access to the consume folder is behind another step. I could Syncthing it… I just haven’t.
And partly because I use the Android app as my main interaction with Paperless. The app uses my phone as a good-enough scanner.
I do not know. I don’t believe you can provide a share link for a whole tag, just individual documents. I’m not seeing an obvious way of exporting a tag either.
You could run Paperless in parallel and Syncthing your files into its “consume” folder.
Sure,
I used TTeck’s helper script to install Paperless as an LXC. I then use Proxmox’s built-in backup schedule to grab snapshots of that LXC, and others; I usually keep 1 “nightly” and 1 “monthly” right now.
Syncthing, another LXC (thank you, tteck), has access to the backup folder. It is synced with an RPi 4 pulling double duty as my redundant DNS, all installed using Docker. The Pi 4 install is synced with my Proxmox host and an off-site box at my parents’ house, through Tailscale.
There are better systems, like Borg and what not, but this one is mine.
I have an “important” share on my NAS that is also synced 3-2-1. It would be better if Paperless saved to my NAS directly; then I’d only have 3 copies. Right now I have 6: 1 nightly and 1 monthly spread across 3 machines, not counting RAID, because the “b” in “RAID” stands for backup.
My oh-shit plan: grab a backup file, rebuild the LXC from that snapshot, access my PDFs.
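For what it’s worth, the rebuild step is a one-liner on the Proxmox host; the VMID and dump filename below are placeholders:

    # restore the LXC from a vzdump backup (pct is Proxmox's container
    # tool); add --storage if it shouldn't go to the default storage
    pct restore 105 /var/lib/vz/dump/vzdump-lxc-105-2025_01_01-03_00_00.tar.zst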
I keep the once-in-a-lifetime stuff: birth certificate, the paper counterpart to my driver’s license, etc. They’re still backed up. But for the day-to-day communications I’m supposed to keep, 5 years of financials, tenancy agreements, etc., my old filing system was “throw them in a box if I remember, and find them never; or try not to delete the email they’re attached to”. Now I have a glimmer of hope.
Couldn’t tell you, sorry. I have Paperless in its own LXC (helper script), which I 3-2-1 as a machine. Many duplicates, but they’re only PDFs.
I can tell you I spent a small amount of time trying (and failing) to get Paperless to save its files onto my NAS. I can also tell you that if I stretch up really tall, I can just about scrape rock bottom when it comes to skills in this stuff.
Paperless - payslips, bank statements, MOT records, insurance policies, user manuals, restaurant menus. All filed and searchable. Letters I get are photographed, uploaded, and immediately disposed of. Zero stress.
OP, I was you 12 months ago. +1 for installing Proxmox. The ability to make mistakes in an LXC and always have the nightly backup right there was worth it alone. Helper scripts get you close to where you want to go, fast. As for guides, there are a bunch; Raid Owl and TechnoTim both have initial Proxmox setup guides. There are many like them, those are just the two I remember.
It might just be me, but I struggled with every step of every guide I followed, mostly because I’d skip straight to copy-pasting the commands… Don’t do that. ChatGPT: plug the command in there and start quizzing it: “what does this do, what are the flags doing, I want to do x, will this command work?”. Then don’t copy ChatGPT either; take its output back to the documentation and make sure it makes sense. Then take a snapshot. Then paste the thing. It at least forced me to slow down.
In the beginning I spent about a month, just on a Pi, getting a Pi-hole and a servarr stack installed and configured. Then I nuked it and rebuilt in a couple of weeks. Then I messed up again and rebuilt in a couple of days. I dedicate 1 hour to trying to fix whatever I broke, using ChatGPT as mentor/rubber duck; if I can’t make progress on a fix in that time, I load the snapshot. Troubleshooting is a great skill; however, everything you need gets installed at least once, so get good at installing things. Backups need testing and you should be familiar with the process, so get good at recovering from backups. ChatGPT solves most of the surface-level problems. You’ll get to a point where you’re stuck and ChatGPT won’t be any help either, but let it get you there quickly.
I genuinely prefer Dockge to Portainer, but learn Portainer. As a rule, learn the industry standard, then migrate. There are tonnes of articles and resources for Portainer, and almost everyone using Dockge can help you with Portainer, not the other way around. The only exception is when the non-industry-standard tool is specifically made to solve problems you have with the industry standard; I went with Nginx Proxy Manager over Nginx, for example. GUIs are nice and I can see things working, unlike pasting a massive config and hoping. Now I have huge compose.yaml stacks for Docker where I used to install containers one by one in Portainer.
Security is hard. Outsource all you can. Your ISP router’s firewall is perfectly serviceable; don’t punch holes in it (for now). Tailscale is perfectly serviceable; don’t try to make your own tunnels (for now). One of my earliest posts was me installing a firewall on my Pi, separate from my router, and then going into a blind panic about punching holes in my firewall. Funny to look back on; my ISP firewall is still completely intact, I picked a different path.
Each iteration, add one layer of complexity and take easy wins for everything else. I set up Pi-hole bare metal, messed up the Unbound install, go again. I used docker starter to set up Pi-hole + Unbound, messed up [something]… go again… Prioritise “working” over “perfect”. You don’t know what perfect is anyway. I don’t know what perfect is, but just getting something working teaches me what would be better next go around. If what you did is “wrong”, it’s going to break sooner rather than later, so you get to go again. If what you did works forever, be happy and enjoy the thing you built.
Oh, I forgot: no big updates right before bed, before a big event, or when you’re out of the house. I once had an auto-updater [Watchtower] go off and delete my access to the internet [Pi-hole] before downloading the new image, on my fiancée’s first day off, and while I was at work. I learned a lot that day about redundancy for infrastructure essential to Facebook, and rightly so. If you can’t or won’t want to fix broken things right then, don’t be doing stuff that might break things.
All good info. Thank you kindly.
I did think about cron but, long ago, I heard it wasn’t best practice to update through cron because the lack of logging makes it difficult to see where things went wrong, when they do.
I’ve got automatic upgrades running on stuff, so it’s mostly fine. Dockge is running purely to give me a way to upgrade Docker images without having to SSH in. It’s just the monthly routine of “apt update && apt upgrade -y” ×5 that sucks.
Thank you for the advice though. I’ll probably set cron to update the images with the script as you suggest. I have a “maintenance” Homarr page as a budget Uptime Kuma, so I can quickly look there and make sure everything is pinging at least. I made the page so I could quickly get to everyone’s Dockge, Pi-hole and Nginx, but the pings were a happy accident.
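For the record, cron’s no-logs problem is mostly self-inflicted: it will happily keep logs if you redirect output. A minimal sketch, with a hypothetical script path:

    # m h dom mon dow: 04:00 on the 1st of each month, appending
    # stdout and stderr somewhere you can actually read later
    0 4 1 * * /opt/scripts/update-images.sh >> /var/log/update-images.log 2>&1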
On my home network I have Nginx Proxy Manager running Let’s Encrypt with my domain for HTTPS, currently only for Vaultwarden (I’m testing it for a bit before rolling it out, or migrating wholly over to HTTPS). My domain is a ######.xyz that’s cheap.
For remote access I use Tailscale. For friends and family I give them a relay [a Raspberry Pi with Nginx that proxies them over Tailscale] that sits on their home network; that way they need “something they have” [the relay] and “something they know” [login credentials] to get at my stuff. I won’t implement biometrics for “something they are”. This is post-hoc justification though, and nonsense to boot. I don’t want to expose a port, a VPS has low WAF, and I’m not installing Tailscale on all of their devices, so a relay is an unhappy compromise.
For bonus points I run Pi-hole to pretty up the domain names to service.swirl, and run a Homarr instance so no one needs to remember anything except home.swirl; but if they do remember immich.swirl, that works too.
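In case anyone wants to copy the .swirl trick: on Pi-hole v5 those pretty names are just local DNS records, one “IP hostname” pair per line. The address below is a placeholder:

    echo "192.168.1.10 home.swirl"   | sudo tee -a /etc/pihole/custom.list
    echo "192.168.1.10 immich.swirl" | sudo tee -a /etc/pihole/custom.list
    pihole restartdns    # reload so the new records take effect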
If there are many ways to skin a cat, I believe I chose to use a spoon; don’t be like me. Updating each Dockge instance is a couple of minutes, and updating DietPi is a few minutes more, which, individually, is not a lot on my weekly/monthly maintenance respectively. But in aggregate… I have checklists. One day I’ll write a script that will SSH into a machine > update/upgrade the OS > docker compose pull/rebuild/purge > move on to the next relay… That’ll be my impetus to learn how to write a script.
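The script version of that checklist is shorter than it feels. A sketch, assuming SSH keys are already in place; hostnames and the compose path are made up:

    #!/usr/bin/env bash
    set -euo pipefail
    # hypothetical Tailscale hostnames for each relay/box
    hosts=(relay-one relay-two pi4)
    for h in "${hosts[@]}"; do
      echo "== $h =="
      # update/upgrade the OS first…
      ssh "$h" 'sudo apt update && sudo apt upgrade -y'
      # …then pull new images, rebuild, and purge the old ones
      ssh "$h" 'cd ~/stacks && docker compose pull \
        && docker compose up -d && docker image prune -f'
    done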
Momentum, really. I’m on NPM now; it works and it’s great. I didn’t put much thought into it. I’m generally happy with NPM; it’s mostly that I wanted something to learn next, and plain Nginx made sense.
AI doesn’t need to be “hallucinationless” to be useful. It just needs to make fewer mistakes than the average creator. Which isn’t that high a bar.
Get a domain and set about moving over to HTTPS with Let’s Encrypt and Nginx (see the certbot sketch after this list).
Learn to write an Nginx config. NPM just works so well, though.
Fix my permission issues. I have my media zpool on 777 so all the LXCs work, and I have to run Libation in a VM as root. I’ve been banging my head against this on and off for a while.
Figure out why Paperless isn’t saving to the correct place. Also, figure out where Paperless is saving to.
Containerise Libation.
I give friends and family access to my server via a relay, just a Raspberry Pi Zero with Tailscale, Pi-hole and Nginx on it. I have reasons for going this route. Anyway: get a couple more of those into the wild, and streamline the process somewhat.
Learn to write an ACL config for Tailscale so that services can access nothing, users can access services, and admins can access everything.
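The certbot sketch promised above, assuming a Debian-ish host, the Nginx plugin, and a hypothetical vaultwarden.example.xyz that’s reachable on port 80 for the challenge:

    sudo apt install certbot python3-certbot-nginx
    # fetches a certificate and edits the matching nginx server block
    sudo certbot --nginx -d vaultwarden.example.xyz
    # certbot installs auto-renewal; confirm it works without waiting
    sudo certbot renew --dry-run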
On mobile so you’ll have to forgive format jank.
It depends how each image handles ports. If C1 has its ports set up as 1234:100 and C2 has its ports set up as 1234:500, then:
    services:
      gluetun:
        ports:
          - 1234:100 # c1
          - 1235:500 # c2
        […]

will resolve the conflict.
Sometimes an image will let you change its internal port with an environment variable, so:
    services:
      gluetun:
        ports:
          - 1234:1000 # c1
          - 1235:1234 # c2
      c1:
        environment:
          - UI_PORT=1000
        […]
When both containers use the same container-side number, C1: 1234:80, C2: 1235:80, and neither’s documentation suggests how to change that port, I personally haven’t found a way to resolve the conflict: behind gluetun they share one network namespace, so both can’t listen on 80 internally.
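One small aid when the docs are silent: you can at least see what a running container thinks it exposes. Standard Docker CLI, with c1 as the placeholder name from above:

    docker port c1   # live host:container mappings (behind gluetun,
                     # run this against the gluetun container instead)
    docker inspect --format '{{json .Config.ExposedPorts}}' c1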
A mini PC, a Raspberry Pi 4, 3× USB HDDs (2× 8 TB mirrored and a 1 TB for local backup), some Netgear router, and a whole lot of spaghetti.
That’s a shame. TTeck pretty much built my Homelab.
If you’d permit a short quiz: ntfy is really interesting to me. I would like to send general server updates but didn’t know how to ensure users, just family and friends, get them. I think ntfy could solve that problem; right now I just text people and maintain a BookStack document.
I would also like to send user-specific notifications though. For example: a user requests a show through Jellyseerr, the admin legally obtains said show and uploads it to Jellyfin, and the user then gets a notification that the show they requested is available.
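From what I can tell, the ntfy half is plain HTTP, which would make per-user notifications easy: one topic per person. The server URL and topic names below are made up:

    # broadcast topic everyone subscribes to
    curl -H "Title: Server maintenance tonight" \
         -d "Jellyfin will be down 22:00-23:00" \
         https://ntfy.example.xyz/announcements
    # per-user topic: only Alice's phone subscribes to this one
    curl -d "Your requested show is now on Jellyfin" \
         https://ntfy.example.xyz/requests-alice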