This comment taught me more about PSUs and UPSs than my entire experience in IT, and in a very concise way. Good one.
What’s just HHD then?
The heck is HHD+? Is this some newfangled storage tech I’m too SSD to understand?
Yeah, I see. I don’t know if I can help, as I’ve only used Caddy outside of Podman, on a separate machine, pointing back to my services.
Please confirm for me: on the containerized services, does client traffic look like the proxy is the source?
I haven’t had that issue with Caddy before, but maybe I’m using some particular config that makes sure it always passes the client IP.
Some services also need a setting to “know” they are behind a proxy and should look for the client address in headers like X-Forwarded-For.
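For what it’s worth, Caddy’s reverse_proxy passes the client IP in X-Forwarded-For by default, so a bare config is often enough. A minimal sketch (the hostname and upstream address here are placeholders, not anyone’s actual setup):

```caddyfile
# hypothetical example - replace the hostname and upstream with your own
service.example.com {
    # reverse_proxy forwards the client IP in X-Forwarded-For by default
    reverse_proxy 127.0.0.1:8080
}
```

If the service still sees the proxy as the source, the usual suspect is the service itself not being configured to trust that header.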
Can’t argue with that.
“Just google it” is unfortunate shorthand for “learn it by doing research and troubleshooting”, a skill that’s sadly very scarce. I agree it’s toxic and unhelpful. Guiding people to get better at finding information on their own is the way.
Can confirm, gitlab has a container registry built in, at least in the omnibus package installation.
I think you can use Grafana to present widgets from different dashboards in one.
I use a 2016 Asus Zenbook with an integrated Intel GPU.
The performance is comparable. The only real difference is latency, obviously, although it’s fairly negligible on LAN, and encoding/decoding sometimes creates artifacts and smudges, but it’s better at higher bandwidth.
My box sits in my closet, so I can’t really help much with Docker or VMs. But I use the Sunshine server with the Moonlight client. Keep in mind you can’t fight latency that comes from the distance between server and client. I can use 4/5G for turn-based or active-pause games, but wouldn’t try anything real-time. On cable my ping is under a millisecond, enough to play shooters as badly as I do these days.
I use AMD for both CPU and GPU, and wouldn’t try Nvidia if using Linux as the server.
I used to run a VM on XenServer/XCP-ng and pass through the GPU with a dummy HDMI plug. A Windows 10 VM ran very well, bar the pretty crap CPU, but I did get around 30 fps in 1080p Tarkov, sometimes more with AMD upscaling. Back then I was using Parsec, but I’ve found Sunshine and Moonlight work better for me.
I should also mention I never tried to support multiple users. You can probably play “local” multiplayer with both Parsec and Moonlight, but any setup that shares one GPU will require some proprietary vGPU fuckery, so the easiest option is a PC with multiple GPUs, assigning one to each VM directly.
I think this led me down the right path: https://community.ui.com/questions/Having-trouble-allowing-WOL-fowarding/5fa05081-125f-402b-a20c-ef1080e288d8#answer/5653fc4f-4d3a-4061-866c-f4c20f10d9b9
This is for EdgeRouter, which is what I use, but I suppose OPNsense can do this just as well.
Keep in mind: don’t use 1.1.1.1 for your forwarding address, use one in your LAN range, just outside of the DHCP pool, because this type of static entry will mess up connections to anything actually on that IP.
This is how it looks in my edge os config:
protocols {
    static {
        arp 10.0.40.114 {
            hwaddr ff:ff:ff:ff:ff:ff
        }
    }
}
10.0.40.114 is the address I use to forward WoL broadcast to.
Then I use an app called Wake On Lan on Android and set it up like this:
Hostname/IP/Broadcast address: 10.0.40.114
Device IP: [actual IP I want to wake up on the same VLAN/physical network]
WOL Port: 9
This works fine if you’re using the router as the gateway for both VPN and LAN, but it will get messy with masquerade and NAT - then you have to use port forwarding, I guess, and it should work from WAN.
I just wanted it to be over VPN to limit my exposure (even if WoL packets aren’t especially scary).
There is a trick you can do: send a WoL packet to a separate IP on the sender’s network and modify it so it is repeated on the network of the machine you want to wake up.
I can’t find docs on this on mobile, but I can look for it later.
It can’t work like typical IP packet routing, though. I’ve only made it work over a VPN connection.
Another thing you can do is ssh to your router and send a WoL packet from there on the machine’s LAN.
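The magic packet itself is trivial to build by hand if you’d rather script this than use an app. A minimal Python sketch, assuming the same broadcast address and port as the app setup above (the MAC address is whatever machine you want to wake):

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """A WoL magic packet is 6 bytes of 0xFF followed by the
    target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, addr: str = "10.0.40.114", port: int = 9) -> None:
    """Send the magic packet as a UDP datagram to the forwarding
    address; addr and port here mirror the setup described above."""
    packet = build_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (addr, port))
```

Usage is just `send_wol("aa:bb:cc:dd:ee:ff")`; the NIC only cares about the packet contents, so UDP port 9 is convention rather than requirement.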
It’s generic advice, but check out kompose - it can translate a docker-compose yml into a bunch of k8s objects, as far as it sensibly can.
Most issues come from setting up volumes, since docker has different expectations of the underlying filesystem.
It does save a bunch of work of rewriting everything by hand.
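The workflow is only a couple of commands. A sketch, assuming kompose is installed and a docker-compose.yml sits in the current directory:

```shell
# translate the compose file into Kubernetes manifests
# (one yaml per generated Deployment/Service/etc.)
kompose convert -f docker-compose.yml

# review the generated manifests - especially volumes - then apply
kubectl apply -f .
```

Treat the output as a starting point to edit, not something to apply blindly.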
If you don’t need external calls, a SIP trunk is not needed.
In a hobby it’s easy to get carried away into doing things according to “best practices” when it’s not really the point.
I’ve done a lot of redundant boilerplate stuff in my homelab, and I justify it with “learnding”. It’s mostly perfectionism I don’t have the time and energy for anymore.
If you’re the only user and just want it working without much fuss, use a single db instance and forget about it. Less to maintain leads to better maintenance, if performance isn’t more important.
It’s fairly straightforward to migrate a db to a new postgres instance, so you’re not shooting yourself in a future foot if you change your mind.
Use PGTune to get as much as you can out of it and adjust if circumstances change.
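PGTune’s output is just a handful of postgresql.conf settings. The values below are purely illustrative, for a hypothetical small box - generate your own from your actual RAM, CPU count, and workload:

```
# illustrative only - use PGTune's numbers for your hardware
shared_buffers = 1GB
effective_cache_size = 3GB
work_mem = 16MB
maintenance_work_mem = 256MB
```

A restart is needed for shared_buffers; most of the rest can be picked up with a reload.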
I had the budget to try a Xeon D SoC motherboard for a small ITX case. Put 64 GB of ECC RAM into it, though it could hold 128 GB. That server will be 8 years old this year. That particular Supermicro board was meant for some OEM router-like x86_64 appliance with 10G ports and remote management. I’m not sure if Intel or AMD have any CPUs in that segment anymore, but it’s very light on wattage when mostly idling/maintaining VMs.
One option I’m looking at is getting a dedicated Hetzner server; even the auction and lowest-grade ‘new’ offerings are pretty good for the price once you account for energy costs and upfront gear cost.
I think it depends. In my limited experience, because I have not tested this thoroughly, most systems pick the first DNS address and only send requests to the second if the first doesn’t respond.
This has led, at least a couple of times, to extremely long timeouts making me think the system was unresponsive, especially with things like Kerberos SSH login and such.
I personally set up my DHCP to provide Pi-hole as primary, and my off-site IPA master as secondary (so I still have internal split-brain DNS working in case the entire VM host goes down).
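On Linux clients that ordering comes down to /etc/resolv.conf, and the glibc resolver’s worst-case stall can be shortened with its options line. An illustrative fragment (both addresses are placeholders, not my actual servers):

```
# /etc/resolv.conf - example values only
nameserver 10.0.0.53    # pihole, tried first
nameserver 203.0.113.7  # off-site secondary, tried on timeout
options timeout:2 attempts:2
```

With the defaults the per-server timeout is noticeably longer, which matches the long stalls described above.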
Now I kinda want to test whether that off-site DNS gets any requests in normal use. Maybe that would explain some ad leaks on twitch.tv (likely Twitch just using the same hosts for video and ads, but who knows).
Edit: If that is indeed the case, I’m not looking forward to maintaining another pihole offsite. Ehhh.
Most commands are the same. They recommend just aliasing docker to podman so you can keep using your old commands.
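The alias in question is a one-liner in your shell rc file (assuming podman is on your PATH):

```shell
# e.g. in ~/.bashrc - existing `docker ...` commands run podman instead
alias docker=podman
```

Scripts that call the docker binary directly won’t see the alias; for those there’s usually a podman-docker package that ships a docker shim.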