• 0 Posts
  • 7 Comments
Joined 2 years ago
Cake day: July 1st, 2023

  • I’m far from an expert, sorry, but my experience so far has been good (literally configured with the wizard in Proxmox, set and forget), even through losing a single disk. Performance for VM disks was great.

    I can’t see why regular file storage would be any different.

    I have 3 disks, one on each host, with Ceph keeping 2 copies (tolerant to 1 disk loss) distributed across them. That’s practically what I think you’re after (see the sketch at the end of this comment).

    I’m not sure about seeing the file system while all the hosts are offline, but if you’ve got any one system with a valid copy online you should be able to see it. I can. But my emphasis is generally on getting the host back online.

    I’m not 100% sure what you’re trying to do, but a mix of Ceph as remote storage plus something like Syncthing on an endpoint to send stuff to it might work? Syncthing might even work on its own, without Ceph.

    I also run ZFS on an 8-disk NAS that’s my primary storage, with shares for my Docker containers to send stuff to and my media server to pull it from. That’s just TrueNAS SCALE, and it handles data similarly. ZFS is also very good, but until SCALE came out it wasn’t really possible to do the “add a compute node to expand your storage pool” approach, which is how I want my VM hosts. Scaling ZFS that way looks much harder than Ceph.

    Not sure if any of that is helpful for your case, but if you’ve got spare hardware I recommend trying something, seeing how it goes on dummy data, then blowing it away and trying something else. See how it behaves when you take a machine offline. When you know what you want, do a final wipe and implement it the way you learned works best.
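
    For what it’s worth, here’s a minimal sketch of the kind of 2-copy pool I mean, written as the Ceph CLI calls wrapped in Python. The pool name `vm-pool` and the PG count are placeholders I made up, and in practice the Proxmox wizard sets all of this up for you:

    ```python
    #!/usr/bin/env python3
    """Hypothetical sketch: a 2-replica Ceph pool for a 3-host cluster with
    one OSD per host, tolerant to losing a single disk. Names and numbers
    are illustrative only."""
    import subprocess

    def ceph(*args):
        # Run a ceph CLI command and fail loudly if it errors.
        subprocess.run(["ceph", *args], check=True)

    # Create the pool (128 placement groups is just a placeholder value).
    ceph("osd", "pool", "create", "vm-pool", "128")

    # Keep 2 copies of every object; the default CRUSH rule places them on
    # different hosts, so losing one disk/host still leaves a valid copy.
    ceph("osd", "pool", "set", "vm-pool", "size", "2")

    # Allow I/O to continue (degraded) while only 1 copy is available.
    ceph("osd", "pool", "set", "vm-pool", "min_size", "1")
    ```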


  • 3x Intel NUC 6th-gen i5 (2 cores), 32 GB RAM each. Proxmox cluster with Ceph.

    I just ignored the official limit and tried a single 32 GB SODIMM once (out of a laptop) and it worked fine, but I went back to 2x 16 GB DIMMs since the bottleneck was still the 2-core CPU. Lol.

    Been running that cluster for 7 or so years now, since I bought them new.

    I’d happily suggest running off shit-tier hardware like this, since three nodes gives you redundancy and enough performance. I’ve run entire proofs of concept for clients off them: dual domain controllers plus RD Gateway, connection broker, session hosts, FSLogix, etc., back when Microsoft had only just bought that tech. Meanwhile my home “arr” stack just plugs along in Docker containers. Even my OPNsense router runs as a VM on them. Just get a proper managed switch and bring the internet in on its own VLAN to the guest VM on a separate virtual NIC (see the sketch at the end of this comment).

    Point is, it’s still capable today.
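
    Since I mentioned the VLAN trick, here’s a minimal sketch of the idea using Proxmox’s `qm` CLI from Python; the VM ID, bridge name and VLAN tag are placeholders for illustration, not my actual config:

    ```python
    #!/usr/bin/env python3
    """Hypothetical sketch: give an OPNsense guest on Proxmox a second
    virtual NIC tagged with the VLAN the managed switch delivers the
    internet on. VM ID 100, bridge vmbr0 and tag 10 are made-up values."""
    import subprocess

    VMID = "100"         # the OPNsense guest
    WAN_VLAN_TAG = "10"  # VLAN carrying the internet uplink on the switch

    # Attach net1 as a virtio NIC on the main bridge, tagged for the WAN
    # VLAN, so the router VM gets the raw uplink on its own interface.
    subprocess.run(
        ["qm", "set", VMID, "--net1", f"virtio,bridge=vmbr0,tag={WAN_VLAN_TAG}"],
        check=True,
    )
    ```

    OPNsense then just treats that virtual NIC as its WAN interface, and everything else stays on the normal LAN bridge.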


  • The messaging around this so far doesn’t make me want to follow the fork in production. As a sysadmin, I’m not rushing out to swap my reverse proxy.

    The problem (and I’m speculating here) is that it seems the developer was only willing to keep developing on the condition that they retained control over nginx’s decision making.

    So currently, as a user of nginx, it looks like the CVE registration is protecting me through open communication. From a security angle, a researcher probably needs that CVE for the find to count towards a bug bounty.

    From the developer’s perspective, F5 broke the pact that decision control stayed with the developer. But for me, I’d rather the issue be registered and be informed, even if I know my configuration doesn’t use the affected feature.

    Again, I’m assuming a lot here. But I agree with F5: that feature, even in beta, could be running in someone’s dev or test environment. That’s enough reason to want to know.

    Edit: Long term, I don’t know where I’ll land. Personally I’d rather side with the developer, except I need to trust that the solution is open not just in source but in communication. It’s a weird situation.




  • I’m just going to give you props. I’ve worked in managed IT services for a dozen years, and some of the worst clients are construction, engineering and architecture firms using SolidWorks, Autodesk and Archicad products.

    You’ve eaten humble pie and admitted that using computers as a tool and doing systems design are different things, and that even though you might understand a lot (just like I can build a 3D model), the devil is in the detail.

    Building robust solutions that meet your business continuity and disaster recovery plans, secure your data against cyber risk, satisfy ISO requirements, and are still somehow usable in an end user’s workflow is not something you just pick up as a hobby and implement.

    The way I handle the technology lifecycle is in 5 steps: strategy, plan, implement, support, maintain. Each part has distinct requirements and considerations. It’s all well and good to implement something, but you need support when it goes wrong or misbehaves, and you need monitoring and reporting for backups, patching and system alerts. Lots of people might do the implement step, but consider the whole lifecycle of the solution.

    People do these things at home, but that’s home labbing: they’re labs. Production requires more.

    Anyway a bunch of people closer to your part of the world will probably help you out here.

    I just want to again recognise and compliment you for realising and openly saying you want help, rather than doing the usual “oh, I know best” that I hear over and over, usually just before someone gets ransomed through their never-patched, Log4j-using, Heartbleed-vulnerable, publicly exposed server infrastructure.