

Indexers and downloaders are distinct for newsgroups.
Public indexers are no longer good for Linux ISOs; you need a paid service now. They’re cheap and well worth it. Easynews and NZBGeek are good ones.


Cgroups is not really a security feature (from what I understand). It is about controlling process priority, hierarchy, and resource limiting (among other things).
With respect, I think you misunderstand what gVisor does, and containerization in general. Namespaces together with cgroups v2 form the isolation mechanism used by most modern Linux containers, including both Docker and LXC. It is similar to the jail concept in BSD, and loosely to chroot. It limits child process access to files, devices, and memory, and is the basis for how subprocesses are prevented from accessing host resources without permission to do so.
gVisor adds more layers of control on top of this by introducing a syscall control plane: container syscalls are intercepted by gVisor’s user-space kernel instead of reaching host kernel functions that might not be protected by cgroups v2 policy. This lessens the security risk of the host running a cutting-edge or custom kernel, with more predictable results, but it comes with caveats.
gVisor is not a universally “better” option, especially for a homelab, where workloads vary a lot. It carries an I/O performance penalty, is incompatible with SELinux, and its very strength can prevent containers from using newer syscalls available on a cutting-edge host kernel.
My original comment was that ultimately, there is no blanket answer for “how secure is my virtualization stack”, because such a decision should be made on a case-by-case basis. And any choice made by a homelabber or anyone else should involve some understanding of the differences between each type.
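To make the distinction between those layers concrete, here is a rough sketch, not a definitive recipe. The group name, image, and paths are placeholders, and it assumes root, cgroup2 mounted at /sys/fs/cgroup, and gVisor already registered as a Docker runtime:

```shell
# Layer 1: cgroup v2 resource limiting (assumes the memory controller is
# enabled in the parent's cgroup.subtree_control)
mkdir /sys/fs/cgroup/demo
echo 512M > /sys/fs/cgroup/demo/memory.max   # hard memory cap for the group
echo $$ > /sys/fs/cgroup/demo/cgroup.procs   # move this shell into the group

# Layer 2: syscall interposition via gVisor
# The container's syscalls are handled by gVisor's user-space kernel,
# not passed straight to the host kernel.
docker run --rm --runtime=runsc alpine uname -r
```

The `uname -r` output under runsc should reflect gVisor’s emulated kernel version rather than the host kernel, which is exactly the indirection described above.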


That’s subjective to your security practice. There are more appropriate factors than blanket statements about a technology’s inherent “security” when deciding the format and shape of virtual software spaces.
in a memory safe language
Ultimately, the implementation matters more than the underlying code when it comes to containers. cgroups v2 works the same for gVisor as it does for LXC.


I’ve tried it. It performs poorly.
For context, I’ve also been using ZFS since Solaris.
I was wrong about compression on datasets vs pools, my apologies.
By “almost no impact” (for compression), I meant well under 1% penalty for zstd, and almost unmeasurable for lz4 fast, with compression efficiency being roughly the same for both lz4 and zstd. Here is some data on that.
LZ4 compression on modern (post-Haswell) CPUs is actually so fast that LZ4 can beat non-compressed writes in some workloads (see this). And that is from 2015.
Today, there is no reason to turn off compression.
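To make the knobs concrete, a quick sketch (`tank/data` is a placeholder dataset name):

```shell
zfs set compression=lz4 tank/data             # near-free on modern CPUs
zfs set compression=zstd tank/data            # better ratio for a small CPU cost
zfs get compression,compressratio tank/data   # see what you're actually getting
```

Note that changing the property only affects newly written blocks; existing data stays as-is until it is rewritten.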
I will definitely look into the NFS integrations for ZFS, I use NFS (exports and mounts) extensively, I wonder what I’ve been missing.
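For anyone else curious, the integration appears to be the `sharenfs` dataset property; a sketch with placeholder dataset and subnet (on Linux, OpenZFS passes exports(5)-style options through to the kernel NFS server for you, so there is no /etc/exports to maintain):

```shell
zfs set sharenfs="rw=@192.168.1.0/24" tank/exports
zfs get sharenfs tank/exports   # child datasets inherit the property
```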
Anyway, thanks for this.
With respect, most of this comment is wrong.
Also remember that many permissions, like NFS export settings, are done on a per-filesystem basis.
OK, well it’s not harming anything, so if you’re game to learn, by all means.
When you look at traffic on a public interface, besides learning what to filter out as just normal noise (probes, crawls, etc. from legit sources), you will also run into badly-formed TCP traffic:
Martian packets: https://en.wikipedia.org/wiki/Martian_packet
IP spoofing: https://en.wikipedia.org/wiki/IP_address_spoofing (I used to have a better resource for this, I’ll try to find it)
How RPC works: https://pentest.co.uk/labs/research/researching-remote-procedure-call-rpc-vulnerabilities/
That should help clarify a lot of what you’ll see in traffic on your segment.
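On Linux, a couple of sysctls relate directly to the martian/spoofing reading above (a sketch; it needs root, and per-interface settings may also apply on your distro):

```shell
sysctl -w net.ipv4.conf.all.rp_filter=1      # strict reverse-path filtering, drops many spoofed sources
sysctl -w net.ipv4.conf.all.log_martians=1   # log packets with impossible source addresses
```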
You may also want to briefly read about how CDNs work; you’ll see a lot of Akamai and Cloudflare traffic too.
Running Suricata on your WAN interface just generates a ton of noise and will be really confusing if you haven’t reviewed packet inspection alerts before. There’s not a lot of value in it unless you have many users “phoning home”.
Just run it on the LAN interface.
Your approach of deny-all-until-something-complains is pretty much the most solid way to get a grip on security.
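As a sketch, that deny-all posture in nftables might look like this (table/chain names and the allowed port are placeholders, not a recommendation for your network):

```shell
nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0 ; policy drop ; }'
nft add rule inet filter input ct state established,related accept
nft add rule inet filter input iif lo accept
nft add rule inet filter input tcp dport 22 accept   # then open ports one at a time as things complain
```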
I assess and recommend security practices for a living, and I would say the most important first step is understanding where your data lives and where it goes. Once you know that, the rest is relatively easy with the tools available to us.
OP is running Suricata.
Calibre-web isn’t two separate applications; it’s a Calibre-compatible database served via HTTP. There is no desktop Calibre involved.
There is integrated KOReader sync, though.


That’s still true, but performance has changed a lot since Jim Salter wrote that. There was a time when 2x mirrored vdevs (the equivalent of RAID 10) would have been preferable to raidz2, but performance of both ZFS and the disks themselves has improved enough that there wouldn’t be much difference between the two in a home lab.
Personally, I agree with you in that mirrors are preferable, mostly because I don’t really need high availability as much as I want an easier time restoring if a disk fails.


I’ll ask your mom.


Most fiber services register the SFP/SFP+ module. It is much cheaper, easier, and usually not against the terms of service to just use the ISP-provided SFP in your own routing device instead of messing with OLT settings and custom firmware on a $160 WAS.


The logo is bad. “Dogshit” is appropriate here.


Enshittification happens.
I don’t think that’s a given necessarily; I think it’s a common pattern under the VC funding -> IPO model.
But companies like Valve (Steam) and Patagonia show that not every company has to follow the same predictable enshittification arc.
Wow, there’s a lot going on in there.


Yes, that’s what I get from that as well.
I guess as long as users get some options for import/export/backup then it isn’t that bad. I’m reading over the docs again and I don’t think it’s as bad as I initially read into it.
This project would benefit from some documentation curation.
Edit: which I suppose I could offer to help with to put my money where my mouth is.


Sigh…
That stupid way of explaining the license plan aside, are we again having to explain that we don’t want our data locked into yet another db format?
I used to do that until about 2015.
Even private trackers don’t come close to the coverage of newsgroups. Plus, NZBs have the concept of releases, so you don’t have to guess at the quality.
I don’t have an issue with paying, I have an issue with paying for something I don’t want.