

The above post says it has support for Ollama, so I don’t think this is the case… but the instructions in the Readme do make it seem like it’s dependent on OpenAI.
Are you saying that NAT isn’t effectively a firewall or that a NAT firewall isn’t effectively a firewall?
Is there a way to use symlinks instead? I’d think it would be possible, even with Docker - it would just require the torrent directory to be mounted read-only in the same location in every Docker container that had symlinks to files on it.
Depending on setup this can be true with Jellyfin, too. I have a domain registered, use dynamic DNS, and have Traefik direct a subdomain to my Jellyfin server. My mobile clients are configured using that. My local clients use the local static IP.
If my internet goes down, my mobile clients can’t connect, even on the LAN.
Under notes, where you said my name, did you mean “Hedgedoc?”
local docker hub proxy
Do you mean a Docker container registry? If so, here are a couple options:
Giphy has a documented API that you could use. There have been bulk downloaders, but I didn’t see any that had recent activity. However, you still might be able to use one as a model for your own script, like https://github.com/jcpsimmons/giphy-stacks
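If you do roll your own, here’s a rough sketch of what bulk-downloading search results via the Giphy API could look like - the API key, search term, and output directory are all placeholders, and it assumes you have the `requests` package installed:

```python
# Hedged sketch: download GIFs from a Giphy API search.
# API key, query, and limits are placeholders - adjust to your needs.
import os
import requests

API_KEY = "YOUR_GIPHY_API_KEY"  # get your own key from developers.giphy.com
SEARCH_URL = "https://api.giphy.com/v1/gifs/search"

def download_gifs(query: str, limit: int = 25, out_dir: str = "gifs") -> None:
    os.makedirs(out_dir, exist_ok=True)
    resp = requests.get(
        SEARCH_URL,
        params={"api_key": API_KEY, "q": query, "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    for gif in resp.json()["data"]:
        url = gif["images"]["original"]["url"]
        path = os.path.join(out_dir, f"{gif['id']}.gif")
        with open(path, "wb") as f:
            f.write(requests.get(url, timeout=60).content)
        print(f"saved {path}")

if __name__ == "__main__":
    download_gifs("cats")
```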
There were downloaders for Gfycat - gallery-dl supported it at one point - but it’s down now. However, you might be able to find collections that other people downloaded and are now hosting. You could also use the Internet Archive - they have tools and APIs documented.
There’s a Tenor mass downloader that uses the Tenor API and an API key that you provide.
Imgur has GIFs and is supported by gallery-dl, so that’s an option.
Also, read over https://github.com/simon987/awesome-datahoarding - there may be something useful for you there.
In terms of hosting, it would depend on my user base and if I want users to be able to upload GIFs, too. If it was just my close friends, then Immich would probably be fine, but if we had people I didn’t know directly using it, I’d want a more refined solution.
There’s Gifable, which is pretty focused, but looks like it has a pretty small following. I haven’t used it myself to see how suitable it is. If you self-host it (or something else that uses S3), note that you can use MinIO or LocalStack for the S3 container rather than using AWS directly. I’m using MinIO as part of my stack now, though for a completely different app.
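If you do go the MinIO route, pointing an S3 client at it is mostly a matter of overriding the endpoint. A minimal sketch with boto3, assuming MinIO is on localhost:9000 with its default credentials and that a bucket named “gifs” already exists (all of those are assumptions, adjust to your setup):

```python
# Hedged sketch: talk to a local MinIO container with a standard S3 client.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",   # MinIO instead of AWS
    aws_access_key_id="minioadmin",          # MinIO's default credentials
    aws_secret_access_key="minioadmin",
)

s3.upload_file("cat.gif", "gifs", "cat.gif")  # local file, bucket, object key
print(s3.list_objects_v2(Bucket="gifs").get("Contents", []))
```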
MediaCMS is another option. Less focused on GIFs but more actively developed, and intended to be used for this sort of purpose.
Do you only experience the 5-10 second buffering issue on mobile? If not, then you might be able to fix the issue by tuning your NextCloud instance - upping the memory limit, disabling debug mode and dropping log level back to warn if you ever changed it, enabling memory caching, etc…
Check out https://docs.nextcloud.com/server/latest/admin_manual/installation/server_tuning.html and https://docs.nextcloud.com/server/latest/admin_manual/installation/php_configuration.html#ini-values for docs on the above.
You could’ve scrolled down to the bottom, clicked on “Links,” then clicked on the repo link.
The repo has instructions to install a Snap or build from source. If you build from source, it looks like you should download an archive from the releases page rather than just pulling from master.
Open WebUI publishes a Docker image that has Ollama bundled, which you can use, too: ghcr.io/open-webui/open-webui:cuda. More info at https://docs.openwebui.com/getting-started/#installing-open-webui-with-bundled-ollama-support
I made a typo in my original question: I was afraid of taking the services offline, not online.
Gotcha, that makes more sense.
If you try to run the reverse proxy on the same server and port that an existing service is using (e.g., port 80), then you’ll run into issues. You could also run into conflicts with the ports the services themselves use. Likewise if you use the same outbound port from your router. But IME those issues will mostly stop the new services from starting - you’d have to stop the services or restart your machine for the new service to have a chance to grab the ports while they were unused. Otherwise I can’t think of any issues.
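If you want to double-check before starting the proxy, here’s a quick sketch (Python, and the ports are just examples) that tests whether anything is already listening on the ports the proxy would want:

```python
# Hedged sketch: see whether anything is already listening on a port
# before you start a reverse proxy that wants to bind it.
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        return s.connect_ex((host, port)) == 0  # 0 means something answered

for port in (80, 443):
    print(f"port {port}: {'in use' if port_in_use(port) else 'free'}")
```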
I’m afraid that when I install a reverse proxy, it’ll take my other stuff online and cause me various headaches that I’m not really in the headspace for at the moment.
If you don’t configure your other services in the reverse proxy then you have nothing to worry about. I don’t know of any proxy that auto discovers services and routes to them by default. (Traefik does something like this with Docker services, but they need Docker labels and to be on the same Docker network as Traefik, and you’re the one configuring both of those things.)
Are you running this on your local network? If so, then unless you forward a port to your server on the port your reverse proxy is serving from, it’ll only be accessible from the local network. This means you can either keep it that way (and VPN in to access it) or test it by connecting directly to your server on that port and confirm that it’s working as expected before forwarding the port.
I don’t know that a newer drive cloner will necessarily be faster. Personally, if I’d successfully used the one I already have and wasn’t concerned about it having been damaged (mainly due to heat or moisture) then I would use it instead. If it might be damaged or had given me issues, I’d get a new one.
After replacing all of the drives, you’ll need to tell the NAS to use their full capacity. From reading an answer to this post, it looks like you’ll need to select “Change RAID Mode,” keep RAID 1 selected, keep the same disks, and then on the next screen move the slider to use the drives’ full capacities.
QMK has pretty extensive documentation that I’m pretty sure covers this. Have you read Tap-Hold Configuration Options?
upper capacity
There may be an upper limit, but on Amazon there is a 72 TB version that would have to come with at least 18 TB drives. If 18 TB is fine, 20 TB is also probably fine, but I couldn’t find any reports by people saying they’d loaded 20 TB drives into theirs without issue.
procedure
You could also clone them yourself, but you’d want to put the NAS into read only mode or take it offline first.
I think cloning drives is generally faster than rebuilding them in RAID, as well as easier on the drives, but my personal experience with RAID is very limited.
Basically, what I’d do is:
In terms of timing… I have a Sabrent offline cloning hub (about $50 on Amazon), and it copies data at 60 MB/s, meaning it’d take about 9 hours per clone. Startech makes a similar device ($96 on Amazon) that allegedly clones data at 466 MB/s (28 GB per minute), meaning each clone would take about 2.5 hours… but people report it being just as slow as the Sabrent.
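If you want to sanity-check a dock’s advertised speed against your own drives, the math is simple enough to script. A tiny Python sketch - the 2 TB capacity here is just an assumption, so plug in your actual drive size:

```python
# Rough clone-time math. The 2 TB capacity is only an assumption.
def clone_hours(capacity_tb: float, rate_mb_per_s: float) -> float:
    """Hours to copy capacity_tb terabytes at rate_mb_per_s megabytes per second."""
    return capacity_tb * 1_000_000 / rate_mb_per_s / 3600

print(f"{clone_hours(2, 60):.1f} h")   # ~9.3 h for 2 TB at 60 MB/s
print(f"{clone_hours(2, 466):.1f} h")  # ~1.2 h for 2 TB at 466 MB/s
```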
Also, if you bought two offline cloning devices, you could do steps 1-3 and 4-6 simultaneously, and do the same again with steps 7-8.
I’m not sure how long it would take RAID to rebuild a pulled drive, but my understanding is that it’s going to be fastest with RAID 1. And if you don’t want to make the NAS read-only while you clone the drives, it’s probably your only option, anyway.
Good to know! I saw that mentioned on some (apparently outdated) Comodo marketing copy as a benefit over LE
EV certs give you an extra green bar or something along those lines. If your customers care about it, then you have to get one. If they don’t - and they probably don’t - it’s a waste.
What exactly are you trusting a cert provider with and what are the security implications?
End users trust the cert provider. The cert provider has a process that they use to determine if they can trust you.
What attack vectors do you open yourself up to when trusting a certificate authority with your websites’ certificates?
You’re not really trusting them with your certificates. You don’t give them your private key or anything like that, and the certs are visible to anyone navigating to your website.
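To illustrate that the certs are public: anyone can fetch the certificate your site serves, e.g. with Python’s standard library (the hostname here is just an example):

```python
# Hedged sketch: retrieve the public certificate a site presents over TLS.
import ssl

pem = ssl.get_server_certificate(("example.com", 443))
print(pem)  # the same PEM-encoded cert any visitor's browser receives
```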
Your new vulnerabilities are basically limited to what you do for them - any changes you make to your domain’s DNS config, or anything you host, etc. - and depend on that introducing a vulnerability of its own. You also open a new phishing attack vector, where someone might contact you, posing as the certificate authority, and ask you to make a change that would introduce a vulnerability.
In what way could it benefit security and/or privacy to utilize a paid service?
For most use cases, as far as I know, it doesn’t.
LetsEncrypt doesn’t offer EV or OV certificates, which you may need for your use case. However, these are mostly relevant at the enterprise level. Maybe you have a storefront and want an EV cert?
LetsEncrypt also only offers community support, and if you set something up wrong you could be less secure.
Other CAs may offer services that enhance privacy and security, as well, like scanning your site to confirm your config is sound… but the core offering isn’t really going to be different (aside from LE having intentionally short renewal periods), and theoretically you could get those same services from a different vendor.
You can get wildcard certs with LetsEncrypt (since 2018): https://community.letsencrypt.org/t/acme-v2-production-environment-wildcards/55578
I recommend a used 3090, as that has 24 GB of VRAM and generally can be found for $800ish or less (at least when I last checked, in February). It’s much cheaper than a 4090 and while admittedly more expensive than the inexpensive 24GB Nvidia Tesla card (the P40?) it also has much better performance and CUDA support.
I have dual 3090s so my performance won’t translate directly to what a single GPU would get, but it’s pretty easy to find stats on 3090 performance.