This is the kind of bullshit I don’t have time for, when shit gets broken in userspace because someone wanted to change the location of something.
Can’t wait until hotlinkers DDoS this guy’s home connection and his ISP calls wondering why tf his bandwidth skyrocketed overnight.
So, I migrated to 5.x, and I don’t know if it was just me or a change in the WebUI or something, but Sonarr stopped pulling files in. I’d been holding out on the Sonarr upgrade because, last I looked at it, it wouldn’t auto-migrate you over.
But when I went to upgrade, it said it now auto-migrates, and it does. The old migrated rules looked kinda dirty, though, so I was panicking a little. The imported/converted stuff all worked, mind you; I just didn’t like how it looked. In the end, I ended up really liking the new Sonarr system, though I did have to ask an LLM how to format some of the new regex.
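For the curious, it was something along these lines. This is a hypothetical pattern, not my actual custom format:

```python
import re

# Hypothetical Sonarr-style custom-format pattern: flag x265/HEVC releases.
pattern = re.compile(r"(?i)\b(x265|HEVC|H\.?265)\b")

print(bool(pattern.search("Show.S01E01.1080p.WEB-DL.x265-GROUP")))  # True
print(bool(pattern.search("Show.S01E01.1080p.WEB-DL.x264-GROUP")))  # False
```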
You should REALLY update…
Are you hard-linking it to somewhere else on the drive via any kind of automation?
For example, Sonarr can hard-link files into the directories they belong in so that qBittorrent can continue seeding. If you then delete/remove the torrent and its files, the hard link would still be there.
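A minimal sketch of why that works, with hypothetical paths. Note that hard links only work within a single filesystem:

```python
import os

# Hypothetical paths on the same filesystem (hard links can't cross filesystems).
src = "/data/torrents/Show.S01E01.mkv"
dst = "/data/tv/Show/Season 01/Show.S01E01.mkv"

os.makedirs(os.path.dirname(dst), exist_ok=True)
os.link(src, dst)  # a second directory entry pointing at the same inode

print(os.stat(src).st_nlink)  # 2 -- both names reference one copy of the data

os.remove(src)                # "deleting the torrent" removes one name...
print(os.path.exists(dst))    # True -- the library copy is untouched
```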
Why would you even bother trying to run this all through a VM when you can just run it directly? If you’re to the point of using VMs, you don’t need this tutorial anyways.
Are you seriously telling me you’re jumping through all the hoops to spin up a VM on Linux, and then doing all the configuration for GPU passthrough, because you can’t just figure out how to run it locally?
If your “FIRST STEP” is to choose an OS: Fuck that.
You should never have to change your OS just to use this crap. It’s all written in Python. It should work on every OS available. Your first step is installing the prerequisites.
If you’re using something like Continue for local coding tasks, CodeQwen is awesome. You’ll generally want a context window of around 120k, because for coding you want all of the code in context; otherwise the LLM starts spitting out repetitious stuff, or it can’t ingest your full context and rewrites things that are already there.
That’s true of a lot of spaces that have been historically discriminated against. They become so hyper-aware of any criticism that they immediately assume anyone with an experience different from their own is “the enemy”.
You always just bind the torrent client to the VPN adapter so this doesn’t happen; most modern clients support this (qBittorrent certainly does). That way, if the VPN goes down, your torrent client isn’t just downloading stuff nakedly.
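Conceptually, “binding” just means every socket gets opened from the VPN interface’s address. A rough sketch with a made-up tunnel IP (the client does this for you in its settings):

```python
import socket

# Hypothetical: the local address assigned to the VPN tunnel (e.g. wg0/tun0).
VPN_LOCAL_IP = "10.8.0.2"

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((VPN_LOCAL_IP, 0))  # all traffic from this socket leaves via the VPN

# If the tunnel drops, that address disappears and the connection fails
# outright instead of silently falling back to your bare WAN connection.
s.connect(("example.com", 80))
```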
So maybe like some sort of list of computer instructions – which tells the computer to generate a map, and then tabulates the data and presents it to the user like…
If only we had a term for this…
Like algoism, or arithmos… something to do with calculation or something…
Is the router flashable with OpenWRT? :D
jkjk – most modern routers can be turned into just flat access points, ganged with another router.
The router is going to give you more control.
A typical refrigerator is around 40 dBA, and 25 dBA is ABSURDLY quiet. You’re not gonna hit that without a completely fanless system. If 25 dBA is his hard cap, he can’t even be breathing in the same area as the computer, because breathing is something like 28 dBA…
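For scale, the decibel math (just the standard formula, nothing specific to this build):

```python
# dBA is logarithmic: every 10 dB step is a 10x difference in sound power.
ratio = 10 ** ((40 - 25) / 10)  # fridge (40 dBA) vs. the 25 dBA cap
print(f"~{ratio:.0f}x the sound power")  # ~32x
```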
I mean, we just had https://nvd.nist.gov/vuln/detail/CVE-2024-6387 (the “regreSSHion” OpenSSH RCE), so my guess is you’re updating quite often to be so confident in your unattended upgrades.
Those were statements. Statements of fact.
Once the models are already trained, it takes almost no power to use them.
Yes, TRAINING the models uses an immense amount of power, but running the trained model locally consumes almost nothing. I can run the Llama 7B model on a 15 W Raspberry Pi, for example, while just leaving my PC on uses 400 W. This is all local: nothing entering or leaving the Pi, no communication with an external server, nothing being done on anybody else’s server or any AWS instance.
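The arithmetic, using the wattages above (one hour of runtime is just an assumption for illustration):

```python
# Energy = power x time, using the wattages from above.
pi_watts, pc_watts = 15, 400
hours = 1.0  # assumed: one hour of local inference on the Pi

print(f"Pi: {pi_watts * hours / 1000:.3f} kWh")  # 0.015 kWh
print(f"PC: {pc_watts * hours / 1000:.3f} kWh")  # 0.400 kWh -- ~27x more
```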
imho - never expose that shit anyways, and VPN into your local network first. Only thing I ever expose to the internet is 80/443.
At the very least, if you’re going to expose an SSH session to the internet, set up some sort of port knocking. It’s security through obscurity, sure, but it will keep all but the most ardent intruders out.
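For illustration, the client side of a knock is just touching a secret sequence of closed ports before you SSH in. Everything here (host, ports) is made up, and something like knockd would watch for the sequence on the server side:

```python
import socket
import time

HOST = "my.server.example"           # hypothetical host
KNOCK_SEQUENCE = [7000, 8000, 9000]  # hypothetical secret sequence

# Fire a quick connection attempt at each port in order; the server logs
# the sequence and only then opens port 22 for our source IP.
for port in KNOCK_SEQUENCE:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(0.5)
    try:
        s.connect((HOST, port))  # expected to fail -- the port is closed
    except (socket.timeout, OSError):
        pass
    finally:
        s.close()
    time.sleep(0.2)

# Now a normal `ssh user@my.server.example` would get through.
```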
Once the model is trained, the electricity that it uses is trivial. LLMs can run on a local GPU. So you’re completely wrong.
Weird that it applies to some people and not others… I know plenty of people whose opinions were rejected by society, and I’m told every day that I must accept their opinions…
I’ve heard this same argument used against LGBT individuals, Palestinians, Jews, etc…
So what makes this guy less deserving? Because his opinion is contrary to yours? The things being deleted are not Nazi propaganda. They aren’t even hateful. They’re just discussions that people disagree with, and because they’re disagreed with, they get removed. Mods should not be removing things over editorial opinion. If people don’t like it, that’s what the downvote button is for.
Absolutely. But I think it might be more advanced than that. They might have some sort of analytics that measures how long people stay on the page, etc., to inform their purchasing decisions.
DDNS with Namecheap is as simple as hitting a URL with a GET request from the IP you want it to point to. No limitations. No special requirements.
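Roughly like this. The endpoint and parameter names are from memory, so treat them as assumptions and verify against Namecheap’s dynamic DNS docs:

```python
import requests

# Hypothetical values; the per-domain DDNS password comes from the dashboard.
DOMAIN = "example.com"
HOST = "@"                            # the record to update (@ = bare domain)
DDNS_PASSWORD = "your-ddns-password"

# Hitting the URL from the machine itself lets Namecheap use the source IP;
# you can also pass an explicit &ip= parameter instead.
resp = requests.get(
    "https://dynamicdns.park-your-domain.com/update",
    params={"host": HOST, "domain": DOMAIN, "password": DDNS_PASSWORD},
    timeout=10,
)
print(resp.text)  # XML response indicating success or an error
```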
I’d rather they become a not-for-profit than a non-profit.