Have you used Facebook in the last 5 years?
The UX is godawful. More than half my feed is just random crap suggestions and ads.
Haven’t heard of Hiren’s BootCD in like 15 years. Good to see it’s still around!
It’s worth mentioning that with a large generational gap, the newer low-end CPU will often outperform the older high-end one. An i3-1115G4 (11th gen) should outperform an i7-4790 (4th gen), at least in single-core performance. And it’ll do it while using a lot less power.
I think it helps to think of browsing as a basic form of searching. Everything you can do in a browsing context, you can by definition do in a searching context…if the client doesn’t suck. The information needed to browse is embedded in the tags.
So this strikes me as entirely dependent on your client software. A good client should let you browse by tags. You could add Dewey numbers as tags to start with, so you can browse that way if you want, then add any other tags that might be useful (like genres, for example) on top of that.
The only difference with tags in this context is that books will appear in multiple places.
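To make that concrete, here’s a toy sketch in plain Python (nothing to do with any particular e-book client; the titles, Dewey numbers, and tags are just examples I made up):

```python
# Toy sketch of "browsing is just searching with tags".
books = {
    "Dune":                    {"813.54", "fiction", "sci-fi"},
    "Cosmos":                  {"520", "non-fiction", "astronomy"},
    "A Brief History of Time": {"523.1", "non-fiction", "physics"},
}

def browse(selected_tags):
    """Return every book carrying all of the selected tags."""
    return [title for title, tags in books.items() if selected_tags <= tags]

# "Browsing" is just narrowing the tag set step by step:
print(browse({"non-fiction"}))         # -> ['Cosmos', 'A Brief History of Time']
print(browse({"non-fiction", "520"}))  # -> ['Cosmos']  (Dewey number used as a tag)
```

Drilling down by adding tags gives you the same experience as walking a shelf hierarchy, except a book can sit in as many “places” as it has tags.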
OP must have it set to the lowest compression level. All levels are lossless, but higher levels produce smaller files at the expense of longer encoding time. Should generally come out at half the size or less.
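To illustrate the general principle (just an analogy using Python’s zlib, since the post doesn’t say which encoder is actually in use): every level round-trips to identical data, higher levels just trade encoding time for size.

```python
import zlib

# Some semi-compressible sample data; in practice you'd use a real file.
data = b"".join(bytes([i % 7, i % 13, i % 251]) for i in range(100_000))

for level in (1, 6, 9):  # 1 = fastest/largest output, 9 = slowest/smallest output
    compressed = zlib.compress(data, level)
    assert zlib.decompress(compressed) == data  # every level round-trips losslessly
    print(f"level {level}: {len(compressed):,} bytes")
```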
Gotcha. Typically lowercase b=bit and uppercase B=Byte, but it’s hard to tell what people mean sometimes, especially in casual posts.
Come to think of it, I messed up the capitalization too. Should be a capital M for mega.
1mbps is awfully low for 1080. Or did you mean megabyte rather than megabit?
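Quick back-of-the-envelope, assuming lowercase b really means bits:

```python
# Rough sanity check on 1 Mbps for 1080p video (Mbps = megabits per second).
bitrate_mbps = 1
runtime_s = 2 * 60 * 60                       # a two-hour movie

size_bytes = bitrate_mbps * 1_000_000 / 8 * runtime_s
print(f"{size_bytes / 1e9:.1f} GB")           # ~0.9 GB -- far below a typical 1080p encode

# For comparison, mainstream 1080p streams tend to run ~5-8 Mbps,
# and Blu-ray sources considerably higher.
print(f"{8 * 1_000_000 / 8 * runtime_s / 1e9:.1f} GB at 8 Mbps")  # ~7.2 GB
```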
Even if they were trustworthy, nothing lasts forever.
Does anyone seriously think Google Play Movies or whatever they call it is going to be around in 50 years? Audible? Spotify?
Unlikely.
I grew up with access to books that were printed before my parents were even born. I doubt your grandkids will be able to say the same. Not if you buy into DRM-infected ecosystems and vendor lock-in, anyway.
The only consolation is that pirates are always one step ahead. But I wouldn’t want to count on that remaining true in 50 years either.
How does it work exactly? From a quick look at the docs, it sounds like everything through the bridge would appear as coming from @web.brid.gy. Is that right? If so, that kind of mucks up the standard behavior of Lemmy. Lemmy allows both users and admins to block entire instances, so aggregating instances into one “mega-instance” effectively breaks that functionality. That’s not good from a UX perspective.
I tried searching for some bridged instances but didn’t have any luck. I guess I’m doing it wrong. Does anyone have a real example of something that works?
I would guess that’s not a hard limit. Maybe they decided to undersell it because many 4 TB+ NVMe drives are physically larger and/or require heat sinks, so they might not fit. I don’t see any details on their website, though.
Given two drives with the same size, same heat output, and same interface, it shouldn’t make a difference.
It’s pretty common to see fake limits like that on spec sheets. I can definitely put more RAM in my motherboard than is officially supported, because higher-capacity DIMMs in the same form factor have come out since the mobo was released.
It’s insane how many things they push as Snaps when they are entirely incompatible with the Snap model.
I think everyone first learns what Snaps are by googling “why doesn’t ____ work on Ubuntu?” For me, it was Filebot. Spent an hour or two trying to figure out how the hell to get it to actually, you know, access my files. (This was a few years ago, so maybe things are better now. Not sure. I don’t live that Snap life anymore, and I’m not going back.)
Can you explain what you mean by “visually lossless”? Is this a purely subjective classification, or is there a specific definition or benchmark you used?
Yes, this is still necessary.
It wouldn’t make sense to put the onus to block every bad instance onto every single user.
Consider the extreme use case, which is obviously CSAM. I rely on my instance admins to handle that for me. If I had to painstakingly block every instance that has poor moderation (or worse), I’d simply stop using Lemmy. The “all” feed would be utterly unusable.
Also, admins need control over what’s in their own database, potentially for legal reasons.
On Mastodon it’s pretty easy. Download the official app and go through the prompts. They should probably have a little note saying “just go with the defaults if you’re not sure”, but this shouldn’t be a roadblock for any normal person. The fact that Mastodon has a standard migration method makes this a low-impact decision.
Lemmy is definitely harder. “Jerboa” doesn’t sound like an official app, and I don’t think you can even create an account in Jerboa. So the first step is finding an instance on the web with no guidance. That’s bad.
I still haven’t joined Matrix because it’s too hard. People say I shouldn’t use matrix.org for various reasons (like bans without warning) but I can’t find an alternative that seems sensible. All the guides I found are basically “you should really host your own, but if you’re too much of a noob, here are some Polish lolicon-themed servers you can join”. If it were possible to sign up without feeling like I’m doing something wrong, I would have many years ago.
The default “Active” sort option does that. Try “Hot” instead.
If you are memory-bound (and since OP’s talking about 192GB, it’s pretty safe to assume they are), then it’s hard to make a direct comparison here.
You’d need 8 high-end consumer GPUs to get 192GB. Not only is that insanely expensive to buy and run, but you can’t even power it from a standard residential electrical circuit or fit it on any consumer-level motherboard. Even 4 GPUs (which would be great for 70B models) would cost more than a Mac.
The speed advantage you get from discrete GPUs rapidly disappears as your memory requirements exceed VRAM capacity. Partial offloading to GPU is better than nothing, but if we’re talking about standard PC hardware, it’s not going to be as fast as Apple Silicon for anything that requires a lot of memory.
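Rough numbers for the weights alone (a crude rule of thumb that ignores KV cache, activations, and runtime overhead):

```python
# Back-of-the-envelope memory footprint for model weights alone.
# Crude rule of thumb: params x bytes-per-param; ignores KV cache and overhead.

def weights_gb(params_billion, bytes_per_param):
    return params_billion * 1e9 * bytes_per_param / 1e9

for name, bpp in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"70B @ {name}: ~{weights_gb(70, bpp):.0f} GB")

# 70B @ fp16 : ~140 GB -> far beyond any consumer GPU setup
# 70B @ 8-bit: ~70 GB  -> several 24 GB cards, or unified memory
# 70B @ 4-bit: ~35 GB  -> a couple of 24 GB cards, or a Mac with enough unified memory
```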
This might change in the near future as AMD and Intel catch up to Apple Silicon in terms of memory bandwidth and integrated NPU performance. Then you can sidestep the Apple tax, and perhaps you will be able to pair a discrete GPU and get a meaningful performance boost even with larger models.