Once a year or so, I re-learn how to interpret SMART values, which I find frustratingly obtuse. Then I promptly forget.
So one’s almost 6 y/o and the other is about 5½?
🅸 🅰🅼 🆃🅷🅴 🅻🅰🆆.
𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍 𝖋𝖊𝖆𝖙𝖍𝖊𝖗𝖘𝖙𝖔𝖓𝖊𝖍𝖆𝖚𝖌𝖍
I think I see what you’re saying.
B2 has multiple data centers around the world - at least three in the US and one in the EU, that I know of. If you want your data replicated, you have to create buckets in multiple locations and connect them; B2 handles the replication itself from there.
If you’re saying that they don’t automatically store multiple copies of your data in multiple locations for you, for free, you’re right. But they do have multiple data centers located around the world, and you can create multiple buckets and configure them for automatic replication so you have redundancy. You have to pay for the storage at each replicated location, though. If you want a bucket in Sacramento, it’ll cost you those pennies. If you want it replicated to Reston, you’ll pay double the pennies. If you want it also replicated to Amsterdam, triple the pennies.
I don’t think it’s fair to say that they’re a single location that could have a natural disaster and therefore lose your storage. It’s only like that if you set it up that way, and it’s pretty trivial to set up global replication - it just costs more.
I quadruple vote for this combination.
You could trust B2 more; maybe dig into their structure. They’re solid, and not only that, they provide an awesome service with their yearly HD failure rate evaluations, in which they describe the structure of their data centers.
In terms of NPS, I’m on their side. Unless something comes out about shady business practices, I’m brand loyal to B2. Been with them for years, and love the service, pricing, and company.
I mean, someday I’ll get a new TV, and I’d just been assuming I’d leave it disconnected… but I hadn’t thought about the nagware, and that would definitely be an issue.
Hmm. Just curious: did you try creating a tar pit subnet for it, which it could connect to but not escape from?
You duplicated Bluesky’s entry for Nostr. Could you address nostr’s weaknesses? Keeping in mind that as long as you don’t federate with the main Nostr nodes, you won’t be swamped with the CryptoBros - its biggest downside.
It’s been years since I’ve shopped for a TV, but… can’t you just not connect it to the internet? I have a little microPC running Linux connected to our TV; it’s smarter than any other TV I’ve seen, but the TV itself is stupid.
Why can’t someone just get a smart TV and just never let it get online?
I mean, sure, if I had my 'druthers, I wouldn’t be paying for features I don’t use, but if it’s literally impossible to buy dumb TVs, what’s the issue?
There is a project to standardize (and document) the API, called OpenSubsonic. It includes extensions, but the main value is that it tries to consistently document expected behavior. It’s an uphill battle, because the Subsonic API is a schizophrenic mess, and no two servers interpret API responses the same way, but it’s still a decent project. I contribute to a client, and we try to adhere to the OpenSubsonic documentation.
My only criticism about the API is that it’s focused on streaming, which means we can’t consolidate server control (e.g. mpd) and streaming, which would make writing versatile clients easier, but still.
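To give a feel for the API being documented: Subsonic-family servers use a salted-token auth scheme (the token is an MD5 of the password plus a per-request salt, so the raw password never goes over the wire). Here’s a minimal sketch of building a `ping` request; the server URL and credentials are placeholders, and field details are per the published Subsonic/OpenSubsonic API docs:

```python
import hashlib
import secrets
from urllib.parse import urlencode

def subsonic_params(user: str, password: str, client: str = "demo-client") -> dict:
    """Build the query parameters the Subsonic API expects:
    t = md5(password + salt), sent along with the salt itself."""
    salt = secrets.token_hex(8)
    token = hashlib.md5((password + salt).encode()).hexdigest()
    return {
        "u": user,       # username
        "t": token,      # salted password token
        "s": salt,       # the salt used for the token
        "v": "1.16.1",   # protocol version the client speaks
        "c": client,     # client name, required by the spec
        "f": "json",     # response format
    }

params = subsonic_params("alice", "sesame")
url = "https://music.example.com/rest/ping?" + urlencode(params)
```

Every endpoint takes this same parameter bundle, which is part of why a consistent spec like OpenSubsonic matters: the auth is uniform, but the *response* shapes are where servers historically diverge.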
Tempo is a fantastic client, BTW, and has largely replaced my local offline client use.
And that, kids, is a great use of RAID: as a layer under some other form of data redundancy.
Great story!
RAID 1 is mirroring. If you accidentally delete a file, or it becomes corrupt (for reasons other than drive failure), RAID 1 will faithfully replicate that delete/corruption to both drives. RAID 1 only protects you from drive failure.
Implement backups before RAID. If you have an extra drive, use it for backups first.
There is only one case when it’s smart to use RAID on a machine with no backups, and that’s RAID 0 on a read-only server where the data is being replicated in from somewhere else. All other RAID levels only protect against drive failure, and not against the far more common causes of data loss: user- or application-caused data corruption.
Yeah, I use systemd for the self-host stuff, but you should be able to use docker-compose files with podman-compose with no, or only minor, changes. Theoretically. If you’re comfortable with compose, you may have more luck. I didn’t have a lot of experience with docker-compose, and so when there’s hiccups I tend to just give up and do it manually, because it works just fine that way, too, and it’s easier (for me).
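For the curious, the “systemd instead of compose” route can be pretty small. A sketch using podman’s Quadlet units (podman 4.4+; the image, port, and paths here are made-up placeholders):

```ini
# ~/.config/containers/systemd/myapp.container  (rootless, per-user unit)
[Unit]
Description=Example rootless container

[Container]
Image=docker.io/library/nginx:alpine
PublishPort=8080:80
# %h expands to the user's home directory
Volume=%h/myapp/data:/usr/share/nginx/html:Z

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, Quadlet generates a `myapp.service` you start and enable like any other unit - no compose file and no daemon involved.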
I started with rootless podman when I set up All My Things, and I have never had an issue with either maintaining or running it. Most Docker instructions are transposable, except that podman doesn’t assume everything lives at Docker Hub, so you always have to specify the host. I’ve run into a couple of edge cases where arguments are not 1:1 and I’ve had to dig to figure out what the argument is on podman. I don’t know if I’m actually more secure, but I feel more secure, and I really like not having the docker service running as root in the background. All in all, I think my experience with rootless podman has been better than my experience with docker, but at this point, I’ve had far more experience with podman.
Podman-compose gives me indigestion, but docker-compose didn’t exist or wasn’t yet common back when I used docker; and by the time I was setting up a homelab, I’d already settled on podman. So I just don’t use it most of the time, and wire things up by hand when necessary. Again, I don’t know whether that’s just me, or if podman-compose is more flaky than docker-compose. Podman-compose is certainly much younger and less battle-tested. So is podman but, as I said, I’ve been happy with it.
I really like running containers as separate users without that daemon - I can’t even remember what about the daemon was causing me grief; I think it may have been the fact that it was always running and consuming resources, even when I wasn’t running a container, which isn’t a consideration for a homelab. However, I’d rather deeply know one tool than kind of know two that do the same thing, and since I run containers in several different situations, using podman everywhere allows me to exploit the intimacy I wouldn’t have if I were using docker in some places and podman in others.
2¢
Location services in Android are in-phone, and they’re definitely accurate and reporting to Google. I only clarified that your cell provider probably can’t locate you using triangulation via your cell signal. Turn data off, and you’re fine; otherwise, Google is tracking you - and from what I’ve read, even if you have location services turned off.
They can’t, tho. There are two reasons for this.
Geolocating with cell towers requires trilateration, and needs special hardware on the cell towers. Companies used to install this hardware for emergency services, but stopped doing so as soon as they legally could as it’s very expensive. Cell towers can’t do triangulation by themselves as it requires even more expensive hardware to measure angles; trilateration doesn’t work without special equipment because wave propagation delays between the cellular antenna and the computers recording the signal are big enough to utterly throw off any estimate.
An additional factor making trilateration harder (or even triangulation, in rural cases where they did sometimes install triangulation antenna arrays on the towers) is that, since the UMTS standard, cell chips work really hard to minimize their radio signal strength. They find the closest antenna and then reduce their power until they can just barely talk to the tower; and except in certain cases they only talk to one tower at a time. This means that, at any given point, only one tower is responsible for handling traffic for the phone, and for triangulation you need three. In addition to saving battery power, it saves the cell companies money, because of traffic congestion: a single tower can only handle so much traffic, and they have to put in more antennas and computers if the mobile density gets too high.
The reason phones can use cellular signal to improve accuracy is because each phone can do its own triangulation, although it’s still not great and can be impossible because of power attenuation (being able to see only one tower - or maybe two - at a time); this is why Google and Apple use WiFi signals to improve accuracy, and why in-phone triangulation isn’t good enough: in any sufficiently dense urban or suburban environment, the combined information of all the WiFi routers the phone can see, and the cell towers it can hear, can be enough to give a good, accurate position without having to turn on the GPS chip, obtain a satellite fix (which may be impossible indoors) and suck down power. But this is all done inside and from the phone - this isn’t something cell carriers can do themselves most of the time. Your phone has to send its location out somewhere.
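A toy sketch of why three towers is the magic number: with three known tower positions and three measured distances, the position drops out of a small linear system. (Pure geometry for illustration - real systems have to correct for the propagation delays and power attenuation described above, and a phone usually can’t even hear three towers.)

```python
def trilaterate(towers, dists):
    """Solve for (x, y) given three tower positions and distances.
    Subtracting the circle equations pairwise cancels the x^2 and
    y^2 terms, leaving a 2x2 linear system solved by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = towers
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    b2 = d1**2 - d3**2 - x1**2 + x3**2 - y1**2 + y3**2
    det = a11 * a22 - a12 * a21  # zero if the towers are collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Phone at (3, 4); distances measured to towers at (0,0), (10,0), (0,10)
pos = trilaterate([(0, 0), (10, 0), (0, 10)], [5.0, 65**0.5, 45**0.5])
```

With only one tower you just get a circle (or really an annulus, given the error bars), which is why a single serving tower tells the carrier almost nothing about where you are.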
TL;DR: Cell carriers usually can’t locate you with any real accuracy, without the help of your phone actively reporting its calculated location. This is largely because it’s very expensive for carriers to install the necessary hardware to get any accuracy of more than hundreds of meters; they are loath to spend that money, and legislation requiring them to do so no longer exists, or is no longer enforced.
Source: me. I worked for several years in a company that made all of the expensive equipment - hardware and software - and sold it to The Big Three carriers in the US. We also paid lobbyists to ensure that there were laws requiring cell providers to be able to locate phones for emergency services. We sent a bunch of our people and equipment to NYC on 9/11 and helped locate phones. I have no doubt law enforcement also used the capability, but that was between the cops and the cell providers. I know companies stopped doing this because we owned all of the patents on the technology and ruthlessly and successfully prosecuted the only one or two competitors in the market, and yet we still were going out of business at the end as, one by one, cell companies found ways to argue out of buying, installing, and maintaining all of this equipment. In the end, the competitors we couldn’t beat were Google and Apple, and the cell phones themselves.
For my CLI homies, there’s syncedlyrics.
Be advised: several Subsonic servers (including gonic and Navidrome) do not support lyric files unless they’re embedded, and syncedlyrics will only put the lyrics in .lrc files. So getting lyrics in clients can be a two-step process: download the .lrc’s, then run a script to embed them in the song files. I’ve seen a script to do the latter, but I haven’t tried it. I’ll send a patch to gonic to read .lrc files, during the Christmas holiday most likely.
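The pairing half of that two-step process is simple enough to sketch with the stdlib: walk the library and match each audio file with a sidecar .lrc of the same name. (The function name and extension list here are my own; the actual embedding step would use a tagging library like mutagen and depends on the audio format, so it’s left out.)

```python
from pathlib import Path

AUDIO_EXTS = {".mp3", ".flac", ".ogg", ".opus", ".m4a"}

def find_lyric_pairs(music_dir):
    """Yield (audio_file, lrc_file) pairs where a sidecar .lrc
    with the same stem sits next to the audio file."""
    for audio in sorted(Path(music_dir).rglob("*")):
        if audio.suffix.lower() in AUDIO_EXTS:
            lrc = audio.with_suffix(".lrc")
            if lrc.exists():
                yield audio, lrc
```

From there, each pair gets fed to whatever embedding script you trust, and the server only ever sees tagged files.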
Do. The ErgoDox (also from ZSA) comes pretty close, and it’s programmable with their web app. All you need to do is reprogram it to swap the “Y” key, and pop off the key caps and swap those.
Or, do you mean keys in the same order, but only the “Y” key is moved to the other half? Like, next to the “T”? If so, you’re in luck, in a way, because the ErgoDox(en) come with an extra column of keys on the inside; you could program the big key next to the “T” to be “Y”. Then do whatever you want with the spare “Y” key. I think ZSA sends you a couple of extra key caps with the keyboard, so if you really wanted to, you could swap the “Y” out with a blank.
You can choose your switches when you order, IIRC. Mine was no buckling spring, but it clicked just fine.
Seconded. OP, if you can write Markdown, Hugo will turn it into a website.
I think Android updates intentionally made the Pixel C slower. The slowdown was noticeable, right up to the point they stopped supporting it. I’d downgrade to an earlier version, but support in Lineage is so poor that I’m barely able to run the version that’s on there now.
Such a shame, because it’s still an amazingly beautiful device.
I’m 100% with you. I want a Light Phone with a changeable battery and the ability to run 4 non-standard phone apps that I need to have mobile: OSMAnd, Home Assistant, Gadget Bridge, and Jami. Assuming it has a phone, calculator, calendar, notes, and address book - the bare-bones phone functions - everything else I use on my phone is literally something I can do probably more easily on my laptop, and is nothing I need to be able to do while out and about. If it did that, I would probably never upgrade; my upgrade cycle is on the order of every 4 years or so as is, but if you took off all of the other crap, I’d use my phone less and upgrade less often.
The main issue with phones like the Light Phone is that there are those apps that need to be mobile, and they often aren’t available there.
Great clarifications!