• 0 Posts
  • 8 Comments
Joined 2 years ago
Cake day: June 15th, 2023


  • dragontamer@lemmy.world to Fediverse@lemmy.world · Why is Mastodon struggling to survive?
    7 upvotes, 4 downvotes · edited · 3 months ago

    My post above is 376 characters, which would have required three tweets under the original 140-character limit.

    Mastodon, for better or worse, has captured a bunch of people who are hooked on the original super-short posting style, which I feel is a form of Newspeak / 1984-style dumbing down of language and discussion that strips out nuance. Yes, Mastodon has removed the limit and we have more room to discuss today, but that doesn’t change the years of training (erm… untraining?) it will take to de-program people from this toxic style.

    Especially when Mastodon is trying to cater to people who are used to tweets.

    Your post could fit on Mastodon

    EDIT: and second, Mastodon doesn’t have the toxic-FOMO effect that hooks people into Twitter (or Threads, or Bluesky).

    People don’t post because short sentences are good; they post and doom-scroll because they don’t want to feel left out of something. Mastodon is healthier for you, but also less intoxicating / less pushy. It’s somewhat doomed to failure, as the very point of these short posts / short-engagement stuff is basically crowd manipulation, FOMO, and algorithmic manipulation.

    Without that kind of manipulation, we won’t get the same levels of engagement on Mastodon (or Lemmy, for that matter).


  • dragontamer@lemmy.world to Fediverse@lemmy.world · Why is Mastodon struggling to survive?
    120 upvotes, 6 downvotes · edited · 3 months ago

    Because Threads and Bluesky form effective competition with Twitter.

    Also, short-form content with just a few sentences per post sucks. It’s become obvious that Twitter was mostly algorithm hype and FOMO.

    Mastodon tries to be healthier but I’m not convinced that microblogs in general are that useful, especially to a techie audience who knows RSS and other publishing formats.


  • That’s not what storage engineers mean when they say “bitrot”.

    “Bitrot”, in the scope of ZFS and Btrfs, means the situation where a hard drive’s “0” gets randomly flipped to a “1” (or vice versa) while in storage. It is a well-known problem and can happen within months. Especially as a 20-TB drive these days is a collection of 160 trillion bits, there’s a high chance that at least some of those bits malfunction over a period of ~double-digit months.

    Each problem has a solution. In this case, bitrot is “solved” by the above procedure because:

    1. Bitrot usually doesn’t happen within single-digit months, so regular ~6-month scrubs nearly guarantee that any bitrot problems you find will be limited in scope: just a few bits at most.

    2. Filesystems like ZFS or Btrfs are designed to handle many, many bits of bitrot safely.

    3. Scrubbing is a process where you read, and if necessary restore, any files where bitrot has been detected.

    Of course, if hard drives are of noticeably worse quality than expected (ex: you see a large number of failures in a short time frame), or if you’re not using the right filesystem, or if you go too long between checks (ex: taking 25 months to scrub for bitrot instead of just 6), then you might lose data. But we can only plan for the “expected” kinds of bitrot: the kind that happens within 25 months, or 50 months, or so.

    If you’ve gotten screwed by a hard drive (or SSD) that bitrots away in like 5 days or something awful (maybe someone dropped the hard drive and the head scratched a ton of the data away), then there’s nothing you can really do about that.
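
    To make the detect-and-repair step concrete, here is a minimal sketch of checking a pool after a scrub. It assumes a ZFS pool named “tank” (the name is illustrative) and the stock zpool CLI; it illustrates the idea rather than any particular NAS product’s tooling.

        # Minimal sketch: report whether the pool has any unrepaired errors
        # after its last scrub. Assumes a pool named "tank" (illustrative)
        # and the zpool CLI on PATH.
        import subprocess

        POOL = "tank"  # hypothetical pool name

        def pool_health(pool: str) -> str:
            # "zpool status -x <pool>" prints a short "... is healthy" message
            # when there are no known data errors.
            result = subprocess.run(
                ["zpool", "status", "-x", pool],
                capture_output=True, text=True, check=True,
            )
            return result.stdout.strip()

        if __name__ == "__main__":
            report = pool_health(POOL)
            if "healthy" in report:
                print(f"{POOL}: no unrepaired errors after the last scrub")
            else:
                # Verbose status lists the devices/files with checksum errors.
                details = subprocess.run(
                    ["zpool", "status", "-v", POOL],
                    capture_output=True, text=True,
                )
                print(details.stdout)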


  • If you have a NAS, then just put iSCSI disks on the NAS, and network-share those iSCSI fake-disks to your mini-PCs.

    iSCSI is “pretend to be a hard drive over the network”. The iSCSI backing store can sit “on top of” ZFS or Btrfs, meaning your scrubs / scans will fix any issues in it as well. So your mini-PC can have a small C: drive, but be configured so that most of its storage lives on a D: iSCSI / network drive.

    iSCSI is very low-level. Windows literally thinks it’s dealing with a (slow) hard drive over the network. As such, it works even in complex situations like Steam installations, albeit at network speeds (every read has to go to the NAS first) rather than direct-attached hard drive (or SSD) speeds.


    Bitrot is a solved problem. It is solved by using bitrot-resilient filesystems with regular scans / scrubs. You build everything on top of solved problems, so that you never have to worry about the problem ever again.
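
    For the curious, here is a rough sketch of what “iSCSI disks on the NAS” can look like on a Linux-based NAS using the LIO target stack and its targetcli tool; the dataset path, size, and IQNs below are made up for illustration, and appliance NAS software (TrueNAS, Synology, etc.) generally exposes the same steps through its web UI.

        # Rough sketch: export a file-backed iSCSI LUN that lives on a ZFS
        # dataset, so the same scrubs that protect the pool also protect the
        # mini-PC's network drive. Assumes a Linux NAS with LIO/targetcli,
        # run as root; paths, IQNs, and the 500G size are illustrative.
        import subprocess

        BACKING_FILE  = "/tank/iscsi/minipc1.img"             # sits on the ZFS pool
        TARGET_IQN    = "iqn.2024-01.example.nas:minipc1"     # made-up target name
        INITIATOR_IQN = "iqn.1991-05.com.microsoft:minipc1"   # the Windows box's IQN

        commands = [
            # 1. File-backed block device stored on the ZFS dataset.
            ["targetcli", "/backstores/fileio", "create",
             "name=minipc1", f"file_or_dev={BACKING_FILE}", "size=500G"],
            # 2. The iSCSI target the mini-PC will log in to.
            ["targetcli", "/iscsi", "create", TARGET_IQN],
            # 3. Expose the backing file as a LUN on that target.
            ["targetcli", f"/iscsi/{TARGET_IQN}/tpg1/luns", "create",
             "/backstores/fileio/minipc1"],
            # 4. Only allow the mini-PC's initiator to connect.
            ["targetcli", f"/iscsi/{TARGET_IQN}/tpg1/acls", "create", INITIATOR_IQN],
            # 5. Persist the configuration across reboots.
            ["targetcli", "saveconfig"],
        ]

        for cmd in commands:
            subprocess.run(cmd, check=True)

    On the Windows side, the built-in iSCSI Initiator logs in to the target, and the new disk gets initialized and formatted as D: like any local drive.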



  • Wait, what’s wrong with issuing a “ZFS scan” every 3 to 6 months or so? If it detects bitrot, it immediately fixes it. As long as the bitrot wasn’t too much, most of your data should be fixed.

    EDIT: I’m a dumb-dumb. The term was “ZFS scrub”, not scan.

    If you’re playing with multiple computers, “choosing” one to be a NAS and being extremely careful with the data it’s storing makes sense. Regularly scanning all files and attempting repairs (which is just a few clicks with most NAS software) is incredibly easy, and could probably be automated.
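
    And for the “could be automated” part, here is a minimal sketch of what such a scheduled job might look like, assuming the zpool CLI is available and the script runs with enough privileges; drop it into cron or a systemd timer every 3 to 6 months.

        # Minimal sketch: kick off a scrub on every imported ZFS pool.
        # Assumes the zpool CLI on PATH and sufficient privileges; meant to be
        # run from cron or a systemd timer every few months.
        import subprocess

        def list_pools() -> list[str]:
            # "zpool list -H -o name" prints one pool name per line, no header.
            out = subprocess.run(
                ["zpool", "list", "-H", "-o", "name"],
                capture_output=True, text=True, check=True,
            ).stdout
            return out.split()

        if __name__ == "__main__":
            for pool in list_pools():
                # "zpool scrub" returns immediately; the scrub itself runs in
                # the background and repairs what it can from redundancy.
                subprocess.run(["zpool", "scrub", pool], check=True)
                print(f"started scrub on {pool}")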