• 8 Posts
  • 262 Comments
Joined 1 year ago
Cake day: July 18th, 2024


  • That’s not how it works though.

    I actually agree that registration is silly. It’s trivial to create fake email addresses; all it does is present obstacles and slight privacy implications for legitimate users, while forming an incredibly mild speedbump for malicious users. But it’s probably not going away: every little tool in the toolbox that overworked volunteer admins can deploy against the unending tide of malicious users trying to make their lives more difficult is going to get deployed if it is easy to do.



  • No idea about tools although I hope you find something.

    Two related suggestions that will change your life:

    1. Grunt Fund if you are making decisions about equity
    2. Have people estimate the total time for each task, rigidly enforce that every man-hour spent on a project gets allocated to one of those tasks (including the elusive but vital “oh shit we forgot” task), and keep track of the coefficient between estimated and actual time. It’ll be different for different people sometimes. When estimating a new project, have people come up with estimates and then multiply by the coefficient. Be transparent with everyone about this system. It’ll revolutionize your project management life once people get used to it. I tried to find a blog post which explains it in more detail, but honestly, it’s not complicated, and Google is too shit now to find it.
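    The coefficient bookkeeping in step 2 is simple enough to sketch; every number and name below is invented for illustration, not taken from any real tool:

```python
# Hypothetical sketch of the estimate-vs-actual coefficient system
# described above; the history numbers are made up.

def coefficient(history):
    """history: list of (estimated_hours, actual_hours) pairs for past tasks."""
    estimated = sum(e for e, _ in history)
    actual = sum(a for _, a in history)
    return actual / estimated

# One person's track record, including the "oh shit we forgot" task.
history = [(10, 14), (8, 13), (5, 9)]
k = coefficient(history)      # roughly 1.57 for this person

raw_estimate = 40             # hours they guess for a new project
adjusted = raw_estimate * k   # what you actually plan for
```

    The point of tracking it per person, and being transparent about it, is that the multiplier stops being a punishment and becomes a shared calibration everyone can see.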


  • See my other comment. I wasn’t saying at all that Lemmy was a US-only thing, I was just trying to say that the whole network is probably enough of a niche platform that it’s not worth the substantial effort that would be involved in trying to interfere too much with US users on non-US instances. Big instances in the US, they can fuck with, and so why not (especially since the Take It Down Act is structured to empower individuals to go after them without the government needing to spend resources on it). Instances outside the US, never mind, we have bigger fish to fry.


  • Oh, I am sure most of Lemmy is outside the US. I was saying that, in general, Lemmy (and even Mastodon) is probably too small and difficult a problem for them to want to attack through any systematic method. I think, if anything, they’ll just surveil and punish individual US-based users as opposed to trying to shut down or block instances outside the US.

    It’s one of the advantages of ActivityPub services. Bluesky will be easy for them to attack at the root and I fully expect them to do so, whereas for truly federated services I think the reaction will be “ah what the hell too much trouble, how much harm can they really do.”


  • No, they will just make server operators liable for obeying any conservative who has an issue with any content there and can make the right format of complaint.

    I suspect that instances outside the US will simply be too small a factor to bother with. Small, scattered opposition that is subject to deliberate trolling and disruption at any scale anyone feels like deploying will simply not be worth bothering with.

    This is all assuming a big internet-censorship operation starts (which seems likely). I think it will mainly focus on large US-based companies which host large services. Notably among them will be Bluesky. The only impact it will have on anything ActivityPub-based is that they will shut down or muzzle some big instances inside the US, and then, the point having been made, they will probably move on, leaving instances outside the US to do whatever they want. That’s my prediction.

    Oh, also, Palantir’s surveillance will incorporate people’s comments into their overall dossier on the person, regardless of where their instance is, which means that anyone who maintains a big presence on an ActivityPub network will be putting themselves at personal risk of neo-deportation to somewhere they can never get free from. It will still be legal to do, though. Sure.



  • PhilipTheBucket@ponder.cat to Selfhosted@lemmy.world: “What is Docker?”
    3 months ago

    Okay, so way back when, Google needed a way to install and administer 500 new instances of whatever web service they had going on without it being a nightmare. So they made a little tool to make it easier to spin up random new stuff easily and scriptably.

    So then the whole rest of the world said “Hey Google’s doing that and they’re super smart, we should do that too.” So they did. They made Docker, and somehow that involved Y Combinator giving someone millions of dollars for reasons I don’t really understand.

    So anyway, once Docker existed, nobody except Google and maybe like 50 other tech companies actually needed to do anything that it was useful for (and 48 out of those 50 are too addled by layoffs and nepotism to actually use Borg / K8s / Docker (don’t worry, they’re all the same thing) for its intended purpose). They just use it so their tech leads can have conversations at conferences and lunches where they make it out like anyone who’s not using Docker must be an idiot, which is the primary purpose of technology as far as they’re concerned.

    But anyway in the meantime a bunch of FOSS software authors said “Hey this is pretty convenient, if I put a setup script inside a Dockerfile I can literally put whatever crazy bullshit I want into it, like 20 times more than even the most certifiably insane person would ever put up with in a list of setup instructions, and also I can pull in 50 gigs of dependencies if I want, of which 2,421 have critical security vulnerabilities, and no one will see because they’ll just hit the button and make it go.”

    And so now everyone uses Docker and it’s a pain in the ass to make any edits to the configuration or setup and it’s all in this weird virtualized box, and the “from scratch” instructions are usually out of date.

    The end



  • Yeah. The instant I read “racist, anti-immigrant and anti-LGBTQ+ views in a position of power” I strongly suspected that this is some bullshit.

    IDK man. I’ve heard bad things about carrotcypher before, but I have not looked into them one way or another. It’s sort of dicey both ways: I’m paranoid enough to be wary of a moderator who seems right-wing or pro-corporate and seems to be abusing their position, and also paranoid enough to be wary of a sudden hue and cry that a particular moderator needs to be removed because they are “problematic” in poorly specified ways.

    Just taking a cursory look at it: Mike’s defense of carrotcypher seems pretty credible. He looked at all of carrotcypher’s past moderation actions, decided that it all looked fine, and explained why in detail. The criticism seemed a little unhinged. The one link I saw at a quick glance was to a single one-line reddit comment; the criticism claimed it called CNN propaganda when it didn’t, and claimed he favored deporting Mahmoud Khalil when he didn’t.

    Then there was a bunch of stuff like:

    this is a wildly disgusting person that you welcome into your space. i know that as a trans woman i cannot trust any fosstodon user while knowing what kind of person you happily let on your staff, whether theyre acting on those beliefs or not. it’s not safe for our mostly queer userbase to talk to your fascist-harboring userbase.

    It would have been much easier to just link to some of the messed-up things, instead of asserting them and getting all upset and using the “I’m queer so don’t you DARE argue with me or you will be a ‘problematic’ person too” card.

    IDK, I’m not decided on carrotcypher specifically and he might be a big POS and I just haven’t seen it yet. I have seen arguments that he removed particular reddit posts that there was no legitimate reason to remove. I just wish there was more light and less heat about why exactly he is a problem. Basing it on “this is stuff he removed that he shouldn’t have, look, links” is way better, in my opinion, than just being loudly upset about it.


  • I know it’s only vaguely related, since they’re not US-funded, but at some point I think it would be hilarious (in a particularly poignant way) if the Lemmy developers’ funding got cut off by the process of the explicitly rabid governments they are fans of finally succeeding at destabilizing the friendly Western countries where they live to the point that NLNet wasn’t funded anymore. As I understand it, NLNet is already facing some headwinds because the friendly liberal elements in EU politics are getting replaced by the same kind of “fuck everyone just give money to rich people and also anyone who disagrees with me dies” elements that Russia likes to give money and social-media-shilling campaigns to support.

    Surely Russia and China will jump to the front and fund basic infrastructure work for the good of everyone, if that happened. They could count on it happening, instead of having to get jobs.

    Surely.


  • if you assume the network is badly behaved, fedi breaks down. it makes no sense to me that everything is taken for granted, except privacy.

    This is backwards in my opinion.

    What you described is exactly how it works. Everything in the network is potentially badly behaved. You need to put on rate limits, digital signatures tying activities back to actors, blocks for particular instances, and so on, specifically because whenever you are talking with someone else on the network, they might be badly behaved.

    In general, it’s okay in practice to be a little bit loose with it. If you get some spam from a not-yet-blocked instance, or you send some server a message which it fails to deliver because of a bug, then it is okay. But if you’re sending a message which can compromise someone’s privacy if mishandled, then all of a sudden you have to care on a stricter level. Because it’s not harmless anymore if the server which is receiving the message is broken (or malicious).

    So yes, privacy is different. In practice it’s usually okay to just let users know that nothing they’re sending is really private. Email works that way, Lemmy DMs work that way, it’s okay. But if you start telling people their stuff is really private, and you’re still letting it interact with untrusted servers (which is all of them), you have to suddenly care on this whole other level and do all sorts of E2EE and verification stuff, or else you’re lying to your users. In my opinion.
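    A tiny sketch of that asymmetry, with entirely hypothetical names (this is not any real ActivityPub library’s API):

```python
# Hypothetical delivery check illustrating the asymmetry described above.
# The "visibility" field and the trusted-server list are invented for this sketch.

def deliver(activity, inbox_server, trusted_servers):
    """Decide whether an activity may be sent to a remote inbox."""
    if activity.get("visibility") == "public":
        # Loose handling is fine here: the worst case is some spam,
        # or a buggy server silently dropping the message.
        return "sent"
    if inbox_server in trusted_servers:
        return "sent"
    # For private content, a buggy or malicious receiver leaks data,
    # so refuse (or require end-to-end encryption) rather than hope
    # the remote server behaves.
    return "refused"
```

    The interesting part is that the trusted-server set is essentially empty in the real fediverse, which is why the honest answer for private content is E2EE or “tell users it isn’t really private.”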



  • Because it is transparently obvious that it’s going to happen.

    If you’re sending your users’ private statuses to an ActivityPub server, and just hoping that it’s going to choose to keep them private according to certain parameters even though that’s not what the spec says it needs to do, then you’re fucking up. The fact that we know that particular instances of particular software are exposing them is a nice demonstration of the harm, a confirmation that you’re fucking up when you’re doing that, but it’s not really needed. It is the absolutely predictable result of some basic principles of security which, as a security researcher, you should absolutely be aware of.

    I’ve repeatedly explained this. You’ve repeatedly explained your position. We’ve both had our say. You seem addicted to the concept of “winning” the conversation and wanting to just go back and forth. In that case I would really encourage you to state your position again, and I can state mine again, and we can both have fun doing that for a while. Want to? It sounds like a productive use of both of our time. It’s fun, too.

    Edit: Actually, I didn’t even realize you are on fedia.io when I was typing this. You can test for yourself whether mbin does this, too, by coordinating with @Irelephant@lemm.ee. Follow his user, then have him post one of those private statuses, then fetch his user profile via fedia.io from an incognito window and see whether the private statuses show up. I have no idea whether they will, but if I had to guess, I would say it’s better than even odds.


  • Are you hoping to restart our disagreement through sheer passive-aggressiveness? Okay, sure.

    In my view, this is a Mastodon design flaw (or a user-expectation issue, or whatever you want to call it). I already said that, and you’re engaged in the unproductive arguer’s pastime of pretending not to understand that that’s my position, just aggressively reframing things according to your position and hoping I’ll knuckle under through sheer force of repetition.

    I’m not super invested in trying to track down each and every software that might manage to expose the “private” statuses in this way. I just know that as things come and go there are guaranteed to be some. If you have an mbin account and Mastodon account, though, we can try a little experiment. I don’t know the outcome, I’m just curious after taking a quick look down the FediDB list and a quick grep through mbin’s source code. You can be the one to responsibly disclose to mbin how their ActivityPub-conforming behavior is a problem, if indeed it turns out that it is, since you seem to be extremely committed to the idea that the model of “vulnerability” needs to be applied to this particular ActivityPub-conforming behavior. Since you’re a security researcher, having that as a CVE you discovered can be an achievement for you. It’s all yours, you can have it.


  • Hm… maybe. The exact nature of the problem in Pixelfed means that anyone with a Pixelfed account on a server which is getting private statuses can choose to follow someone who’s set to “approve followers” and then read all the private statuses. I do see how that’s significantly worse than just the normal lay-of-the-land of the problem, which is a little more random, and laying that out as a little roadmap to read someone else’s private statuses before there’s been a nice responsible length of time for things to get fixed could be seen as worsening the problem.

    The point that I’m making is that anyone who’s posting private statuses to Mastodon and expecting them to stay private is making a bad mistake already. The structure of the protocol is such that they can’t be assured of staying private regardless of what Pixelfed did or even if Pixelfed didn’t exist. They’re getting federated to servers whose behavior is not assured, in a way where a conformant ActivityPub implementation can expose them. People who are posting private statuses need to understand that.

    That whole blog post where the person is talking about her partner writing private statuses, and then the gut-wrenching realization that they were being exposed on Pixelfed… but then the resolution being “Pixelfed fucked up I hate Dansup now” and then continuing to post the private statuses, is wrong. That person’s partner needs to stop treating their private posts on Mastodon that way. The timer for responsible disclosure started circa 2017 or whenever Mastodon decided on how to implement their private statuses. It’s been and gone.

    Like I say, I get the harm-reduction aspect of saying it would have been better if Dansup had been a little more discreet about this particularly bad attack vector until a few more days went by for everyone to upgrade. But it hardly matters. There is still server software out there that is going to be exposing people’s private Mastodon posts. It’s just how federation between untrusted servers works. Giving people the illusion that if Dan had just been more discreet then this harm would have been reduced is lulling them into a false sense of security, in my view.


  • Maybe I’m wrong, but shouldn’t posts only be insecure if they’re propagated to the insecure instance?

    “Insecure” in this case simply means any server that doesn’t implement Mastodon’s custom handling for “private” posts. With that definition, the answer to your question is yes. It has been mentioned by Mastodon people that this is a significant problem for the ability to actually keep these private posts private in the real world. The chance of it going wrong is small (depending on your follower count) but the potential for harm is very large. I would therefore go further, and say that it’s a very bad thing that Mastodon is telling people that these posts are “private” when the mechanism which is supposed to keep them private is so unreliable.

    https://marrus-sh.github.io/mastodon-info/everything-you-need-to-know-about-privacy-v1.3-020170427.html

    https://github.com/mastodon/mastodon/issues/712

    Is any private post visible to people on servers that the poster doesn’t have followers on?

    It is not. If you’re sufficiently careful with approving your followers, making sure that each of them is on an instance that’s going to handle private posts the way you expect, then you’re probably fine.

    Could I curl the uri of a post thats “private” and get the post’s content?

    If it’s been federated to an insecure server then yes. If not then I think no.
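    For the curl question specifically: the request any remote party can make is just an unauthenticated fetch with the standard ActivityPub content-negotiation header. The URI below is made up for illustration:

```python
# Fetch a federated object the way any server (or curl) would: no
# credentials, just the ActivityPub Accept header. Example URI is invented.
import urllib.request

ACTIVITY_JSON = "application/activity+json"

def ap_request(uri):
    """Build the unauthenticated request any remote party could make."""
    return urllib.request.Request(uri, headers={"Accept": ACTIVITY_JSON})

def fetch_ap_object(uri):
    """Perform the fetch and return the raw response body."""
    with urllib.request.urlopen(ap_request(uri)) as resp:
        return resp.read()

# Roughly equivalent shell command:
#   curl -H 'Accept: application/activity+json' https://instance.example/notes/123
```

    Whether that fetch returns a “private” post then depends entirely on the receiving server’s own access checks, which is the whole problem.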