The difference is that I’m talking about the automation creating completely new groupings, most akin to a community on Lemmy, that are coordinated across multiple users and, in my mind, created “simultaneously”, with each user still agreeing to opt in to inclusion in that group.
There is an alternative way to do this, which would be for the automation to group the posts after posting. However, there is a question there about opt-in: will users want to opt existing posts in after the fact?
One way that would definitely be easiest to implement would be if these groupings are essentially threads, with a single piece of content as the “start” / “seed” of the thread and the other posts relating to that thread. Regarding opt-in for that, I suppose it could be as easy as enabling/disabling “thread seeding”.
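A minimal sketch of what that data model could look like, assuming nothing about existing Lemmy/ActivityPub structures (all names here are hypothetical):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Post:
    id: str
    author: str
    body: str
    allow_grouping: bool = False   # per-post opt-in to automated grouping / seeding

@dataclass
class SeededThread:
    seed: Post                     # the single piece of content the thread grows from
    related: list[Post] = field(default_factory=list)

def maybe_seed(post: Post) -> Optional[SeededThread]:
    # "Thread seeding" enabled/disabled per post: only an opted-in post can start a thread.
    return SeededThread(seed=post) if post.allow_grouping else None

def attach(thread: SeededThread, post: Post) -> None:
    # Relevance is decided elsewhere by the automation; here we only honour the opt-in.
    if post.allow_grouping:
        thread.related.append(post)
```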
Very interesting, so just with a tweak to the client you could treat communities as basically (hash)tags instead of forums? I suppose what I’m thinking of amounts to unique tag identifiers that are machine-identifiable based on subject matter / content. I know that this is effectively what is going on under the hood of the social graph at the large social media sites, but rather than connecting the content together into transparent collections, they instead serve it to individuals through the suggestion engine as part of feeds.
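Hedged sketch of how content-derived tag identifiers might work: the embedding below is a toy stand-in (a real version would use an actual sentence-embedding model), and the threshold and function names are my own assumptions, not anything that exists today.

```python
import hashlib

def embed(text: str) -> list[float]:
    # Toy bag-of-letters vector standing in for a real embedding model.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are unit-length, so the dot product is the cosine similarity.
    return sum(x * y for x, y in zip(a, b))

def tag_id(representative_text: str) -> str:
    # Deterministic identifier derived from the grouping's representative content,
    # so a tweaked client could treat the grouping like a (hash)tag.
    return hashlib.sha256(representative_text.encode()).hexdigest()[:12]

def assign(post_text: str, groups: dict[str, str], threshold: float = 0.9) -> str:
    # groups maps tag id -> representative text of that grouping.
    post_vec = embed(post_text)
    for gid, rep in groups.items():
        if cosine(post_vec, embed(rep)) >= threshold:
            return gid
    gid = tag_id(post_text)
    groups[gid] = post_text   # no close match: this post starts a new grouping
    return gid
```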
Something both “spontaneous” and somewhat transparent, at least in the grouping/collection, is what I think would differentiate the feature, but how to defend it from manipulation is a big question. How do you protect algorithm/AI-guided curation from AI-guided manipulation seeking to maximize placement of content in as many groups/collections as possible? Even a reputation system could just be used to reinforce more advanced content placement techniques.
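One naive counter-measure, offered purely as an assumption rather than a proven defence, would be to cap how many groupings any single post can be placed in, keeping only its highest-relevance matches:

```python
def cap_placements(scored_groups: dict[str, float],
                   max_groups: int = 3,
                   min_score: float = 0.8) -> list[str]:
    """scored_groups maps group id -> relevance score for a single post."""
    # Drop weak matches, then keep only the top few, so a post tuned to look
    # relevant everywhere still lands in a bounded number of collections.
    eligible = [(gid, score) for gid, score in scored_groups.items() if score >= min_score]
    eligible.sort(key=lambda pair: pair[1], reverse=True)
    return [gid for gid, _ in eligible[:max_groups]]
```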
I guess there is always the big shrug: if it is relevant, it is relevant.