Moderation does not matter if the post is made on a comm or instance which favors it (*cough* .ml *cough*).
Bots and brigading are not the issue here. Neither of them were a factor in the post I linked, and they are not a necessary part of the abuse process under discussion.
Yepowertrippinbastards works on a small scale, but it is not inherently scalable. As the fediverse grows, it will become less practical to name and shame bad actors on an individual basis. Naming and shaming also does not help when the abuse setup (a preemptive blocklist) can be recreated by any new account.
The very nature of the abuse system being described means that anybody who would report it on YPTB or similar comms can only do so once before themselves being blocked and unable to view future posts of that sort.
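The abuse pattern being described can be sketched in a few lines. This is a toy model with hypothetical names and rules, not any real Lemmy or Bluesky API: a throwaway account preemptively blocks the moderators and the handful of users who usually call out misinformation, then posts; under Bluesky-style block semantics, those accounts can no longer see the post, so nobody is left who would report it.

```python
def post_visible_to(viewer: str, author: str, blocks: dict[str, set[str]]) -> bool:
    """Bluesky-style rule (toy version): an account blocked by the author
    cannot see the author's posts."""
    return viewer not in blocks.get(author, set())

moderators = {"mod1", "mod2"}
fact_checkers = {"userA", "userB", "userC"}

# The preemptive blocklist, applied by the throwaway before anything is posted.
blocks = {"throwaway": moderators | fact_checkers}

print(post_visible_to("mod1", "throwaway", blocks))         # False: hidden from mods
print(post_visible_to("userA", "throwaway", blocks))        # False: hidden from fact-checkers
print(post_visible_to("random_user", "throwaway", blocks))  # True: everyone else still sees it
```

The point of the sketch is that the cost of the blocklist is paid once, up front, and any new account can replay it.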
We should try to keep in mind that the fediverse and lemmy will likely grow to larger scales. Any systems and safety measures we implement should take that into account. The block mechanism as you suggest is extremely ripe for abuse at large scale, and relying on mods / admins to combat it will place an unnecessary extra load upon them, if it is even possible.
> The block mechanism as you suggest is extremely ripe for abuse at large scale, and relying on mods/admins to combat it will place an unnecessary extra load upon them, if it is even possible.
Interestingly enough, I feel like the current system requires mods/admins to keep watch at all times, since harassment can happen at any moment and users can’t really protect themselves.
There is a scenario which is exactly the opposite of the one you presented:

- a user gets harassed and blocks the harasser
- the harasser can still comment on every comment and post of that user, requiring mods and admins to jump in to stop the abuse; with the Bluesky system, users can prevent that themselves
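The difference between the two block semantics in this scenario can be shown with a toy comparison (a hypothetical model, not any real API): a Lemmy-style block only hides the harasser's content from the blocker, so the harasser can keep replying, while a Bluesky-style block also removes the blocked account's ability to reply.

```python
blocks = {"victim": {"harasser"}}  # the victim has blocked the harasser

def lemmy_can_reply(viewer: str, author: str) -> bool:
    # Lemmy-style block (toy version): one-way filtering; it never stops
    # the blocked user from commenting.
    return True

def bluesky_can_reply(viewer: str, author: str) -> bool:
    # Bluesky-style block (toy version): being blocked by the author also
    # cuts off the blocked account from replying.
    return viewer not in blocks.get(author, set())

print(lemmy_can_reply("harasser", "victim"))    # True: mods must step in
print(bluesky_can_reply("harasser", "victim"))  # False: the victim protected themselves
```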
> We should try to keep in mind that the fediverse and lemmy will likely grow to larger scales.
Bluesky just passed 21 million users.
> Bots and brigading are not the issue here. Neither of them were a factor in the post I linked,
I had a look again at the post.
> I first prepared the account by blocking all the moderators and 4 or 5 users who usually call out misinformation posts.
Would that be enough here? It depends on the topic of the thread (there is no link in the post, so I can’t see what they were talking about), but I’m pretty sure more than 4 or 5 people would call out the misinformation.
> The very nature of the abuse system being described means that anybody who would report it on YPTB or similar comms can only do so once before themselves being blocked and unable to view future posts of that sort.
Can’t we apply here the same argument other people use about Lemmy being a public forum: the posts are public for everyone except the blocked accounts?