• Lvxferre@mander.xyz · 1 year ago

    The main thing that made Lemmy succeed was structural: no matter how bad an admin team is, you can limit their impact on your experience by picking another instance.

    The main focus of the text is something else though. It’s what I call “the problem of the witches”.

    Child-eating witches are bad, but so is witch hunting. People are bound to be falsely labelled as witches, the hunt creates social paranoia, and somewhere down the road what counts as witch behaviour will grow to include silly things with barely anything to do with witchcraft - such as planting wheat:

    • if you plant wheat, you’ll harvest it.
    • if you harvest wheat, you get straw.
    • if you get straw, you can make a straw broom.
    • if you make a straw broom, you can fly through the sky.
    • conclusion: planting wheat is witchcraft.

    However, once you say “we don’t burn witches here”, you aren’t just protecting the people falsely labelled as witches (a moral thing to do). You’re also protecting the actual witches - that’s immoral, and more importantly it’s bound to attract witches and drive away the people who don’t want witches around.

    In other words, no matter how important freedom of speech is, once you advertise a site based on its freedom of speech you’ll get a handful of free speech idealists, and lots of people who want to use that freedom to say things that, for good reason, shouldn’t be said.

    That harmed a lot of Reddit alternatives. Especially since Reddit was doing the right thing for the wrong reasons (getting rid of witches not out of morals, or concern for its userbase, but because the witches were bad PR). So you got a bunch of witches on the loose, eager to settle in whatever new platform you created.

    • Evotech@lemmy.world · 1 year ago

      Well said. Then at some point your platform gets labelled “the witch platform” and the non-witches leave.

      • Lvxferre@mander.xyz · 1 year ago

        It happens before the label. When you start seeing a witch flying across your sky every night, you’re already leaving.

        • JohnDoe@lemmy.myserv.one · 11 months ago

          There is another solution: make it so witches cannot cause harm. Everyone gives up a little bit to make everything work for everyone.

          We already give things away: money through taxes, certain liberties, information, hours of our lives; how many of those do we give with complete intentionality? That is, could we choose to do something else? I’d rather do something I choose or want to do, even if it’s harmful or less pleasant, because it’s something I’m a party to rather than not.

          • andrew@radiation.party · 11 months ago

            A gun would help stop those witches from flying in the sky.

            I may be taking this analogy the wrong way.

            • Lvxferre@mander.xyz · 11 months ago

              Okay, the gun thing made me laugh.

              But perhaps you aren’t taking the analogy the wrong way?

              A gun is the use of force. And the paradox of tolerance does prescribe the use of force against “the intolerant” in a few situations. Not everything is solved by, for example, letting fascists hang out with their friends at McDonald’s. (Except Mussolini. Upside down.)

    • JohnDoe@lemmy.myserv.one · 11 months ago

      This really sounds like a reformulation of Popper’s Paradox of Tolerance - with more accessible language, and IMO a preferable one. I’ve quoted it below for your convenience:

      Less well known is the paradox of tolerance:

      Unlimited tolerance must lead to the disappearance of tolerance. If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them. — In this formulation, I do not imply, for instance, that we should always suppress the utterance of intolerant philosophies; as long as we can counter them by rational argument and keep them in check by public opinion, suppression would certainly be unwise. But we should claim the right to suppress them if necessary even by force; for it may easily turn out that they are not prepared to meet us on the level of rational argument, but begin by denouncing all argument; they may forbid their followers to listen to rational argument, because it is deceptive, and teach them to answer arguments by the use of their fists or pistols. We should therefore claim, in the name of tolerance, the right not to tolerate the intolerant. (in note 4 to Chapter 7, The Open Society and Its Enemies, Vol. 1)

      • Lvxferre@mander.xyz · 11 months ago

        Yup - it is, partially, Popper’s paradox of tolerance.

        However, there’s a second risk that I mentioned there, one that Popper doesn’t talk about: the mechanisms and procedures used to get rid of the intolerant might be abused and misused to hunt everyone else.

        I call this “witch hunting”, after the mediaeval practice - because the ones being thrown into the fire were rarely actual witches; they were mostly common people. You see this all the time on social media, especially in environments that value “trust” (i.e. gullibility) and orthodoxy over rationality. Such as Twitter (cue “the main character of the day”), Reddit (the pitchfork emporium), and even here on Lemmy.

        [from your other comment] There is another solution: make it so witches cannot cause harm. Everyone gives up a little bit to make everything work for everyone.

        It’s trickier than it looks. We might simplify them as “witches”, but we’re dealing with multiple groups. Some partially overlap (e.g. incels/misogynists vs. homophobic people), but some have almost nothing to do with each other beyond “they cause someone else harm”. So preventing them all from causing harm is actually a lot of work - to the point of being unviable.

    • sugar_in_your_tea · 1 year ago

      I think this makes way more sense than the OP. The OP seems to be a leftist, so that’s probably why they reached that conclusion.

      But I agree: the real solution wasn’t that a certain kind of witch was banned; it was that the various kinds of witches could be quarantined. The Lemmy devs had no control over other instances using the platform to host stuff they disagreed with (e.g. exploding heads was/is a thing), but they did have control over what content was allowed on their own hosted instance, which was by default the most popular.

      The thing that saved Lemmy, imo, wasn’t being leftist, but providing a separate space for the most extreme leftists (lemmygrad) and keeping the main instance pretty tame. The leftists could talk about leftist stuff on their instance, the right wing could talk about right-wing stuff on theirs, and the main instance had the more moderate people. Sure, some crazy stuff appeared on the main page from time to time, but it was easy to write off as coming from “that weird instance” instead of something that represented the platform as a whole.

      Other projects didn’t have that separation, so the early content was dominated by extreme views from whichever group felt motivated to join, and frequently that was far-right nonsense. With Lemmy, actual communists were quarantined by the nature of federation, and many instances blocked theirs, so there were plenty of places without that content, which attracted moderates.

      I’m actually working on my own Reddit alternative, and I’m trying to be extra careful in how I approach moderation so I don’t repeat the mistakes of other alternatives (happy to discuss if you’re interested). Lemmy has done a great job, whether intentionally or not, and there are some great lessons to be learned from it.

      • Lvxferre@mander.xyz · 1 year ago

        For the sake of honesty, I need to mention that I’m a leftist myself. A communist, in fact. (I even had a Lemmygrad account, and I deleted it for reasons unrelated to political disagreements.) I’m just not willing to play along with witch hunting, nor with the sort of false dichotomy that’s been so common nowadays, regardless of political views.

        And funnily enough, I said the above partially because of Marxism - the ideology (superstructure) is in large part a result of the base, and in this case the base is the platform structure. A unified platform structure will eventually lead to a unified ideology, while ActivityPub leads to a loosely connected network, not just of instances but of ideologies too.

        The other reason I said the above was Ruqqus. I was there, and I saw exactly what happened: the platform started out rather friendly and broad, then the alt-right started seizing control, and everyone else got exiled to Discord - including the developers and admins. It’s basically what I described above: they said “we don’t burn witches here”, and suddenly all the VOATfugees shat the place up.

        I’m actually working on my own Reddit alternative […]

        I hope that your platform succeeds. Seriously. Another nail in Reddit’s coffin doesn’t hurt.

        Just out of curiosity, do you plan on using the ActivityPub protocol, or something similar?

        • sugar_in_your_tea · 1 year ago

          do you plan on using the ActivityPub protocol

          Maybe later, but federation isn’t an initial goal.

          I want a completely distributed system like BitTorrent or IPFS, so all data is stored on user devices instead of centralized servers (there might be some servers to help with availability). I want moderation to be distributed as well, but I’m trying to figure out a way to promote diversity instead of it falling into the hands of whichever group comes first (e.g. with a simple voting model) or fracturing into lots of smaller groups (e.g. a web of trust).

          I feel moderation needs to be good from the start, so I’m holding off on integrating with other services until I figure that out.

          A unified platform structure will eventually lead to a unified ideology

          Perhaps. Communities help, but the real issue is the quality (or at least diversity) of moderation (i.e. the instance admins, until FT mods are chosen). Reddit worked well because it had pretty good moderation where it counted.

          • Lvxferre@mander.xyz · 1 year ago

            A distributed system? Madman! You’re going a step further! Mad respect for that, seriously. Now I want to see your project succeed.

            Regarding moderation, did you see this text? I feel like it’s perhaps worth a try; I don’t expect it to devolve into web-of-trust-like “feuds”, as there’ll always be people acting as links between multiple groups, and it also prevents the “first come, first served” issue that you mentioned.

            • sugar_in_your_tea · 1 year ago

              Great article! I was thinking along these lines, so I’m glad to see a formalized version of it.

              What if participants could automatically block the malicious peer, if they discover that the peer has been blocked by someone the participant trusts?

              That’s essentially what I’m after. Here’s the basic mechanism I’ve been considering:

              1. Users report posts, which builds trust with other users who reported the same post
              2. Users vote on posts, which builds trust with other users who voted the same way
              3. A post is removed for a given user if enough people they trust from #1 reported it
              4. Ranking of posts is based largely on #2, as are suggestions for new communities
              5. Users can periodically review a moderation log (like Steam’s recommendation queue) to refine their moderation experience (e.g. agree or disagree with reports), and they can disable moderation entirely
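
              A minimal sketch of how steps 1 and 3 above could fit together, purely as an illustration - the class name, the trust constants, and the threshold are all hypothetical, not taken from any existing project:

                  # Toy model of report-based trust (steps 1 and 3 above).
                  # All names and constants are made up for illustration.
                  from collections import defaultdict

                  TRUST_GAIN = 0.1      # trust gained each time two users report the same post
                  HIDE_THRESHOLD = 1.0  # total trust in a post's reporters needed to hide it

                  class ModerationModel:
                      def __init__(self):
                          # trust[a][b]: how much user a trusts user b's reports
                          self.trust = defaultdict(lambda: defaultdict(float))
                          # reports[post_id]: set of users who reported that post
                          self.reports = defaultdict(set)

                      def report(self, user, post_id):
                          # Step 1: reporting builds mutual trust with earlier reporters.
                          for other in self.reports[post_id]:
                              self.trust[user][other] += TRUST_GAIN
                              self.trust[other][user] += TRUST_GAIN
                          self.reports[post_id].add(user)

                      def is_hidden_for(self, viewer, post_id):
                          # Step 3: hide the post if the viewer's trusted peers reported it enough.
                          score = sum(self.trust[viewer][r] for r in self.reports[post_id])
                          return score >= HIDE_THRESHOLD

                  # Three users who keep reporting the same spammer end up
                  # hiding that spammer's new posts from one another.
                  m = ModerationModel()
                  for post in ("spam1", "spam2", "spam3", "spam4", "spam5", "spam6"):
                      for user in ("alice", "bob", "carol"):
                          m.report(user, post)
                  m.report("bob", "spam7")
                  m.report("carol", "spam7")
                  print(m.is_hidden_for("alice", "spam7"))  # True

              The nice property of keeping trust pairwise is that a brigading group’s reports only hide content from people who already agree with that group, which is roughly the diversity goal mentioned earlier in the thread.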

              And since content needs to be stored on people’s machines, users would be less likely to host posts they disagree with, so hopefully very unpopular posts (e.g. CSAM) disappear.

              So I’m glad this is formalized; I can probably learn quite a bit from it.