• breakfastmtn@lemmy.ca
    1 year ago

    There’s far less of it because of server blocks. There are tons of gross servers that are simply walled off from everyone else. Mastodon.social alone blocks a couple hundred servers.

    Every now and then someone will write an article like, ‘I love free speech so I thought I could run a Mastodon server without blocking anyone… boy was I ever wrong.’ There’s some truly vile shit out there.

    • Deceptichum@kbin.social
      1 year ago

      A couple hundred servers is nothing compared to a couple hundred thousand Facebook groups.

      FB removed 73.3 million pieces of CSAM in the first 9 months of '22 alone, and that’s only the stuff they bother to catch.

      • 0xtero@kbin.social
        1 year ago

        It’s also a matter of scale. FB has 3 billion users and it’s all centralized, so they are able to police it. Their Trust and Safety team is large (which has its own problems, because they outsource that - but that’s another story). The fedi is somewhere around 11M users (according to fedidb.org).
        The federated model doesn’t really “remove” anything, it just segregates the network into “moderated, good instances” and “others”.
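
        That segregation point can be sketched in a few lines. This is a hypothetical illustration of a domain blocklist, not Mastodon’s actual implementation; the names and URLs are made up:

        ```python
        from urllib.parse import urlparse

        # Illustrative only: domains this instance has defederated from.
        BLOCKED_DOMAINS = {"badserver.example"}

        def is_federated(actor_uri: str) -> bool:
            """Return False if the actor's home server is blocked here."""
            domain = urlparse(actor_uri).hostname or ""
            return domain not in BLOCKED_DOMAINS

        posts = [
            "https://mastodon.social/users/alice",
            "https://badserver.example/users/mallory",
        ]
        # Blocked content isn't deleted at its source - it just never
        # reaches users on this instance.
        visible = [p for p in posts if is_federated(p)]
        ```

        The key point is in the last comment: the filtered post still exists on the blocked server; defederation only removes it from this instance’s view of the network.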

        I don’t think most fedi admins are actually following the law by reporting CSAM to the police (because that kind of thing requires a lot of resources); they just remove it from their servers and defederate. Bottom line is that the protocols and tools built to combat CSAM don’t work too well in the context of federated networks - we need new tools and new reporting protocols.

        Reading the Stanford Internet Observatory report on fedi CSAM gives a pretty good picture of the current situation, and it’s fairly fresh:
        https://cyber.fsi.stanford.edu/io/news/addressing-child-exploitation-federated-social-media