Hey all, another moderation topic here 🙀

We’ve all had a lot of discussion about it. Rather than posting my own opinion, I wanted to share this talk about Usenet and what they learned from it.

They talk about the fediverse at the end!

I also want to share these 2 stories:

A case study on how Usenet learned to deal with spam: https://www.techdirt.com/2020/09/18/content-moderation-case-study-usenet-has-to-figure-out-how-to-deal-with-spam-april-1994/

One proposed way for Usenet to deal with child porn: https://www.cnet.com/tech/services-and-software/clean-news-proposed-as-usenet-censor/

If you watch the video and find any interesting bits, let’s discuss them!

  • paysrenttobirds · 1 year ago

    Very interesting, thank you. The Clean News solution sounds like what some US states are planning for age verification, supposedly keeping posters and readers anonymous for the purposes of the general public, but traceable by law enforcement. Maybe I’ve misunderstood, but I’m more comfortable having that kind of system applied voluntarily when posting certain media than by mandate to all users of a platform.

  • Difficult_Bit_1339 · 1 year ago

    It seems inevitable that some kind of ID system will be needed online. Maybe not a real ID linked to your person, but some sort of hard-to-obtain credential. That way, getting a credential banned is inconvenient, and posts without an ID signature can be filtered easily.

    It used to be that spam was fairly easy for a human to detect; it may have been hard to automate that detection, but a person could generally tell what was a bot and what wasn’t. Large language models (like GPT-4) can make spam accounts appear to produce real conversations, just like a person.

    The large-scale use of such systems provides the ability to influence people en masse. How do you know you’re talking to people and not GPT-4 instances arguing for a specific interest? The only real way to solve this is to create some sort of system where posting has a cost, similar to how cryptocurrencies use proof of work to ensure the transaction network isn’t spammed.

    Having to perform computationally heavy cryptography, using a key registered to your account, before posting would massively increase the cost of such spam operations. Imagine if your PC had to solve a problem that took 5+ seconds before your post went through. It wouldn’t be terribly inconvenient for you, but for someone trying to post from 1,000 different accounts it would be a crippling limitation that would be expensive to overcome.

    That would fight spam effectively, but it wouldn’t do much to filter content.
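The cost asymmetry described above can be sketched with a toy proof-of-work check (my own illustration, not from the talk; SHA-256 and the challenge string are arbitrary choices). Each extra difficulty bit doubles the expected minting work, while verification stays a single hash:

```python
import hashlib
import time

def solve_pow(challenge: str, bits: int) -> int:
    """Find a nonce such that sha256(challenge:nonce) has `bits` leading zero bits."""
    target = 1 << (256 - bits)  # hash value must fall below this threshold
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify_pow(challenge: str, nonce: int, bits: int) -> bool:
    """Verification is a single hash: cheap for the server, costly to forge."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))

# The "5 seconds per post" target is really a per-device tuning knob:
# each extra bit doubles the expected work for the poster.
for bits in (8, 12, 16):
    start = time.perf_counter()
    nonce = solve_pow("post-id-12345", bits)
    elapsed = time.perf_counter() - start
    assert verify_pow("post-id-12345", nonce, bits)
    print(f"{bits} bits: nonce {nonce} found in {elapsed:.4f}s")
```

Note the asymmetry: a spammer posting from 1,000 accounts pays the minting cost 1,000 times, while moderators verify each post with one hash.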

    • MomoTimeToDie · 1 year ago

      Imagine if your PC had to solve a problem that took 5+ seconds prior to your post being posted. It wouldn’t be terribly inconvenient to you

      The problem is: 5+ seconds on what? A low-end smartphone? A Bitcoin mining rig? Your average Joe’s laptop? Anything reasonable for the end user will be a minor setback for anyone with the resources to run massive spam operations, and anything challenging for them will be a massive interruption for regular users.

      • Difficult_Bit_1339 · 1 year ago

        I’d have to look at the available proof-of-work systems to see what’s out there if I were implementing this. I imagine the target would be a mid-range smartphone, which would have dedicated hardware for cryptographic operations.

        Upon some skimming, Hashcash seems like a candidate. It uses SHA-1 hashing, which is common enough that most smartphones have dedicated hardware for it, so a phone wouldn’t be at as much of a disadvantage relative to a PC as it would with algorithms that lack dedicated hardware implementations.
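For the curious, a minimal Hashcash-style mint/verify pair might look like this. This sketch only loosely follows the v1 stamp layout; the real format base64-encodes a random salt and the counter, which is simplified away here:

```python
import hashlib
from datetime import datetime, timezone

def leading_zero_bits(digest: bytes) -> int:
    """Count how many leading bits of the digest are zero."""
    value = int.from_bytes(digest, "big")
    return len(digest) * 8 - value.bit_length()

def mint_stamp(resource: str, bits: int = 12) -> str:
    """Mint a Hashcash-style stamp: a string whose SHA-1 digest has `bits` leading zero bits.

    Simplified layout: version:bits:date:resource::counter
    """
    date = datetime.now(timezone.utc).strftime("%y%m%d")
    counter = 0
    while True:
        stamp = f"1:{bits}:{date}:{resource}::{counter}"
        if leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= bits:
            return stamp
        counter += 1

def check_stamp(stamp: str) -> bool:
    """The verifier recomputes one SHA-1 hash and checks the claimed difficulty."""
    bits = int(stamp.split(":")[1])
    return leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= bits

stamp = mint_stamp("alice@example.com", bits=12)
assert check_stamp(stamp)
print(stamp)
```

At 12 bits this takes a few thousand SHA-1 hashes to mint; real deployments would tune the bit count to hit the multi-second target discussed above.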