• mindbleach · 3 days ago

    … what illegal images are you going to catch, scanning for words?

    • db0@lemmy.dbzer0.com · 3 days ago

      It’s not scanning for words. It’s a neural network scanning for potential CSAM. There are false positives.

      • mindbleach · 3 days ago

        Including when it detects words, which apparently it can do without scanning.

        • db0@lemmy.dbzer0.com · 3 days ago

          Yes, it’s not seeing the word as such, but the shape of the word (and the look of the flower) potentially tells it there is some sort of child context, and it also picks up a lewd context from your other words.
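
          To make that concrete: a CLIP-style image encoder can pick up the shape of rendered text and let it pull the whole image toward related concepts. The sketch below renders a word onto a blank canvas and scores it against two text prompts; the model name, prompts, and drawing code are illustrative assumptions, not the actual filter.

```python
# Illustrative sketch, not the real filter: does rendered text inside an
# image shift a CLIP-style encoder toward related concepts?
import torch
from PIL import Image, ImageDraw
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def image_with_text(text: str) -> Image.Image:
    """Draw the given text onto a plain white canvas."""
    img = Image.new("RGB", (224, 224), "white")
    ImageDraw.Draw(img).text((20, 100), text, fill="black")
    return img

prompts = ["a child", "a flower"]
images = [image_with_text("parenthood"), image_with_text("")]

inputs = processor(text=prompts, images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1)

# If the encoder has learned word shapes, the "parenthood" image tends to
# lean further toward the child-related prompt than the blank one does.
print(probs)
```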

          • mindbleach · 3 days ago

            That’s seeing the word. Neural networks can in fact detect words. Have done since the 80s.

            This hair-splitting is pointless and bizarre.

            Demonstrably - the robot blocks certain words, as if words could be child pornography.

            • db0@lemmy.dbzer0.com · 3 days ago

              > That’s seeing the word. Neural networks can in fact detect words. Have done since the 80s.
              >
              > This hair-splitting is pointless and bizarre.

              I’m just trying to explain how it works and how it’s not actually reading the words you write. You seem to be combative about it for some reason.

              The bot sees the whole image, including the flower drawing and the other words. It figures out the weights for lewd and for underage independently, and if the weights exceed some thresholds, it rejects the image.
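
              Roughly, that check can be sketched as below: one weight for lewdness and one for apparent age, each computed independently over the whole image, with rejection only when both cross a threshold. The model, prompts, and threshold values are illustrative assumptions, not the bot’s actual code.

```python
# Minimal sketch of a two-signal filter, assuming a CLIP-style model that
# scores the whole image against concept prompts. All names and numbers
# below are guesses for illustration only.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

LEWD_PROMPTS = ["explicit nudity", "safe-for-work content"]
AGE_PROMPTS = ["a young child", "an adult person"]

def concept_weight(image: Image.Image, prompts: list[str]) -> float:
    """Probability mass the model puts on the first (flagged) prompt."""
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape (1, len(prompts))
    return logits.softmax(dim=-1)[0, 0].item()

def rejects(image: Image.Image, lewd_threshold: float = 0.8, age_threshold: float = 0.8) -> bool:
    # Each signal is weighed independently over the whole image (drawings,
    # rendered words and all); the image is only rejected when both weights
    # exceed their thresholds, which is also where false positives creep in.
    return (concept_weight(image, LEWD_PROMPTS) > lewd_threshold
            and concept_weight(image, AGE_PROMPTS) > age_threshold)
```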

              • mindbleach · 3 days ago

                ‘Why would it scan for words?’ ‘No it doesn’t.’ Yeah, can’t imagine why this interaction feels tense.

                The word parenthood isn’t exceptionally boob-shaped. It’s being caught for its semantic relations. That is an absurdity on the part of the filter: it excludes words vaguely related to children.

                The weight for any word should be nil.

                • db0@lemmy.dbzer0.com · 3 days ago

                  Again, it’s not just the word; it’s the image as well, plus the possibility that it roughly matches the shape of the word "parenthood" to its weights for children.