Yes, it’s not seeing the word; rather, the shape of the word (and the look of the flower) potentially tells it there’s some sort of child context, and it also picks up lewd context from your other words.
That’s seeing the word. Neural networks can in fact detect words. Have done since the 80s.
This hair-splitting is pointless and bizarre.
I’m just trying to explain how it works and how it’s not actually reading the words you write. You seem to be combative about it for some reason.
The bot sees the whole image, including the flower drawing and the other words. It figures out the weights for lewd and for underage independently, and if the weights exceed some thresholds, it rejects the image.
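The mechanism being described is roughly multi-label scoring with independent thresholds. Here's a toy sketch of that logic; the concept names, scores, and cutoff values are all made up for illustration, and a real classifier would produce the scores from the image itself.

```python
# Hypothetical per-concept cutoffs -- a real filter's labels and values are unknown.
THRESHOLDS = {"lewd": 0.7, "minor": 0.6}

def should_reject(scores: dict[str, float]) -> bool:
    """Reject only when every flagged concept independently clears its threshold."""
    return all(scores.get(label, 0.0) >= cutoff
               for label, cutoff in THRESHOLDS.items())

# A drawing of a flower captioned "parenthood" might (wrongly) score like this:
print(should_reject({"lewd": 0.81, "minor": 0.72}))  # both high -> True (rejected)
print(should_reject({"lewd": 0.81, "minor": 0.10}))  # lewd alone -> False (allowed)
```

Note that under this scheme a false positive only needs each score to be nudged past its cutoff, which is how an innocuous word that correlates with "child" concepts can tip an otherwise borderline image into rejection.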
‘Why would it scan for words?’ ‘No it doesn’t.’ Yeah can’t imagine why this interaction feels tense.
The word parenthood isn’t exceptionally boob-shaped. It’s being caught for its semantic relations. That is an absurdity on the part of the filter: it excludes words vaguely related to children.
Again, it’s not just the word, it’s the image as well, plus the possibility that it roughly matches the shape of the word “parenthood” to its child-related weights.
It’s not scanning for words. It’s a neural network and it’s scanning for potential CSAM. There are false positives.
Including when it detects words, which apparently it can do without scanning.
Demonstrably - the robot blocks certain words, as if words could be child pornography.
The weight for any word should be nil.