As long as the AI is capable enough, I don’t see what’s wrong with it, and I understand if Reddit decides to utilize AI for financial reasons. I don’t know how capable the AI is, and it is certainly not perfect, but AI is a technology and it will improve over time. If a job can be automated, I don’t see why it should not be automated.
I think it’s a bold assumption that AI is often trained only by neurotypical cishet white men, though it is a possibility. I do not fully understand how AI works or how the company trains their AI, so I cannot comment any further. I admit AI has its downsides, but it also has its upsides, same as humans. Reddit is free to utilize AI to moderate subreddits, and users are free to complain or leave Reddit if they deem the AI more harmful than helpful.
I admit I might be biased towards AI, because I believe AI isn’t biased: it doesn’t have any desires, to sleep, breathe, eat, etc. Everyone is capable of critical thinking; the question is whether it is any good. And since AI is trained by humans, and humans have critical thinking, I don’t see why AI cannot develop it too, although it may not be as good as some people’s.
All AI has to be biased: the bias is the training data, and (inherently biased) humans select the training set. Funnily enough, each node of a neural net even carries a parameter that is literally called a bias!
If an AI weren’t biased at all, it would simply produce unintelligible garbage.
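(A side note to illustrate that pun: in a typical feed-forward net, every node carries a literal bias parameter alongside its weights, and both are learned from the human-selected training data. A minimal sketch in Python, with invented numbers:)

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum plus a parameter
    literally named 'bias', squashed through a sigmoid activation."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-total))

# Invented numbers: in a real net, both the weights and the bias are
# learned from (human-produced, human-selected) training data.
print(neuron(inputs=[0.5, 1.0], weights=[0.8, -0.3], bias=0.1))
```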
That’s not how AI works. It’s exactly as biased as the humans who produced the content on which it is trained.
That said, I also don’t believe these models have been trained exclusively on white straight men’s conversations; that would take some effort to achieve.
More likely, they’ve been trained on internet forums, i.e. on something similar to what they’re being asked to moderate. And as long as there’s a human at the other end of an appeal, it should be fine.
All AI does is look for patterns to complete. You train it on some set of data such as Reddit, which can be biased, set some sort of feedback for whether it makes the right choice, which can be biased, and find out what patterns it thinks it sees, which may be biased, before applying them to new situations.
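A crude sketch of that loop, with invented toy data, just to show where the feedback enters and why any bias in the labels ends up in the learned parameters:

```python
import math

# Toy "pattern completion" trainer. The model only ever learns to agree
# with the feedback signal, so any bias in the labels becomes part of
# the model. Features and labels below are invented for illustration.
examples = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0), ([1.0, 1.0], 1.0)]
weights, bias, lr = [0.0, 0.0], 0.0, 0.5

def predict(features):
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 / (1 + math.exp(-z))  # model's guess at the "right choice"

for _ in range(1000):
    for features, label in examples:
        error = predict(features) - label      # disagreement with the human feedback
        for i, x in enumerate(features):
            weights[i] -= lr * error * x       # nudge the pattern toward the label
        bias -= lr * error

print([round(predict(f), 2) for f, _ in examples])  # approaches [1.0, 0.0, 1.0]
```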
AI is often only trained on neurotypical cishet white men.
Can you back up this claim? Unless you’re just being an assumer, or you expect people to be suckers/gullible/“chrust” you.
What happens when a community of colour is full of people who don’t have the same conversational norms as white people?
In this statement alone, there are not one but two instances of racist discourse:
Conflating culture (conversational norms) with race.
Singling out “white people”, but lumping together the others under the same label (“people of colour”).
You are being racist. What you’re saying there boils down to “those brown people act in weird ways because they’re brown”. Don’t.
What happens when a neurodivergent community talks to each other in a neurodivergent way? Autistic people often get called “robotic”; will the AI feel the same way and ban them as bots?
The reason why autists are often called “robotic” has to do with voice prosody. It does not apply to text.
And the very claim that you’re making - that autists write in a way that an “AI” would mistake for bots - sounds, frankly, dehumanising and insulting towards them. And it reinforces the stereotype that they’re robotic.
[From another comment] Did you write your comment with chatgpt?
Passive aggressively attacking the other poster won’t help.
Odds are that you’re full of good intentions writing the above, but frankly? Go pave hell back in Reddit, you’re being racist and dehumanising.
The problem is the perverse incentives around “service”. Yes, ideally, things that can be automated should be. But what about when the automation is insufficient, can’t satisfy the customer, or simply provides worse service? Those cases will always exist, but will the companies provide an alternative?
We’re all familiar with voice menus and chatbots for customer service, and there are many cases where they provide service faster and cheaper than a human could. But what we remember is how useless they were that one time, and how much effort it took to escape that hell and talk to someone who could actually help.
If this AI is just better language recognition, or if it makes me type complete sentences just to point me to the same useless FAQ yet again, I’ll scream.
The model-based decision making is likely not capable enough. Especially not for the way that Reddit Inc. would likely use it - leaving it in charge of removing users and content assumed to be problematic, instead of flagging them for manual review.
I’m especially sceptical of the claim on the site that their Hive Moderation has “human-level accuracy”. Especially over time, as people are damn smart when it comes to circumventing automated moderation. Also, let us not forget that human accuracy varies quite a bit, and you definitely don’t want average accuracy, you want good accuracy.
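For concreteness, here’s a hypothetical sketch of that flagging-versus-removing distinction; the score function and both thresholds are made up for illustration, not Hive’s actual API:

```python
# Hypothetical moderation pipeline; score() stands in for whatever model
# would be used, and both thresholds are invented for illustration.
AUTO_REMOVE = 0.99       # act alone only when the model is near-certain
FLAG_FOR_REVIEW = 0.70   # below this, leave the content alone

def moderate(comment, score):
    confidence = score(comment)  # estimated probability the comment breaks the rules
    if confidence >= AUTO_REMOVE:
        return "remove"               # the risky, fully automated path
    if confidence >= FLAG_FOR_REVIEW:
        return "queue for human mod"  # the safer use: triage, not judgement
    return "keep"

print(moderate("some comment", score=lambda c: 0.82))  # queue for human mod
```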
[In reply to “Did you write your comment with chatgpt?”] Nope, just my personality. I think I have grammar mistakes too.
[Link shared in the discussion about bias, from another comment] https://time.com/5520558/artificial-intelligence-racial-gender-bias/
AI is also constantly wrong.
ChatGPT lies about science.
ChatGPT lies about history.
ChatGPT lies about politics.
ChatGPT lies about nonexistent programming libraries.
ChatGPT lies about nonexistent legal cases.
ChatGPT lies about nonexistent criminal backgrounds.
The only time I would trust ChatGPT is when there are no right and wrong answers.
[In reply to the “patterns to complete” comment above] Just checked this with an AI detector and it said human. Bot 1, human 0. This sentence kinda undermined your point about keeping humans only.
Regarding the talk about biases, from another comment: models are prone to amplify biases, not just reproduce them. As such, the model doesn’t even need to be trained only on a certain cohort to be biased.
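A toy illustration of that amplification, with invented numbers: if 60% of training comments bearing some surface feature were removed by human mods, a model that simply picks the likeliest label will remove 100% of them, turning a 60% skew into an absolute rule:

```python
# Invented numbers: say 60% of training comments bearing some surface
# feature (e.g. a dialect marker) were removed by human moderators.
p_removed_given_feature = 0.6

# A model that maximises accuracy on that data picks the majority label
# every time, so a 60% skew becomes a 100% removal rate for that cohort:
predicted_removal_rate = 1.0 if p_removed_given_feature > 0.5 else 0.0
print(predicted_removal_rate)  # 1.0 - amplified, not merely reproduced
```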