Social media posts inciting hate and division have “real world consequences” and there is a responsibility to regulate content, the UN High Commissioner for Human Rights insisted on Friday, following Meta’s decision to end its fact-checking programme in the United States.
And who decides whether content is hateful?
Moderator groups that users can choose between.
That’s my ideal as well.
As long as what’s allowed is not in the hands of the government, I’m happy. If it is, once the leadership changes, those laws don’t look so good.
Content moderators per community guidelines. Why is this so hard?
And who do you select as moderators? Who ensures their moderation is consistent with community guidelines? What are the consequences if they moderate unfairly?
If we are talking platforms, then the employees of that platform. If we are talking federation, then the communities and the groups leading them. The consequences are the same as always: bans for rule violations, and the freedom we all share to use or not use these platforms.
As long as we’re keeping the government out of it, I’m happy. People need the ability to vote with their feet and use other platforms, and that’s not feasible if the moderation comes from government rules.
Platforms can and will use the law as an excuse to push their agenda, applying “Oops, that looks like hate speech, it’s out of my hands” to any content they don’t like. A law like that justifies bad behavior and the silencing of dissent.