Hello World,
As many of you have probably noticed, there is a growing problem on the internet when it comes to undisclosed bias in both amateur and professional reporting. While not every outlet can be like C-SPAN or Reuters, we also believe it's impossible to remove the human element from the news, especially when it concerns, well, humans.
To this end, we've created a media bias bot, which we hope will keep everyone informed about the WHO, not just the WHAT, of posted articles. The bot uses Media Bias/Fact Check to add a simple reply showing a source's bias rating. We feel this is especially important with the US election coming up. The bot will also provide links to Ground.News, which we feel is a great way to see the WHOLE coverage of a given article and/or topic.
As always, feedback is welcome, as this is an active project which we really hope will benefit the community.
Thanks!
FHF / LemmyWorld Admin team 💖
It affects the overall credibility rating of the source, so how is that the least important thing? They also seem to let it affect the factual reporting rating, despite not clearly stating that in the methodology.
This is only true in the narrow sense that a great source can't have its credibility rating lowered. A source with weaker factual reporting can still get a high credibility rating if it's deemed centrist enough, which again is arbitrary, based on (effectively) one guy's personal opinion.
High Credibility Score Requirement: 6
Example 1
Factual Reporting Mixed: 1
No left/right bias: 3
Traffic High: 2
Example 2
Factual Reporting Mostly Factual: 2
No left/right bias: 3
Traffic Medium: 1
See how weighing credibility on a (skewed) left/right bias metric waters this down? Both of these examples add up to the 6 points required, so both would get high credibility (see the sketch below).
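To make the arithmetic concrete, here's a minimal sketch of the point-tally logic described above. The point values and category names simply mirror the two examples; they are illustrative assumptions, not MBFC's actual published weights.

```python
# Hypothetical sketch of a point-tally credibility score.
# Point values mirror the examples above, not MBFC's real published weights.

HIGH_CREDIBILITY_THRESHOLD = 6  # "High Credibility Score Requirement: 6"

FACTUAL_POINTS = {"mixed": 1, "mostly factual": 2}        # from the examples
BIAS_POINTS = {"no left/right bias": 3}                   # from the examples
TRAFFIC_POINTS = {"medium": 1, "high": 2}                 # from the examples


def credibility_score(factual: str, bias: str, traffic: str) -> int:
    """Sum the points awarded in each category."""
    return FACTUAL_POINTS[factual] + BIAS_POINTS[bias] + TRAFFIC_POINTS[traffic]


examples = [
    ("mixed", "no left/right bias", "high"),              # Example 1: 1 + 3 + 2 = 6
    ("mostly factual", "no left/right bias", "medium"),   # Example 2: 2 + 3 + 1 = 6
]

for factual, bias, traffic in examples:
    score = credibility_score(factual, bias, traffic)
    verdict = "High Credibility" if score >= HIGH_CREDIBILITY_THRESHOLD else "Lower"
    print(f"{factual} / {bias} / {traffic}: {score} points -> {verdict}")
```

The point being made: because the centrism bonus is worth as much as (or more than) the factual reporting points, a mixed-factual but "centrist" outlet clears the same bar as a more factual one.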
That's a fair point, and I did state in my original post that, despite my own feelings, I'd be open to something like this if the community had been more involved in the process of choosing a bot and deciding whether one is necessary, and if the bot's posts clearly called out its biases, maybe with an explanation of its methodology and the inherent risks in it.
My main issue here is really the way it's been pushed by the mods without first polling the community, and the reaction to criticism, some of which was constructive.
The impact either way is slight. I'm sure you could find a few edge cases you could make an argument about, because no methodology is perfect, but each outlier represents a vanishingly small (~0.01%) amount of their content. When you look at rigorous research on the MBFC dataset, though, the effect just isn't really there. Here's another study that concludes that the agreement between bias-monitoring organizations is so high that it doesn't matter which one you use. I've looked, and I can't find research that finds serious bias or methodological problems.

Looking back at the paper I posted in my last comment, consensus across thousands of news organizations is just way too high to be explainable by chance. If it were truly as arbitrary as people often argue, MBFC would be an outlier. If all the methodologies were bad, the results would be all over the map, because there are many more ways to make a bad methodology than a good one. What the research says is that if one methodology is better than the others, it isn't much better.
Again, I think you make a really good argument for why MBFC and sites like it shouldn't be used in an extreme, heavy-handed way. But what matters is whether it has enough precision for our purposes. If I'm making bread, I don't need a scale that measures in thousandths of a gram; a gram scale is fine. I could still churn out a top-shelf loaf with a scale that measures in 10-gram units.

This bot is purely informational. People are reacting like it's a moderation change, but it isn't: MBFC continues to be one resource among many that mods use to make decisions. Many react as though MBFC declares a source either THE BEST or THE WORST (I think a lot of those folks aren't super great with nuance), but what it mostly does is say 'this source is fine, but there's additional info or context worth considering.'

Critics often get bent out of shape about the ranking but almost universally neglect the fact that, if you click that link, there's a detailed report on each source covering its ownership history, funding model, publishing history, biases, and the press freedom of the country it's based in. Almost every time, there are reasonable explanations for the rankings in the report. I have not once seen someone say, 'MBFC says this is owned by John Q. Newspaperman but it's actually owned by the Syrian government,' or 'they claim there was a scandal with fabricated news but that never happened.' Is there a compelling reason why we're worse off knowing that information? If you look at the actual reports for Breitbart and the Kyiv Independent, is there anything in there that we're better off not knowing?
Like I kinda said in my last paragraph, you've got fair points that it may be good enough for what it's being used for here (despite its clear biases), since it's not being used to disallow posts. Although other commenters have said it has a pro-Zionist bias as well, which is honestly more concerning than the things I've pointed out. I haven't had time to check beyond the ADL one.
Overall, my main issue is that the community wasn't really asked whether a bot was desired, which one should be used, how it should be used, etc. Because of that, and the lack of a good response from the poster, I've already decided to follow other world news communities instead of this one.