• sneakyninjapants

    Good to know. I did see a checkbox for “Is this a bot account” in settings but wasn’t aware you could filter them.

    I guess with that in mind, different concerns come into view for me. I'm wondering what proportion of this wave of bots has checked that option and identified themselves as such. Well-behaved bots will, of course, but I've also read posts from instance operators claiming they've gotten thousands of bot signups in a matter of hours, which doesn't seem like good bot behavior to me. Are those likely to identify themselves as bots? Even if they did, would it matter? One example off the cuff: I should be able to filter bots from my feed and comments, as you say, but what's stopping them from upvoting or downvoting a specific group of users' submissions and comments to the top of my hot feed, or voting by keyword? If that happens en masse, you can't really say that posts and comments are being ranked or discovered organically on merit. I suspect this sort of thing happens often elsewhere, and it can serve to control the flow of information according to the will of a single person or a small group.
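    To make the ranking worry concrete, here's a rough sketch using a generic hot-rank formula (a stand-in for illustration, not necessarily the formula Lemmy actually uses) with made-up numbers; it shows how a short burst of coordinated bot votes on a fresh post can outrank hours of organic activity:

    ```typescript
    // Generic "hot" ranking sketch: score is dampened by a log and decayed by age,
    // so a sudden burst of votes on a new post can push it past older, organically
    // popular posts. This is NOT Lemmy's exact formula, just the general mechanic.

    function hotRank(score: number, ageHours: number): number {
      return Math.log10(Math.max(score, 1)) / Math.pow(ageHours + 2, 1.5);
    }

    // An organic post: 150 upvotes accumulated over 12 hours.
    const organic = hotRank(150, 12);

    // A post boosted by a bot ring: 300 bot upvotes within the first hour.
    const botBoosted = hotRank(300, 1);

    console.log({ organic, botBoosted }); // the bot-boosted post ranks far higher
    ```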

    Considering all of that, I don't think the sky is falling, and I'm certainly not hysterical about it yet; it's just a concern about why these accounts were created and what, if anything, their creators plan to do with them.

    Edit: Also, sorry for the wall of text; I realize you were just trying to give me a solution, and I certainly appreciate it.

    • AgreeableLandscape@lemmy.ml (mod)

      I should clarify that the bot account option is only for legitimate bots that serve a community-oriented purpose, for example bots that auto-archive websites or suggest alternative links that might improve your experience when viewing YouTube videos. The bot tagging system is intended solely for developers of legitimate bots, to make the user experience with them as smooth and user-controllable as possible. It is not meant to prevent undesirable bots because, as you pointed out, they can easily just not check that box and claim to be normal accounts. Bots have to operate to a reasonable standard to be allowed, so even if a spam or troll bot labeled itself as a bot, it would still be breaking the rules and wouldn't be allowed.

      It's up to mods and admins to find and remove undesirable bots, and that is not an easy task, especially on sites that are pseudonymous and intentionally collect minimal information. We could probably stick a reCAPTCHA on the signup form and solve almost all of the bot problems, but that is extremely abusive to the privacy of legitimate users, so most instances probably don't want to go that route. Maybe a self-hosted, open-source image captcha would work, but I'm not sure how effective those are either.
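      To show what "user-controllable" means in practice, here's a minimal sketch of how a client or frontend could honor that flag and hide bot posts for users who opt out. The type shapes and field names (e.g. a bot_account boolean on the post's creator) are assumptions for illustration, not a reference to any specific client library:

      ```typescript
      // Hypothetical shapes; the real API types will differ in detail.
      interface Creator {
        name: string;
        bot_account: boolean; // the "Is this a bot account" checkbox
      }

      interface Post {
        title: string;
        creator: Creator;
      }

      // Hide posts from self-identified bots when the user has opted out of seeing them.
      function filterFeed(posts: Post[], showBots: boolean): Post[] {
        return showBots ? posts : posts.filter((p) => !p.creator.bot_account);
      }
      ```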

      I’m not aware of any instance that permits automated signups; the expectation for a legitimate bot is that a human will create the account and then connect the bot to it. As Lemmy grows we may eventually need to add more rules for legitimate bots, such as requiring an automatic way for users to opt out and a way to contact the developer or maintainer should problems with the bot arise, but right now most instances only require that you use the bot tag and that your bot acts in a reasonable and actually helpful way.

      This isn’t an explicit rule that I’ve seen on any instance, but I think there is also an unspoken rule that bots should not vote on any content, not even things they have interacted with and replied to, since the voting system is intended for people. A future update to the API should probably disable voting for bot accounts entirely.
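      Sketching that last idea, a hypothetical server-side guard (not actual Lemmy code; the types are made up for illustration) could simply reject vote requests from any account that has the bot flag set:

      ```typescript
      // Hypothetical server-side check, not taken from the Lemmy codebase.
      interface Account {
        id: number;
        bot_account: boolean;
      }

      interface VoteRequest {
        postId: number;
        score: -1 | 0 | 1;
      }

      function handleVote(account: Account, vote: VoteRequest): { ok: boolean; error?: string } {
        if (account.bot_account) {
          // Bot-flagged accounts are not allowed to influence rankings.
          return { ok: false, error: "bot accounts cannot vote" };
        }
        // ... persist the vote for human accounts here ...
        return { ok: true };
      }
      ```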