• Bluescluestoothpaste · 1 year ago

    Yeah, but there were admins spying on what you did and banning you. Quite frankly, I have much greater trust in AI admins than human admins. Not that some human admins aren't great, but why risk it? Same as with self-driving cars: as soon as they're ready, I'm ready to never drive again.

    • nanoUFO (OP, M) · 1 year ago

      You trust a billion-dollar company with no morals with your data? Isn't that the whole point of why we're on this site? Community servers are like Lemmy instances.

      • Bluescluestoothpaste · 1 year ago (edited)

        Sure, and they can have AI moderators in Lemmy instances. Whatever concerns apply to corporate AI admins also apply to corporate human admins.

        • nanoUFO (OP, M) · 1 year ago

          Yeah, and now they have everyone's open mic too.

    • ☆Luma☆@lemmy.ca · 1 year ago

      What is stopping the AI from showing bias here? Humans tailor the AI, so without transparency that risk will always exist.

      • Bluescluestoothpaste · 1 year ago

        Oh sure, there's definitely bias in AI, same as with self-driving cars. They make mistakes, but far fewer than humans do.

        • ☆Luma☆@lemmy.ca · 1 year ago

          Sure, but the mistakes aren't the main issue; it's that AI is just a tool that, by extension, can be abused by the humans in control. You have no idea what rules they give it or what false positives result from it.

          My primary concern here is that Blizzard, who loves to gargle honey for China and is all for banning players who speak against them, is in charge of this AI.

          Blizzard’s previously talked about using AI to verify reports of disruptive voice chat, which is now running in most regions, though not globally. The developer says it has seen this technology “correct negative behavior immediately, with many players improving their disruptive behavior after their first warning.”

          Great, they can auto-ban players like Ng Wai Chung, I guess, for whatever they subjectively deem 'harmful'. There's also the looming possibility that a friend wanders into my room, says something dumb, and now I'm closer to a ban because of an unrelated choice I made outside the game.

          And we definitely trust Blizzard to be responsible with all the audio data they get to harvest. That won't be abused later, right?

          • Bluescluestoothpaste · 1 year ago

            I mean that’s a general argument against technology. Yes, more technology means more ruthlessly efficient abuse, but ultimately you think technology is better in the long run or not. Either way it is inevitable. Maybe in the EU they will ban those abuses, in China they won’t, and US will find some weird compromise between the two.