• TheMurphy@lemmy.world · 11 months ago

    ITT: People who are scared of things they don’t understand, which in this case is AI.

    In this case, the “AI” program is nothing more than pattern recognition software setting a timestamp where it believes there’s something to be looked at. Then an officer can take a look.
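As a rough illustration of what “setting a timestamp where it believes there’s something to be looked at” could mean in practice, here is a minimal sketch - the function name, per-frame scores, and threshold are all assumptions for illustration, not the actual product:

```python
def flag_segments(scores, fps=30, threshold=0.8):
    """Return (start, end) timestamps in seconds for runs of frames
    whose per-frame model score meets or exceeds the threshold."""
    segments = []
    start = None
    for i, s in enumerate(scores):
        if s >= threshold and start is None:
            start = i  # a flagged run begins at this frame
        elif s < threshold and start is not None:
            segments.append((start / fps, i / fps))  # run ended
            start = None
    if start is not None:  # run extends to the end of the clip
        segments.append((start / fps, len(scores) / fps))
    return segments

# Example: 10 per-frame scores at 1 fps for readability
print(flag_segments([0.1, 0.2, 0.9, 0.95, 0.3, 0.1, 0.85, 0.9, 0.9, 0.2], fps=1))
# -> [(2.0, 4.0), (6.0, 9.0)]
```

In a real system the scores would come from a video classifier; the threshold alone decides which segments an officer is ever pointed at.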

    It saves so much time, and it filters out anything irrelevant. But be careful, because it’s labelled “AI”. Scary.

    EDIT: The replies to this comment confirm that you don’t understand AI, because if you did, you’d know that a system that scans video is not an LLM (large language model). It’s not even the same kind of system at its core.

    • Voroxpete · 11 months ago

      This is an astonishingly bad take.

      Almost every AI system is a black box. Even if you open source the code and the training data, it’s almost impossible to know anything about the current state of a machine learning model.

      So the entire premise here is that a completely unaccountable system - whose decisions are basically impossible to understand or scrutinize - gets to decide what data is or isn’t relevant.

      When an AI says “no crime spotted here”, who even gets to know that it made that call? If a human reviews all of the footage anyway, then why have the AI? You’re doing the same amount of human work either way. So as soon as you introduce this system, you remove a huge amount of human oversight and replace it with decisions that dramatically affect human lives - decisions that could be life or death if they’re the difference between a bad cop being taken off the street or not - made by a completely unaccountable system.

      Who’s to say whether the training data fed into this system results in it, say, becoming effectively blind to police violence against black people?

      And if that doesn’t scare you, it absolutely should.
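To make that oversight gap concrete, here is a hypothetical sketch (the function and scores are made up for illustration, not taken from any real product) of how a threshold-based flagger silently discards footage with no record that a decision was ever made:

```python
def unreviewed_fraction(scores, threshold=0.8):
    """Fraction of frames a threshold-based flagger drops without review."""
    skipped = sum(1 for s in scores if s < threshold)
    return skipped / len(scores)

# A model that systematically under-scores one class of events makes those
# frames vanish from review, with no log that a judgment was made.
print(unreviewed_fraction([0.1, 0.2, 0.9, 0.3]))  # -> 0.75
```

Nothing in this pipeline records *why* a frame fell below the cutoff, which is the unaccountability the comment above describes.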

      • Misconduct@lemmy.world · 11 months ago

        It’s not impossible to understand or scrutinize. They give it specific things to look for, and it does what it’s told. You can make the argument that ANY tool used by the police will be misused in their favor. AI isn’t special for that by any means. It’s not like we bother to hold anyone accountable for anything else now anyway. Maybe the AI will be less biased.

        It’s definitely not doing the same work as a human if humans are spared sifting through hours upon hours of less useful footage. I’m sure they’re testing it etc. Nobody goes all in on this stuff. Really, you guys can be so very dramatic lol

    • Killing_Spark@feddit.de · 11 months ago

      It’s also potentially skipping some of the parts that should be looked at. It depends on the training set.