A multimillion-dollar conspiracy trial that stretched across the worlds of politics and entertainment is now touching on the tech world with arguments that a defense attorney for a Fugees rapper bungled closing arguments by using an artificial intelligence program.

  • dogslayeggs@lemmy.world · 1 year ago

    That’s irrelevant. The AI is not licensed to practice law, so if the lawyer didn’t perform any work to check the AI output, then the AI was the one defending the client and the lawyer was just a mouthpiece for it.

    • Toribor@corndog.social · 1 year ago

      Yeah, I feel like this is the same as if the lawyer had used a crystal ball to decide how to handle a case. If he lied to clients about it, or was also selling crystal-ball readings, that seems pretty bad.

    • Touching_Grass@lemmy.world · 1 year ago

      But is it a mistrial if the lawyer uses autocorrect?

      If the lawyer reviewed the output and found it acceptable, then how can you argue it was practicing law? I can write an argument I want, feed it to AI to correct and improve, and iterate through the whole thing. It’s just a robust autocorrect.
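
      For example, something like this loop, where the human writes the argument and the model only polishes it. A minimal sketch assuming the OpenAI Python client; the model name and prompts are illustrative:

      ```python
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      def polish(draft: str) -> str:
          """Ask the model to tighten wording without adding new claims."""
          response = client.chat.completions.create(
              model="gpt-4o",  # illustrative model name
              messages=[
                  {"role": "system",
                   "content": "Fix grammar and tighten wording. Do not add, "
                              "remove, or invent any factual or legal claims."},
                  {"role": "user", "content": draft},
              ],
          )
          return response.choices[0].message.content

      draft = "The argument I actually wrote myself..."
      for _ in range(3):  # iterate, reviewing each pass by hand
          draft = polish(draft)
          print(draft)  # the human inspects every revision here
      ```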

      • zaph · 1 year ago

        > But is it a mistrial if the lawyer uses autocorrect?

        If you’re found guilty because of a typo you’re probably going to have a successful appeal.

        > If the lawyer reviewed the output and found it acceptable, then how can you argue it was practicing law?

        This could very well be what he has to prove: that the lawyer didn’t do his due diligence and just trusted the AI.

      • dogslayeggs@lemmy.world · 1 year ago

        > But is it a mistrial if the lawyer uses autocorrect?

        No, that’s a bad question. Autocorrect takes your own knowledge and writing as input and makes minor spelling corrections and grammar suggestions. It doesn’t come up with legal analysis on its own, and any suggested grammar changes should be scrutinized by the licensed professional to make sure they don’t affect the argument.

        And your second statement isn’t what happened here. If the lawyer had written an argument and then fed it to AI to correct and improve, the process would at least have started from legal analysis written by a licensed professional. In this case, the lawyer bragged that he spent only seconds on the case instead of hours because the AI did everything. If he only spent seconds, then he very likely didn’t start by writing his own analysis and feeding it to the AI, and he likely didn’t review the analysis the AI spit out.

        This is an issue that is happening in the medical world, too. Young doctors and med students are feeding symptoms into AI and asking for a diagnosis. That is a legitimate thing to use AI for as long as the diagnosis that gets spit out is heavily scrutinized by a trained doctor. If they just immediately take the outputs from AI and apply the standard medical treatment for that without double checking whether the diagnosis makes sense, then that isn’t any better than me typing my symptoms into Google and looking at the results to diagnose myself.
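
        The distinction being drawn is essentially a human-in-the-loop gate: the model output is only a suggestion until a trained reviewer signs off. A minimal sketch of that pattern; every name here is illustrative, not a real medical API:

        ```python
        from dataclasses import dataclass

        @dataclass
        class Suggestion:
            diagnosis: str
            rationale: str
            reviewed: bool = False  # stays False until a human approves it

        def model_suggest(symptoms: list[str]) -> Suggestion:
            # Placeholder for the AI call the comment describes; a real
            # system would feed the symptoms to a model here.
            return Suggestion(diagnosis="...", rationale="...")

        def approve(s: Suggestion, reviewer: str) -> Suggestion:
            """Only a human reviewer can flip the flag that unlocks treatment."""
            print(f"{reviewer} signed off on: {s.diagnosis}")
            s.reviewed = True
            return s

        def treat(s: Suggestion) -> None:
            if not s.reviewed:
                raise RuntimeError("unreviewed AI output must not drive treatment")
            print(f"applying standard treatment for {s.diagnosis}")
        ```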

        • Touching_Grass@lemmy.world · 1 year ago

          I watched the Legal Eagle video about another case, where they submitted documents straight from an LLM, hallucinated cases and all. I can agree that’s idiotic. But there are a ton of use cases for these things in a lot of professions, and I think these kinds of incidents might leave people assuming that using it at all is idiotic.

          My concern is that a lot of people are trying to convince others to be afraid or suspicious of something that is very useful, because they feel their careers or skills are at risk of being diminished, and so they come up with these crazy stories.