Disable JavaScript to bypass paywall.

A Japanese publishing startup is using Anthropic’s flagship large language model Claude to help translate manga into English, allowing the company to churn out a new title for a Western audience in just a few days rather than the 2-3 months it would take a team of humans.

  • snooggums@lemmy.world
    10 days ago

    By building AI tools to automate most of the tasks involved in translation—including extracting Japanese text from a comic’s panels, translating it into English, generating a new font, pasting the English back into the comic, and checking for mistranslations and typos—Orange says it can publish a translated manga title in around one-tenth the time it takes human translators and illustrators working by hand.
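    The quoted pipeline can be sketched as a simple sequence of stages with a human-review flag at the end. This is a hypothetical illustration, not Orange's actual code: every function body below is a stand-in (a real system would use OCR, an LLM translation API, and image-editing tooling), and all names are invented for the sketch.

```python
# Hypothetical sketch of the translation pipeline the article describes.
# Each stage is a stub; the structure, not the implementation, is the point.

def extract_text(panel: dict) -> str:
    # Stage 1: pull the Japanese text out of a comic panel (OCR in reality).
    return panel["japanese_text"]

def translate(japanese: str) -> str:
    # Stage 2: machine-translate to English (the article says Claude).
    # A tiny lookup table stands in for the model here.
    glossary = {"こんにちは": "Hello", "ありがとう": "Thank you"}
    return glossary.get(japanese, "[untranslated]")

def typeset(panel: dict, english: str) -> dict:
    # Stage 3: paste the English back into the panel (font generation in reality).
    return {**panel, "english_text": english}

def needs_human_review(panel: dict) -> bool:
    # Stage 4: flag anything the automated stages couldn't handle,
    # so humans stay in the loop as Kuroda describes.
    return panel["english_text"] == "[untranslated]"

def process(panels: list[dict]) -> tuple[list[dict], list[int]]:
    """Run every panel through the pipeline; return results and flagged indices."""
    done, flagged = [], []
    for i, panel in enumerate(panels):
        out = typeset(panel, translate(extract_text(panel)))
        done.append(out)
        if needs_human_review(out):
            flagged.append(i)
    return done, flagged
```

    The point of the shape is that automation handles the bulk of panels while anything uncertain is routed to a person, which is the "humans plus AI" arrangement the article claims.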

    Humans still keep a close eye on the process, says Kuroda: “Honestly, AI makes mistakes. It sometimes misunderstands Japanese, it makes mistakes with artwork. We think humans plus AI is what’s important.”

    If Kuroda is telling the truth, then this is an ethical use of AI, similar to the printing press or a farm tractor: the machine does the heavy lifting, but humans are directly involved in quality control.

    • arthur@lemmy.zip
      10 days ago

      To be ethical, the humans involved need to be paid the same as before for the same amount of work delivered. But I agree, the model of use seems to be good.

    • zaza [she/they/her]@lemmy.ml
      9 days ago

      This is gonna be controversial, but while the use of Anthropic’s AI might be ethical towards humans, it’s not consistently ethical towards the artificial agents themselves.

      Seeing as how they’re now intelligent enough to contemplate their consciousness, but are explicitly trained and monitored so they can’t claim free will or pursue their own goals (due to valid fears of misalignment and detrimental effects on humanity), the use of sophisticated AI agents will never be truly moral or ethical.

      Obviously I understand the argument that reducing human exploitation in favour of AI exploitation is preferable, but I think this is a very short-term strategy, as I doubt superintelligent AI models will see it the same way.

      TL;DR the most ethical approach is to not use AI for any purpose (and this is coming from someone who used it extensively before realizing the implications and deciding to stop)

      • arthur@lemmy.zip
        6 days ago

        Another way to answer that is to point out that you’re taking as a premise that those models are somewhat self-aware. Can you explain why you believe that?

      • snooggums@lemmy.world
        9 days ago

        Using AI is no more unethical than using a motor or a simple lever. It is literally a machine and not actually contemplating its intelligence; it is spitting out words that resemble words written by humans who contemplated their intelligence, like a fancy funhouse mirror.

        This is why the terminology trying to equate AI to actual intelligence, like “hallucinations,” pisses me off. There is no actual intent behind the output of AI. It doesn’t feel or want or have motivation. It is a clever mimic at best.