• Captain Aggravated · 1 day ago

    It's possible a human dictated it and the speech-to-text program transcribed it that way; in most American accents those words are near-perfect homophones. Still, -10 points for failure to proofread.

    • comfy@lemmy.ml · 1 day ago

      In fact, I’d assume a bot would be less likely to make a phonetic mistake than a person.

      • Captain Aggravated · 1 day ago

        I started to say this in my previous comment, but on things like YouTube Shorts, I’ve noticed the baked-in subtitles tend to be hilariously inaccurate, even when the video uses a text-to-speech program to read aloud something written on Tumblr or Reddit, so they had the text in the first place… They run text-to-speech on the original post, then run speech-to-text on that audio to generate the subtitles.

        LLMs are trained on written text, and I don’t think they would invent a phonetic misspelling on their own. Someone else mentioned the “should of” mistake, which I can see an LLM making, because it’s a common mistake in human writing. “cost” instead of “caused” isn’t a mistake humans commonly make in writing, so I don’t think an LLM would just come up with it. STT software has been pulling that shit for 30 years now, though.

      • chingadera@lemmy.world · 1 day ago

        Likely. I was thinking that too, but it’s still sort of the same outcome: journalism is dying a very public death.

    • chingadera@lemmy.world · 1 day ago

      You may be on to something. But yes, imagine your whole job is to read, rather than write/read/write/read, and you still miss this and many other errors.