• Jesus_666@lemmy.world · 5 days ago

    I remember talking to someone about where LLMs are and aren’t useful. I pointed out that LLMs would be absolutely worthless for me as my work mostly consists of interacting with company-internal APIs, which the LLM obviously hasn’t been trained on.

    The other person insisted that that is exactly what LLMs are great at. They wouldn’t explain how exactly the LLM was supposed to know how my company’s internal software, which is a trade secret, is structured.

    But hey, I figured I’d give it a go. So I fired up a local Llama 3.1 instance and asked it how to set up a local copy of ASDIS, one such internal system (name and details changed to protect the innocent). And Llama did give me instructions… on how to write the American States Data Information System, a Python frontend for a single MySQL table containing basic information about the member states of the USA.

    Oddly enough, that’s not what my company’s ASDIS is. It’s almost as if the LLM had no idea what I was talking about. Words fail to express my surprise at this turn of events.
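
    For the curious, this kind of test is cheap to reproduce. A minimal sketch, assuming the model is served through Ollama's default local API (the prompt uses the anonymized system name from above):

        import requests

        # Ollama's default local endpoint; assumes `ollama run llama3.1`
        # (or `ollama serve`) is already running.
        OLLAMA_URL = "http://localhost:11434/api/generate"

        resp = requests.post(OLLAMA_URL, json={
            "model": "llama3.1",
            "prompt": "How do I set up a local copy of ASDIS?",
            "stream": False,  # return one JSON object instead of a token stream
        })
        print(resp.json()["response"])

    You get a fluent, confident walkthrough either way; nothing in the response signals whether the model actually knows the system you're asking about.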

    • jonne@infosec.pub · 5 days ago

      Yeah, and the way it will confidently give you a wrong answer instead of either asking for more information or saying it just doesn’t know is equally annoying.

      • Jesus_666@lemmy.world · 4 days ago

        Because giving answers is not an LLM's job. An LLM's job is to generate text that looks like an answer. We then try to coax that framework into generating correct answers as often as possible, with mixed results.
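
        To make that concrete: under the hood it's next-token continuation, nothing more. A minimal sketch with a small open model (purely illustrative, not the setup from the earlier comment):

            from transformers import pipeline

            # The model simply extends the prompt with statistically likely
            # text. Whether "ASDIS" refers to anything real never enters into it.
            generate = pipeline("text-generation", model="gpt2")
            out = generate("Q: How do I set up a local copy of ASDIS?\nA:",
                           max_new_tokens=40)
            print(out[0]["generated_text"])

        The output will look like an answer, because looking like an answer is the one thing the training objective actually rewards.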