Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’

Experts are starting to doubt it, and even OpenAI CEO Sam Altman is a bit stumped.

  • nxfsi@lemmy.world · 1 year ago

    “AI” models are just advanced versions of the next-word prediction on your smartphone keyboard, and people expect coherent output from them smh
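
    To make the analogy concrete, here is a toy sketch of that next-word loop in Python. The bigram table and its probabilities are made up for illustration; a real LLM scores every token in a huge vocabulary with a neural network, but the outer loop is the same: predict a word, append it, repeat.

    ```python
    import random

    # Hand-written toy probabilities standing in for a trained model.
    bigram_probs = {
        "the": {"cat": 0.5, "dog": 0.3, "answer": 0.2},
        "cat": {"sat": 0.6, "ran": 0.4},
        "dog": {"ran": 0.7, "sat": 0.3},
        "sat": {"down": 1.0},
        "ran": {"away": 1.0},
        "answer": {"is": 1.0},
    }

    def next_word(word: str) -> str:
        """Sample the next word, keyboard-suggestion style."""
        options = bigram_probs.get(word, {})
        if not options:
            return ""
        return random.choices(list(options), weights=list(options.values()))[0]

    text = ["the"]
    while True:
        word = next_word(text[-1])
        if not word:
            break
        text.append(word)
    print(" ".join(text))  # e.g. "the cat sat down"
    ```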

    • 1bluepixel@lemmy.world · 1 year ago

      Seriously. People like to project forward based on how quickly this technological breakthrough came on the scene, but they don’t realize that, barring a few tweaks and improvements here and there, this is it for LLMs. It’s the limit of the technology.

      That’s not to say AI can’t improve further, and I’m sure that when it does, it will skillfully integrate LLMs. I also think artists are right to worry about the impact of AI on their fields. But it’s a total misunderstanding of the technology to think the current systems will soon become flawless. I’m willing to bet we’re currently seeing it at 95% of its ultimate capacity, and that we don’t need to worry about AI writing a Hollywood blockbuster any time soon.

      In other words, the next step of evolution in the field of AI will require a revolution, not further improvements to existing systems.

      • postmateDumbass@lemmy.world · 1 year ago

        “I’m willing to bet we’re currently seeing it at 95% of its ultimate capacity”

        For free? On the internet?

        After a year or two of going live?

      • tweeks@feddit.nl · 1 year ago

        It depends on what you’d call a revolution. Picture multiple instances working together: one orchestrating tasks, several others evaluating progress and flagging possible hallucinations, all connected to services such as Wolfram Alpha for accuracy (rough sketch below).

        I think the whole orchestration network of instances could functionally surpass us soon in a lot of things if they work together.

        But I’d call that evolution. Revolution would indeed be a different technique that we can probably not imagine right now.
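
        As a very rough sketch of that idea (not a real product): one instance drafts an answer, a second instance critiques the draft for possible hallucinations, and the claim gets cross-checked against Wolfram|Alpha. The model name, prompts, and environment variables are assumptions for illustration; it uses the openai Python package as it looked in mid-2023 and Wolfram|Alpha’s Short Answers endpoint.

        ```python
        import os
        import requests
        import openai

        openai.api_key = os.environ["OPENAI_API_KEY"]
        WOLFRAM_APPID = os.environ["WOLFRAM_APPID"]  # Wolfram|Alpha Short Answers key

        def ask(prompt: str) -> str:
            """One call to a single LLM instance."""
            resp = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",  # placeholder model name
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content

        def wolfram(query: str) -> str:
            """Cross-check a factual claim with Wolfram|Alpha."""
            r = requests.get(
                "https://api.wolframalpha.com/v1/result",
                params={"appid": WOLFRAM_APPID, "i": query},
                timeout=10,
            )
            return r.text

        question = "How far does light travel in one nanosecond?"
        draft = ask(question)
        critique = ask(
            f"Question: {question}\nDraft: {draft}\n"
            "List any claims in the draft that might be hallucinated."
        )
        check = wolfram("distance light travels in 1 nanosecond")
        final = ask(
            f"Question: {question}\nDraft: {draft}\n"
            f"Critique: {critique}\nExternal check: {check}\n"
            "Rewrite the answer, correcting anything wrong."
        )
        print(final)
        ```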

    • persolb@lemmy.ml · 1 year ago

      It is possible to get coherent output from them, though. I’ve been using the ChatGPT API to successfully write ~20-page proposals: basically, give it a prior proposal, the new scope of work, and a paragraph with other info it should incorporate, then have it go through one section at a time (roughly the sketch at the end of this comment).

      The numbers and graphics need to be put in after… but the result is better than I’d get from my interns.

      I’ve also been using it (mostly Google Bard, actually) to successfully solve coding problems.

      I either need to increase the credit I give LLMs or admit that interns are mostly just LLMs.
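
      The loop is roughly this (a sketch of the idea, not the actual code; the section names, prompts, file path, and model name are placeholders, and it assumes the mid-2023 openai package):

      ```python
      import os
      import openai

      openai.api_key = os.environ["OPENAI_API_KEY"]

      prior_proposal = open("prior_proposal.txt").read()                # placeholder path
      scope_of_work = "Replace the HVAC system in Building 4."          # placeholder
      extra_notes = "Emphasize the phased construction schedule."       # placeholder

      def draft_section(section: str) -> str:
          """Draft one proposal section from the prior proposal plus the new scope."""
          resp = openai.ChatCompletion.create(
              model="gpt-3.5-turbo-16k",  # placeholder; a long-context model helps here
              messages=[
                  {"role": "system",
                   "content": "You write engineering proposals."},
                  {"role": "user",
                   "content": (
                       f"Prior proposal for reference:\n{prior_proposal}\n\n"
                       f"New scope of work:\n{scope_of_work}\n\n"
                       f"Other info to incorporate:\n{extra_notes}\n\n"
                       f"Write only the '{section}' section of the new proposal."
                   )},
              ],
          )
          return resp.choices[0].message.content

      sections = ["Introduction", "Approach", "Schedule", "Budget Narrative"]
      proposal = "\n\n".join(draft_section(s) for s in sections)
      print(proposal)  # numbers and graphics still get added by hand afterwards
      ```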

      • WoahWoah@lemmy.world · 1 year ago

        Are you using your own application to call the API, or something already out there? Just curious about your process for uploading and getting the output. I’ve used it for similar documents, but through the website interface, which is clunky.

          • WoahWoah@lemmy.world · 1 year ago

            Just FYI, I dinked around with the available plugins, and you can do something similar. Even easier is to just enable “Code Interpreter” in the beta options; then you can upload documents, have it scan them, and get results similar to what we’re talking about here.

      • PrinzMegahertz@lemmy.world · 1 year ago

        I recently asked it a very specific domain-architecture question about whether a certain application would fit the needs of a certain business use case, and the answer was very good: it showed a solid understanding of architecture, my domain, and the application.

    • tryptaminev 🇵🇸 🇺🇦 🇪🇺@feddit.de · 1 year ago

      It’s just that everyone now refers to LLMs when talking about AI, even though the field has so many different aspects to it. Maybe at some point there will be an AI that actually understands the concepts and meanings of things. But that isn’t learned by unsupervised web crawling.

    • Flying Squid@lemmy.world · 1 year ago

      In the 1980s, Racter was released, and it was only slightly less impressive than current LLMs, mainly because it didn’t have an Internet’s worth of data to train on. It could still write things like:

      Bill sings to Sarah. Sarah sings to Bill. Perhaps they will do other dangerous things together. They may eat lamb or stroke each other. They may chant of their difficulties and their happiness. They have love but they also have typewriters. That is interesting.

      If anything, at least that’s more entertaining than what modern LLMs can output.