LLMs are not actually great at being coders, in spite of the hype.

    • mesamune@lemmy.world · 8 hours ago

      You joke, but while LLMs are a money maker, the real money will come from whoever can provide up-to-date info! Like LexisNexis or other data brokers. The biggest issue for these LLMs is that their training data is nowhere near what it needs to be, and it's quite obvious they only trained on non-corporate public data + whatever slop they could get from reddit.

      The biggest issue isn't the quantity of data, it's the quality! Ironic, because they are literally flooding the internet with slightly-more-wrong details on how to do things.

      • PhilipTheBucket@ponder.cat · 3 hours ago

        Honestly, I think OpenAI messed up by making their service available for free. They were following the normal Silicon Valley model of providing it free and figuring out the revenue stream later, often by offering an additional paid tier of questionable value that very few people sign up for. That mostly doesn't even work when your costs are limited to some not-trivial-but-not-exorbitant web hosting. When your costs are as astronomical as it takes to run an LLM, it's a really bad idea, one I think was just born out of imitation.

        If they’d offered GPT-3 as a subscription service that cost $50/month, for use by serious professionals or people with enough cash to spend that on playing around with it, people would have been impressed as hell that it was so cheap. IDK how many people would have signed up, but I can pretty well assure you that they would not be hemorrhaging money like they currently are. Of course, now that they’ve set the expected price point at “free,” there’s no going back.

      • Optional@lemmy.world · 8 hours ago

        And it’s the thing they can never have, because they don’t understand words. And they never will.

        Every answer they ever give will need to be checked by a human. It won't be, of course, but that's why we'll have decades of fun with messed-up AI slop getting into actual communications where we don't want it.