Latest generation of products not becoming part of people’s “routine internet use”, researchers say.

  • jroid8@lemmy.world · 6 months ago

    I’m in a country where I not only don’t have access to them but also can’t really afford a subscription. Are GPT-4o or Claude Opus as smart as they say? Can they come up with creative solutions and solve difficult problems that aren’t in their training data?

    • Deceptichum · 6 months ago

      They’re not smart, but they’re helpful for a lot of things.

      • Warl0k3@lemmy.world · 6 months ago

        They’re a useful tool (unlike me, a useless tool) but man are they being oversold :(

    • addie@feddit.uk · 6 months ago

      They’re bullshit generators, essentially - it doesn’t matter to them whether they generate something that’s ‘true’ or not, as long as it’s plausible. It depends on what you intend to use them for - if you want a throw-away image for a PowerPoint slide that will only be looked at once for a few seconds, they’re ideal. They generate shit code and boring, pointless stories, though, so I couldn’t recommend them for that.

      If you’re a D&D GM who’s in need of quite a lot of ‘disposable’ material, they’re alright. An image of a bad guy that you can then work into the story? Great. Names for every single Gnomish villager? Great. Creating intricate and interesting lore that brings your world alive? No, they are not actually intelligent and cannot do that - that’s the part that you provide.

      At the moment, huge amounts of venture capital money are making these things much cheaper than their true cost. I can only imagine the price of them is going to go up a lot when that runs out. You might not be able to afford the subscription now, but you’ll be in good company soon.

      • kakes · 6 months ago

        To address your last point, there have been continual improvements in making LLMs more efficient as well. While I definitely couldn’t run GPT-4 on my local machine, I can get decently close to something like 3.5-turbo, which would’ve been unheard of only a year or two ago.

        And from the sounds of it, GPT-4o is another big step in efficiency (though it’s hard to say for certain).
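        As a rough illustration of what “close to 3.5-turbo locally” can look like in practice, here is a minimal sketch using the llama-cpp-python bindings to run a quantized open-weight chat model on an ordinary desktop. The model file name, thread count, and prompt are placeholders for illustration, not anything the commenter specifically described.

        ```python
        # Minimal local-inference sketch with llama-cpp-python (pip install llama-cpp-python).
        # The GGUF file path below is a placeholder; any quantized open-weight chat model works.
        from llama_cpp import Llama

        llm = Llama(
            model_path="./models/open-chat-model.Q4_K_M.gguf",  # hypothetical local model file
            n_ctx=4096,    # context window; smaller values use less RAM
            n_threads=8,   # tune to your CPU
        )

        response = llm.create_chat_completion(
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "Name five Gnomish villagers for a D&D campaign."},
            ],
            max_tokens=128,
            temperature=0.7,
        )

        # The return value follows the OpenAI-style schema: choices[0].message.content
        print(response["choices"][0]["message"]["content"])
        ```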