• peopleproblems
    73 points • 2 months ago

    I’m going to attract downvotes, but this article doesn’t convince me he’s becoming powerful and that we should be very afraid. He’s a grifter, sleazy, and making a shit ton of money.

    Anyone who has used these tools knows they are useful, but they aren’t the great investment the investors claim they are.

    Being able to fool a lot of people into believing it’s intelligent doesn’t make it good. When it can fool experts in a field, actively learn, or solve problems it wasn’t trained on, that will be impressive.

    Generative AI is just a new method of signal processing. The input signal, the text prompt, is passed through a function (the model) to produce another signal (the response). The model itself is produced from a lot of input text, much of which can be noise.
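    The framing above can be sketched as a toy program. This is purely illustrative, not how an actual LLM works internally: `toy_model` and `params` are made-up names, and the "model" here is just a lookup table standing in for the learned function from prompt to response.

```python
# Toy sketch of the "signal processing" framing (hypothetical, not a real LLM):
# a generative model is a function mapping an input signal (the prompt)
# to an output signal (the response), parameterized by what it was "trained" on.

def toy_model(prompt: str, params: dict) -> str:
    """Stand-in 'model': maps each word through a learned lookup table,
    passing unknown words through unchanged."""
    return " ".join(params.get(word, word) for word in prompt.split())

# "Training" here is just word associations collected from input text.
params = {"hello": "hi", "world": "everyone"}

response = toy_model("hello world", params)  # -> "hi everyone"
```

    The point of the sketch is that nothing in this pipeline requires understanding; it is signal in, transformed signal out.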

    To get AGI it needs to be able to process a lot of noise, and many different signals. “Reading text” can be one “signal” on a “communication” channel - you can have vision and sound on it too - body language, speech. But a neural network with human ability would require all five senses, and responses to them - fear, guilt, trust, comfort, etc. We are nowhere near that.

    • @[email protected]
      24 points • 2 months ago

      Strong agree here. You hit on a lot of the core issues on LLMs, so I’ll say my opinions on the economic aspects.

      It’s been more than a year since ChatGPT unleashed this plague of “slap AI on the product and consumers will put their children down for collateral to buy!” which imo hasn’t materialized whatsoever. Investors still have a hard-on for the term AI that goes into the stratosphere, but even that is starting to change a little.

      Consumers’ distrust of AI has risen considerably, and they’ve seen past the hype. Wrapping this back around to the CEO’s level of power, I just don’t think LLMs will have enough marketability to general consumers for these companies to become juggernaut corpos.

      LLMs absolutely have use cases but they don’t fit into most consumer products. No one wants AI washers or rice cookers or friggin AI spoons and shoehorning them in decreases interest in the product.

      • @sugar_in_your_tea
        4 points • 2 months ago

        That’s also how I feel about “smart” devices in general. I don’t want a smart refrigerator, I just want it to work. The same goes for other appliances, like my washing machine, dishwasher, and rice cooker. The one area where I kind of want it, TVs, has been ruined by stupid tracking and ads.

        What’s going to kill AI isn’t AI itself, it’s AI being forced into products where it doesn’t make sense, and then ads being thrown in on top to try to make some sort of profit from it.

    • @[email protected]
      10 points • 2 months ago

      The article seems to be based on a number of flawed premises.

      Firstly, that ChatGPT is the only LLM. It’s not, and better, stronger, cheaper alternatives are likely to emerge.

      Secondly, that LLMs are a step on the way to AGI. Like any minute now they’re going to evolve. They’re not; they’re a one-trick pony, and the trick is making coherent sentences. That’s it.

      • @sugar_in_your_tea
        2 points • 2 months ago

        Exactly. And that’s why we’re in a bubble. Once the execs are finally convinced by their tech people that LLMs aren’t some kind of magic bullet, we’ll see a pretty big correction. As an investor, I’m not exactly looking forward to that, but as someone who works in tech, I’m honestly not worried about my job.

    • @[email protected]
      9 points • 2 months ago

      I’m going to attract downvotes

      Not sure anyone ever says this and then ends up with net negative votes. This one is no exception.

    • fmstrat
      7 points • 2 months ago

      The one comment I have here is that you may be overlooking the impact LLMs will have on the tech sector.

      Basically Homeless just created a wasp-shooting, real-world first-person-shooter machine with high-speed, high-accuracy, high-strength motors, controllers, etc., controlled via Python, using Claude, with little knowledge of how to do the hardware or software.

      The productivity landscape, especially for those who go through the education system from this day forward, will be forever changed. There are already plenty of developers who wouldn’t give up what they now have access to. Despite the black hole of money it is now, power and wealth will come over time.

      • @[email protected]
        8 points • edited 2 months ago

        Homeless just created a wasp-shooting real-world first-person shooter machine with high speed, accuracy, and strength motors, controllers, etc, controlled via Python, using Claude with little knowledge of how to do the hardware or software.

        … Is homeless a company? Are we talking about a video game… a robot… White Anglo Saxon Protestants… What?

        Also how does this relate to LLMs?

        • @[email protected]
          1 point • 2 months ago

          IIRC, “Basically Homeless” is the name of some content creator and/or YouTube channel.

          • fmstrat
            1 point • 2 months ago

            Yea, I figured using a proper noun would give a clue, but oh well.

      • peopleproblems
        4 points • 2 months ago

        I don’t have access to it at work. I like what I’m able to do with my own license of JetBrains AI, but it still leaves a lot to be desired.

    • @[email protected]
      2 points • 2 months ago

      And silicon’s nowhere near as energy efficient as biological neurons. There would need to be a massive energy breakthrough, like fusion, or actual biological processors becoming a thing, to see any significant improvements.

    • @[email protected]
      2 points • 2 months ago

      I agree overall, but fooling experts isn’t what would make AI valuable. Being able to do valuable tasks is what would make it valuable. And it’s just not good enough at those tasks yet.