• jet@hackertalks.com

    Yeah… It’s machine learning with a hype team.

    There are some great applications, but they are very narrow

  • simple@lemm.ee

    AI was a promise more than anything. When ChatGPT came out, all the AI companies and startups promised exponential improvements that will chaaangeee the woooooorrlllddd

    Two years later it’s becoming insanely clear they hit a wall and there isn’t going to be much change unless someone makes a miraculous discovery. All of that money was dumped in to just make bigger models that are 0.1% better than the last one. I’m honestly surprised the bubble hasn’t popped yet, it’s obvious we’re going nowhere with this.

    • bluGill@kbin.run

      AI has been doing that trick since the 1950s. There has been a lot of useful work coming out of AI, but it is never called AI once it’s successful, and it has never lived up to the early hype. Some of the people in the know about all those previous waves were surprised by the hype and not surprised about where it has gone, while others pushed the hype.

      The details have changed but nothing else.

      • bionicjoey@lemmy.ca

        Yeah the only innovation here is that OpenAI had the balls to use the entire internet as a training set. The underlying algorithms aren’t really new, and the limitations have been understood by data scientists, computer scientists, and mathematicians for a long time.

        • Frozengyro@lemmy.world

          So now it just has to use every conversation that happens as a data set. They could use microphones from all over the world to listen and learn and understand better…

      • rottingleaf@lemmy.world

        (Repeating myself due to being banned from my previous instance for offering to solve a problem with nukes)

        Bring back Lisp machines. I like what was called AI when they were being made.

    • henrikx@lemmy.dbzer0.com

      You should all see the story about the invention of blue LEDs. No one believed that it could work except some Japanese guy (Shuji Nakamura), who kept working on it despite his company telling him to stop. No one believed it could ever be solved, even though he was so close. He solved it and the rewards were astronomical.

      This could very well be another case of being so close to a breakthrough. Two years since GPT-3 came out is nothing. If you were paying any sort of attention you would see there are promising papers coming out almost every week. It’s clear there is a lot we don’t know about training neural nets effectively. Our own brains are the proof of that.

      • zbyte64@awful.systems

        I mean if you ignore all the papers that point out how dubious the gen AI benchmarks are, then it is very impressive.

      • cley_faye@lemmy.world

        No one believed that it could work except some Japanese guy

        There is a difference between not knowing how to do a thing and then someone coming out and doing it, versus knowing how something works, knowing its by-design limitations, and still hoping it may work out.

      • raspberriesareyummy@lemmy.world

        mwahahah. The people who are working on LLMs right now are the dumbasses and MBAs of the industry. If we ever get anything like an artificial general intelligence, it will come from a team of serious researchers / engineers who don’t give a shit about marketing.

    • whyNotSquirrel

      I don’t think they were really trying; it was just an easy way to get funds, no?

      • bamboo@lemm.ee

        There are millions of people devoting huge amounts of time and energy into improving AI capabilities, publishing paper after paper finding new ways to improve models, training, etc. Perhaps some companies are using AI hype to get free money but that doesn’t discredit the hard work of others.

        • raspberriesareyummy@lemmy.world

          There are millions of people devoting huge amounts of time and energy into improving AI capabilities,

          millions of students who bought into the marketing bullshit, you mean.

        • henrikx@lemmy.dbzer0.com

          Can’t believe you get downvoted for saying that. No worries though as the haters will all be proven wrong eventually.

  • gedaliyah@lemmy.world

    I remember saying a year ago when everybody was talking about the AI revolution: The AI revolution already happened. We’ve seen what it can do, and it won’t expand much more.

    Most people were shocked by that statement because it seemed like AI was just getting started. But here we are, a year later, and I still think it’s true.

    • Sterile_Technique@lemmy.world

      Those people were talking about the kind of AI we see in sci-fi, not the spellchecker-on-steroids we have today. There used to be a distinction, but marketing has muddied those waters. The sci-fi variety has been rebranded “AGI” so I guess the rest of that talk would go right along with it - the ‘AGI singularity’ and such.

      All still theoretically possible, but I imagine climate change will take us out, or we’ll find some clever new way to make ourselves extinct, before real AI… or AGI… becomes a thing.

    • OutlierBlue@lemmy.ca

      The AI revolution already happened. We’ve seen what it can do, and it won’t expand much more.

      That’s like seeing a basic electronic calculator in the 60s and saying that computing won’t expand much more. Full-AI isn’t here yet, but it’s coming, and it will far exceed everything that we have right now.

      • HackyHorse3000@lemmy.world

        That’s the thing though: that’s not comparable, and it misses the point entirely. “AI” in this context, and in the current conversations about it, specifically means LLMs. They will not improve to the point of general intelligence, because that is not how they work. Hallucinations are inevitable with the current architectures and methods, and they lack an inherent understanding of concepts in general. It’s the same reason they can’t do math or logic problems that aren’t common in the training set. It’s not intelligence. Modern computers are built on the same principles and architectures as those calculators were, just iterated upon extensively. No such leap is possible with large language models. They are entirely reliant on a finite pool of data that they try to mimic as effectively as possible; they are not learning or understanding concepts the way “Full-AI” would need to in order to actually be reliable or able to generate new ideas.

        • chrash0@lemmy.world

          it’s super weird that people think LLMs are so fundamentally different from neural networks, the underlying technology. neural network architectures are constantly improving, and LLMs are just a product of a ton of research and an emergence after the discovery of the transformer architecture. what LLMs have shown us is that we’re definitely on the right track using neural networks to solve a wide range of problems classified as “AI”

          • HackyHorse3000@lemmy.world

            I think the main problem is applying LLMs outside the domain of “complete this sentence”. It’s fine for what it is, and trained on huge datasets it obviously appears impressive, but it doesn’t know if it’s right or wrong, and the evaluation metrics are different. In most traditional applications of neural networks you have datasets with right and wrong answers; that’s not how these are trained, since there is no “right” answer to “tell me a joke.” So the training has to be based on what would likely fill in the blank. That could be an actual joke, a bad joke, or a completely different topic; there’s no difference in the training data. The biases, the incorrect answers, all the faults of that massive dataset are inherent in the model, and there’s no fixing that. They are fundamentally different in their application and evaluation (and training) methods from other neural networks that are actually effective at what they do, like image processing and identification. The scope of what they’re trying to do with a finite dataset is unrealistic and entirely unconstrained, compared to more “traditional” neural networks, which are very narrow in scope exactly because of this issue.
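
            To make that difference concrete, here is a small toy sketch (my own illustration, using PyTorch with made-up sizes and random data, not code from any real model): a classifier is scored against known right/wrong labels, while a language model is scored only on whether it predicted the token that happened to come next.

```python
# Toy contrast between supervised classification and next-token language modeling.
# All sizes and data here are made-up placeholders, purely for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Supervised setup: every example comes with a ground-truth "right answer".
classifier = nn.Linear(16, 3)              # 16 input features -> 3 known classes
features = torch.randn(8, 16)              # batch of 8 labeled examples
labels = torch.randint(0, 3, (8,))         # the correct class for each example
clf_loss = F.cross_entropy(classifier(features), labels)

# Language-model setup: the only "label" is whatever token happened to come next.
vocab = 100
lm = nn.Sequential(nn.Embedding(vocab, 32), nn.Linear(32, vocab))
tokens = torch.randint(0, vocab, (8, 12))  # 8 sequences of 12 tokens
logits = lm(tokens[:, :-1])                # predict a distribution over the next token
targets = tokens[:, 1:]                    # target = the token that actually followed
lm_loss = F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))

print(f"classifier loss: {clf_loss.item():.3f}  LM loss: {lm_loss.item():.3f}")
```

            Under the second objective, a good joke, a bad joke, or an off-topic reply can all be equally “correct” as long as they resemble the training text.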

      • gedaliyah@lemmy.world

        Oh, I’m not saying that there won’t one day come a better technology that can do a lot more. What I’m saying is that the present technology will never do much more than it is already doing. This is not an issue of refining the technology for more applications. It’s a matter of completely developing a new type of technology.

        In areas like generative text, summarizing articles and books, writing short portions of code to assist humans, creating simple fan art and throwaway images like avatars or the stock photos at the top of articles, perhaps creating short animations, and improving pattern recognition for things like speech and facial recognition… in all of these areas, AI was very rapidly revolutionary.

        Generative AI will not become capable of doing things that it’s not already doing. Most of what it’s replacing are just worse computer programs. Some new technology will undoubtedly be revolutionary in the way that computers were a completely new revolution on top of basic function calculators. People are developing quantum computers, and mapping the precise functions of brain cells. If you want, you can download a completely mapped actual nematode brain right now. You can buy brain cells online, even human brain cells, and put them into computers. Maybe they can even run Doom. I have no idea what the next computing revolution will be capable of, but this one has mostly run its course. It has given us some very incredible tools in a very narrow scope, and those tools will continue to improve incrementally, but there will be no additional revolution.

      • turmacar@lemmy.world

        Sure.

        GPT4 is not that. Neither will GPT5 be that. They are language models that marketing is calling AI. They have a very specific use case, and it’s not something that can replace any work/workers that requires any level of traceability or accountability. It’s just “the thing the machine said”.

        Marketing latched onto “AI” because blockchain and cloud and algorithmic had gotten stale and media and CEOs went nuts. Samsung is now producing an “AI” vacuum that adjusts suction between hardwood and carpet. That’s not new technology. That’s not even a new way of doing that technology. It’s just jumping on the bandwagon.

        • aesthelete@lemmy.world

          Marketing latched onto “AI” because blockchain and cloud and algorithmic had gotten stale and media and CEOs went nuts.

          Notably, this also coincided with the first higher interest rate environment in the broader economy in over a decade.

      • ChickenLadyLovesLife@lemmy.world

        That’s like seeing a basic electronic calculator in the 60s and saying that computing won’t expand much more.

        “Who would ever need more than 640K of RAM?” -Bill Gates

      • raspberriesareyummy@lemmy.world

        Full-AI isn’t here yet, but it’s coming, and it will far exceed everything that we have right now.

        go back to school, hopefully your next statement won’t sound as dumb.

    • SlopppyEngineer@lemmy.world

      AI development is indeed a series of S-curves and we’re currently nearing the peak of the curve. It’s going to be some time before the new S begins.
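
      To illustrate what the S-curve shape implies, here is a minimal sketch (toy numbers of my own, nothing measured from real AI progress): capability modeled as a logistic function looks roughly exponential early on, then flattens as it approaches a ceiling.

```python
# Toy logistic (S-curve) illustration: slow start, fast middle, plateau at the top.
# All numbers are made up; this only shows the shape being described.
import math

def s_curve(t, ceiling=1.0, rate=1.0, midpoint=0.0):
    """Logistic function: capability over time with a hard ceiling."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

for t in range(-6, 7, 2):
    value = s_curve(t)
    print(f"t={t:+d}  capability={value:.2f}  {'#' * int(40 * value)}")
```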

    • Uplink@programming.dev

      I think it all depends on how good our tools to detect AI-generated content become. If it is not distinguishable, then the internet is probably about to be flooded with AI-generated content, which in turn means AI is going to be trained more and more on AI content, degrading the models in the process.
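
      As a crude illustration of that feedback loop (my own toy sketch, not from this comment or any particular paper), pretend each “model” is just a Gaussian fitted to whatever the previous model generated. Refitted generation after generation on purely synthetic samples, the diversity of the output tends to collapse.

```python
# Toy "model trained on its own output" loop. Each generation fits a Gaussian to
# samples produced by the previous generation; over many rounds the spread of the
# generated content tends to shrink toward zero. Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=25)  # small pool of "human-made" content

for generation in range(1, 101):
    mu, sigma = samples.mean(), samples.std()      # "train" the next model on what it sees,
    samples = rng.normal(mu, sigma, size=25)       # which is now only model-generated content
    if generation % 20 == 0:
        print(f"generation {generation:3d}: spread of generated content = {sigma:.4f}")
```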

      • cley_faye@lemmy.world

        Not with the current tech. It can go faster, produce more detailed output, maybe consume less too, but there seems to be a ceiling on what LLMs and their derivatives can do. There has been no improvement in that regard, and people who look into it are pretty confident it won’t happen at this point.

    • Ogmios

      I do find the similarities between the function of AI and the function of a corporation to be quite interesting…

  • nondescripthandle@lemmy.dbzer0.com

    We know, you guys tried using the buzz around it to push down wages. You either got what you wanted and flipped tune, or realized you fell for another tech bro middle-manning unsolicited solutions into already working systems.

    • nyan@lemmy.cafe

      Even a stopped clock is right twice a day. Provided it’s an analog clock.

  • hendrik@palaver.p3x.de

    Came here to say that we read last week the industry spent $600bn on GPUs; they need that investment returned, so we’re getting AI whether it’s useful or not… But that’s also written in the article.

  • sugar_in_your_tea

    Wow, I hate Goldman Sachs, but I think they’re on to something here…

  • Echo Dot@feddit.uk

    Yeah but it’s Goldman Sachs saying it. Presumably because they haven’t invested in AI.

    Perhaps we could get a non-biased opinion and also from an actual expert rather than some finance ghoul who really doesn’t know anything?

    • 0x0@programming.dev

      I’d say they know a thing or two about finance… so maybe they didn’t invest because they see it as overhype?

    • Balder@lemmy.world

      The problem is experts in AI are biased towards AI (it pays their salaries).

    • frezik@midwest.social

      It’s noteworthy because it’s Goldman Sachs. Lots of money people are dumping it into AI. When a major outlet for money people starts to show skepticism, that could mean the bubble is about to pop.

    • demonsword@lemmy.world

      Presumably because they haven’t invested in AI.

      “Presumably” is carrying all the weight of your whole post here.

      Perhaps we could get a non-biased opinion and also from an actual expert rather than some finance ghoul who really doesn’t know anything?

      I also hate banks, but usually those guys can sniff out market failures way ahead of the rest of us. All their bacon rides on that, after all.

  • mrvictory1@lemmy.world

    In two interviews, the interviewees claim that investors may lose faith in the return on investment if “the killer application of AI” is not available within 18 months. In other words, if AI is a bubble, my interpretation is that it will burst in only 18 months. EDIT: I am referring to the actual paper https://www.goldmansachs.com/intelligence/pages/gs-research/gen-ai-too-much-spend-too-little-benefit/report.pdf?ref=404media.co

    • GreyBeard@lemmy.one

      Financially? Yeah, AI is a bubble for sure. Gobs of money are being poured in with few results to show for it. That bubble will burst. But just like the dotcom bubble, that doesn’t mean the technology is useless or won’t change the world, just not instantly overnight with a single investment, which is what the investment groups expect.

      • wewbull@feddit.uk

        This technology requires finance. You can’t train a model without millions of dollars.

        If the money goes, the technology is dead until the cost of the training machines comes down a few orders of magnitude.

        • bamboo@lemm.ee

          At least in the US, the research is fairly isolated from capital markets. The military pours huge amounts of money into research on new tech like this, often overambitiously and with no real expectation of short-term returns. Even if a financial bubble bursts and shuts down a lot of the commercial operations, universities and military contractors will keep working and publishing papers improving the state of the art until industry decides it’s time to try commercializing it again. It’s the basic pattern that has brought us most of the major tech innovations in the US.

    • Alphane Moon@lemmy.world

      That seems like a fair assumption. I would argue we are at the peak of the bubble, and only recently have we seen the suits (Goldman Sachs and, more broadly, analysts at banks) start asking questions about ROI and real use cases.

  • mctoasterson@reddthat.com

    I mean, they aren’t wrong. From an efficiency standpoint, current AI is like using a 350 hp car engine to turn a child’s rock tumbler or spin-art thingy. Sure, it produces some interesting outputs, but at the cost of way too much energy for what is being done. That is the current scenario of using generalized compute or even high-end GPUs for AI.

    Best I can tell, the “way forward” is further development of ASICs that are specific to the model being run. This should increase efficiency, decrease the ecological impact (less electricity usage), and free up silicon and components, possibly decreasing prices and increasing the availability of things like consumer graphics cards again (but I won’t hold my breath for that part).