• magnetosphere@kbin.social · 1 year ago

    Well, of course “demand” is shrinking. AI was the hot new thing, everybody played with it, and its flaws and limitations were quickly discovered. People learned that its uses are much more limited than the hype suggests.

    Plus, if you were expecting science-fiction level AI (as in, a computer that could actually think and reason like a person) you were in for some major disappointment.

    • SokathHisEyesOpen@lemmy.ml · 1 year ago

      Every time I read this sort of sentiment on Lemmy I’m just totally confused. Have you actually worked with ChatGPT yet? Have you asked it to do things for you and given it very clear instructions, like you would a new employee? I’ve been completely amazed by it. It has improved my productivity at work by probably 600%. It also helps me edit my emails for tone and clarity, and can format shit that would take me hours in like half a second.
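
      To make the “clear instructions” point concrete, here’s a minimal sketch of that kind of email-editing request, written against the OpenAI Python SDK. The model name, the sample email, and the prompt wording are illustrative placeholders, not anything from the comment above.

      ```python
      # Illustrative sketch: ask the model to rewrite an email for tone and clarity.
      # Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY in the environment.
      from openai import OpenAI

      client = OpenAI()

      draft_email = (
          "Hey, the report is late again. I need it by Friday or we have a problem."
      )

      response = client.chat.completions.create(
          model="gpt-4",  # placeholder; use whichever model you have access to
          messages=[
              {
                  "role": "system",
                  "content": (
                      "You are an editor. Rewrite the user's email so it is professional, "
                      "clear, and concise, without changing its meaning."
                  ),
              },
              {"role": "user", "content": draft_email},
          ],
      )

      print(response.choices[0].message.content)
      ```

      The system message is doing the “new employee” work here: the clearer the instruction, the better the rewrite.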

      • robbieIRL · 1 year ago

        Every time I read this sort of sentiment in a comment I’m just totally confused. Do you think Microsoft only looked at your usage for their reports?

        Joking aside, no one is denying it works for you, but the article is suggesting that’s not the case for everyone.

      • Corkyskog · 1 year ago

        Haha, I thought that they were going to say “Of course demand is shrinking because OpenAI put a rate limit in for unpaid users”

        I haven’t used OpenAI directly yet. Earlier there were some programs on mobile that would access the API, and I have used those. When you start an account now you get a certain amount of tokens to use. I have no idea how quickly I would go through the prompt tokens, so I haven’t used it at all. I am waiting until I need it for something important.

        Edit: Apparently the tokens are only for the API as another user has informed me.

        • XTornado@lemmy.ml · 1 year ago

          Just in case it’s not clear for other people: the token thing is just for the API. Using their website is unlimited, as long as it’s GPT-3, of course.
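
          If anyone does want to try the API and is worried about burning through tokens, OpenAI’s tiktoken library can count them before you send anything. A rough sketch; the model name is just an example:

          ```python
          # Rough estimate of how many tokens a prompt will cost against the API.
          # Assumes the tiktoken package is installed.
          import tiktoken

          prompt = "Summarize the main argument of this article in three bullet points."

          encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")  # example model name
          token_count = len(encoding.encode(prompt))

          print(f"Prompt uses {token_count} tokens; the reply is billed separately.")
          ```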

          • Corkyskog · 1 year ago

            Oh, neat! I am really glad you made this comment, I was terrified I would burn through all my tokens and then be locked out without a subscription.

            They must be trying to clamp down on those mobile apps that are essentially making money off them as a 3rd-party portal.

    • PM_ME_YOUR_ZOD_RUNES · 1 year ago

      I disagree that it has limited uses, and I do believe it is a big step towards science-fiction-level AI. I use it almost every day. It’s great for so many things: cooking, spelling/grammar, coding, brainstorming, and looking up information, to name a few.

      I’m pretty tech savvy but know nothing about coding. Using ChatGPT I was able to create VBA code for work that will save me and my team hundreds of hours per year. It took a lot of time, patience, and troubleshooting, but I managed to get something that suits our needs exactly and functions as I want it to. I would never have done this otherwise. ChatGPT made it possible.

      I will admit that it has limitations and can be quite stupid. It won’t do everything and you have to help it along sometimes. But at the end of the day, it is a powerful tool once you learn how to use it.

      • ImFresh3x · 1 year ago

        How do you use it for cooking? I can’t imagine it’s better than having an actual recipe written by someone you trust.

        And for grammar I find Grammarly to be way better.

        • PM_ME_YOUR_ZOD_RUNES · 1 year ago

          Because you can ask it questions about the recipe it gives. It also gets straight to the point, unlike pretty much every online recipe.

          But for the most part I don’t really follow recipes, so I rarely use it for that. It’s mostly questions about cooking techniques, timings and advice.

          • 6daemonbag@lemmy.dbzer0.com · 1 year ago

            I use it all the time for cooking! The only time it ever led me astray was when it essentially answered, “an air fryer is a great way to cook salmon!”

            I wasn’t familiar enough with air fryers yet, and I made very dry fish.

            • cantstopthesignal · 1 year ago

              An air fryer is just a convection oven. Also, it probably picked up on the constant hype and advertising for air fryers. They are the go-to kitchen gimmick right now.

            • eestileib · 1 year ago

              I’ve made good salmon in an air fryer. The timing is tricky.

      • Brocken40 · 1 year ago

        It’s not really a step towards sci-fi-level AI; it’s just a slightly more advanced version of clicking on the first autopredicted word when you type a sentence on your cell phone. The tools you needed already existed and were stolen, then spit back out by a very fancy text prediction algorithm.

        • BitSound@lemmy.world · 1 year ago

          I’d disagree, and go so far as to say that it’s a baby AGI, and we need new terms to talk about the future of these approaches.

          To start, “fancy autocomplete” is correct but useless, in the same way that saying the human brain is just a bunch of meat is correct but useless. Assume that we built an autocomplete so good at its job that it knew every move you were about to make and every word you were about to speak. Yes, it’s “just a fancy autocomplete”, but one that must be backed by at least human-level intelligence. At some level of autocomplete ability, there must be a model backing it that can be called “intelligent”, even if that intelligence looks nothing like human intelligence.

          Similarly, the “fancy autocomplete” that is GPT-4 must have some amount of intelligence, and this intelligence is a baby AGI. When AGI is invoked, people tend to get really excited, but that’s what the “baby” qualifier is for. GPT-4 is good at a large variety of tasks without extra training, and this is undeniable. You can quibble about what good means in this context, but it is able to handle simple tasks from “write some code” to “what are the key points in this document?” to “tell me a bedtime story” without being specifically trained to handle those tasks. That was unthinkable a year ago, and is clearly a sign of a model that has been able to generalize across many different tasks. Hence, AGI. It’s not very good at a lot of those tasks (but surprisingly good at a lot of them), but it knows what the task is, and is trying its best. Hence, baby AGI.

          Yeah, it’s got a lot of limitations right now. But hardware is only getting cheaper, and we’re developing techniques like Chain of Thought prompting that give LLMs a kind of short-term working memory, which helps immensely. A linguist I know once said that the approaches we’re taking are like building a ladder to the moon. Well, we’ve started building a hell of a ladder, and I’m excited to see where it takes us.
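
          For anyone who hasn’t run into it, Chain of Thought prompting is mostly just asking the model to lay out its intermediate reasoning before it answers. A minimal sketch using the OpenAI Python SDK; the model name and the wording are illustrative, not canonical:

          ```python
          # Minimal Chain-of-Thought-style prompt: ask the model to reason step by step,
          # then state a final answer. Assumes the OpenAI Python SDK and an API key.
          from openai import OpenAI

          client = OpenAI()

          question = (
              "A train leaves at 9:40 and the trip takes 2 hours and 35 minutes. "
              "When does it arrive?"
          )

          response = client.chat.completions.create(
              model="gpt-4",  # placeholder model name
              messages=[
                  {
                      "role": "user",
                      "content": question
                      + "\n\nThink through this step by step, then give the final answer "
                        "on its own line.",
                  }
              ],
          )

          print(response.choices[0].message.content)
          ```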

          • Brocken40 · 1 year ago

            I don’t care what y’all call it (AI, AGI, Stacy), it doesn’t change the fact that it was 100% trained on books tagged as “bedtime stories” to be able to tell you a bedtime story; it couldn’t tell you one otherwise.

            Assuming we made an AGI that could predict every word I said perfectly, that would simply prove there is no free will, not that a computer has intelligence.

            Fundamentally, AI produced in the current style cannot be intelligent, because it cannot create new things it has not seen before.

            https://en.m.wikipedia.org/wiki/Chinese_room

            • BitSound@lemmy.world · 1 year ago

              Assuming we made an AGI that could predict every word I said perfectly, that would simply prove there is no free will, not that a computer has intelligence.

              But why? Also, “has free will” is exactly equivalent to “I cannot predict the behavior of this object”. This is a whole separate essay, but “free will” is relative to an observer. Nobody thinks a rock has free will. Some people think cats have free will. Lots of people think humans have free will. This is exactly in line with how hard it is to predict the behavior of each. You don’t have free will to an omniscient observer, but that observer must have above-human-level intelligence. If that observer happens to have been constructed out of silicon, it doesn’t really make a difference.

              Fundamentally, AI produced in the current style cannot be intelligent, because it cannot create new things it has not seen before.

              But it can. It uses its prior experience to produce novel output, much like humans do. Hell, I’d say most humans wouldn’t pass your test for intelligence, and in fact they’re just 3 LLMs in a trenchcoat.

              https://en.m.wikipedia.org/wiki/Chinese_room

              Yeah, the reality is that we’ve built a Chinese room. And saying “well, it doesn’t really understand” isn’t sufficient anymore. In a few years are you going to be saying “we’re not really being oppressed by our robot overlords!”?

              • Brocken40 · 1 year ago

                I’m saying that if there is anyone, including an omnipotent observer, who can predict a human’s actions perfectly, that is proof that free will doesn’t exist at all.

    • cantstopthesignal · 1 year ago

      Turns out it was only good for cheating on really basic homework assignments.

      • nanoUFO · 1 year ago

        It was good at answering questions that were already answered somewhere on the web, and at presenting whatever info it found nicely, even if that info was wrong.

        • KnightontheSun · 1 year ago

          Didn’t stop my mobile game support group from implementing it. I provide clear input on an issue and I receive a polite automated answer that is completely wrong. It only improves once a human gets involved.

  • dragontamer@lemmy.world · 1 year ago

    Okay, I personally think AI is in a hype cycle. But there’s only one “number” in this discussion and that’s Bing’s market share.

    Now yes, I tried Bing’s AI search, and sometimes I’m able to get a good result. But honestly? It feels slow, sluggish, laggy. The AI responses can be nonsensical as well: the AI in Bing simply merges the top results together into paragraph form, and doesn’t always write cohesive paragraphs as a result. (For example, if information in link #1 contradicts link #2 and the sentences are merged together, you’ll get an AI-merged, inconsistent sentence/paragraph on those two subjects.)

    The AI is impressive at word generation, yes. But is it useful? The jury’s out. I find searching “normally” to be faster and more effective. I dunno if I just need more “AI training” or “AI whispering” to get Bing AI to work correctly, but… it’s not easy to use at all.

    • TheWeirdestCunt@lemm.ee · 1 year ago

      I’ve tried using the Bing AI multiple times, months apart, hoping that it will have improved, but usually it just spits links at me as if I’d done a normal search, and the links aren’t even relevant. I only tried the Google Bard thing once, and IIRC it just did the same thing.

  • Dizzy Devil Ducky@lemm.ee · 1 year ago

    The biggest problem I have with supposed AI (at least the large language model kind) is that they are nothing more than overglorified text chatbots that can pull information from the internet. Just goes to show that if you take something basic that has been a thing for over a decade and give it a fancy name, idiotic investors will buy into it no matter what.

    Also, I am so glad for the slowdown. We don’t need every single company and person to jump onto the latest tech trend like it’s the next big thing.

    • Corkyskog · 1 year ago

      I think it depends on what it’s being used for. Those older text chatbots couldn’t whip up code from scratch based on a program description.

  • tabarnaski · 1 year ago

    It’s a great tool for creating a first draft of almost any type of text. When writing is part of your job and you’re a slow writer like me, it can save you 15+ hours of work per week. But it is a first draft; you have to fact-check and reformulate quite a bit.

  • Goodie@lemmy.world · 1 year ago

    There is very little work that LLMs (“AI”) can do without supervision.

    They might seem like a magic bullet, but in reality, if you have to keep someone on to supervise the LLM… why not just have them do the work properly?

    • meyotch@slrpnk.net · 1 year ago

      Why not? Efficiency.

      I recently used ChatGPT-4 with a Python plugin to develop code to handle a basic large-dataset munging task. It took 30 minutes of iteration to get workable code. The actual processing took four hours of compute time and went off without a hitch.
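
      For a sense of what that looks like, the code that comes out of a session like this is usually pretty mundane: chunked reads, a few transforms, write the result. A rough Python/pandas sketch; the file names, column names, and thresholds are made up, not from the actual project:

      ```python
      # Illustrative chunked "munging" pass over a large CSV with pandas.
      # Paths, columns, and thresholds are hypothetical.
      import pandas as pd

      CHUNK_SIZE = 100_000
      per_chunk_means = []

      for chunk in pd.read_csv("measurements.csv", chunksize=CHUNK_SIZE):
          chunk = chunk.dropna(subset=["sensor_id", "value"])     # drop incomplete rows
          chunk["value"] = pd.to_numeric(chunk["value"], errors="coerce")
          chunk = chunk[chunk["value"].between(0, 1_000)]         # discard out-of-range readings
          per_chunk_means.append(chunk.groupby("sensor_id")["value"].mean())

      # Average the per-chunk means (simplified: a real pass would weight by row counts).
      result = pd.concat(per_chunk_means).groupby(level=0).mean()
      result.to_csv("sensor_averages.csv")
      ```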

      Can I write that code myself? Yes of course, but that comes at the cost of more of my limited time and attention. Instead I spent the time working on the project plan and pipeline architecture for the larger project.

      I won’t get any points for my ability to slog through a tedious hand-programming chore. The extra focus on the overall project structure is where I will get my reward.

  • perviouslyiner@lemm.ee · 1 year ago

    oh dear, someone got into a trade and really needs NVDA to go down in the short term!

    (today was their earnings call, and a lot of people gambling on trends in the AI market are having a sad wallet day)

    • eestileib · 1 year ago

      NVIDIA is selling the shovels into a gold rush. They’re going to be fine.

  • BallShapedMan@lemmy.world · 1 year ago

    I’ve been able to cut about 8 hours of work a week, on average, out of my labor with it. Hoping to get better at it and cut out more.

  • moog@lemmy.world · 1 year ago

    It is a tool, like all other tools. It’s helpful and amazing when it works, but for now these models are too unreliable and hallucinate too often.

    • Kushan@lemmy.world · 1 year ago

      I agree with you that Generative AI tools like ChatGPT are still in their early stages of development, and they can be unreliable and produce hallucinations at times. However, I believe that these tools have the potential to be incredibly helpful and beneficial to society, if used responsibly.

      For example, ChatGPT could be used to generate educational content, translate languages, or write creative text formats. It could also be used to help people with disabilities or mental health conditions.

      Of course, there are also risks associated with Generative AI. These tools could be used to create fake news or propaganda, or to spread misinformation. They could also be used to create harmful or offensive content.

      It is important to be aware of the risks of Generative AI, but I believe that the potential benefits outweigh the risks. With careful development and responsible use, these tools can make a positive impact on the world.

      For example, ChatGPT could be used to create educational content that is tailored to individual learners’ needs. It could also be used to translate languages in real time, which could help to break down communication barriers and promote understanding between cultures.

      In the hands of responsible people, Generative AI has the potential to make the world a better place. But it is important to remember that these tools are powerful, and they can be misused. It is up to us to use them wisely.

      I hope this reply addresses your concerns and provides a balanced view of the potential benefits and risks of Generative AI.