• WhatAmLemmy@lemmy.world · 2 months ago

    The results of this new GSM-Symbolic paper aren’t completely new in the world of AI research. Other recent papers have similarly suggested that LLMs don’t actually perform formal reasoning and instead mimic it with probabilistic pattern-matching of the closest similar data seen in their vast training sets.

    WTF kind of reporting is this, though? None of this is recent or new at all, not in the slightest. I am shit at math, but have a high-level understanding of statistical modeling concepts, mostly as of a decade ago, and even I knew this. I recall a stats PhD describing models as “stochastic parrots”; nothing more than probabilistic mimicry. It was obviously no different the instant LLMs came on the scene. If only tech journalists bothered to do a superficial amount of research, instead of being spoon fed spin from tech bros with a profit motive…

    • no banana@lemmy.world · 2 months ago

      It’s written as if they literally expected AI to be self-reasoning and not just a mirror of the bullshit that is put into it.

      • Sterile_Technique@lemmy.world · 2 months ago

        Probably because that’s the common expectation due to calling it “AI”. We’re well past the point of putting the lid back on that can of worms, but we really should have saved that label for… y’know… intelligence, that’s artificial. People think we’ve made an early version of Halo’s Cortana or Star Trek’s Data, and not just a spellchecker on steroids.

        The day we make actual AI is going to be a really confusing one for humanity.

          • Semperverus@lemmy.world · 2 months ago

            This problem is due to the fact that the AI isn’t using English words internally; it’s tokenizing. There are no Rs in {35006}.
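            The tokenization point can be sketched with a toy example. The vocabulary and IDs below (including 35006 for “strawberry”) are made up for illustration; real tokenizers use learned subword vocabularies like BPE, but the effect is the same: the model receives opaque integer IDs, not letters.

```python
# Toy tokenizer: text becomes opaque integer IDs, so a model never
# "sees" the individual letters of a word it has a single token for.
# Vocabulary and IDs are illustrative, not from any real tokenizer.
vocab = {"how": 101, "many": 102, "rs": 103, "in": 104, "strawberry": 35006}

def tokenize(text: str) -> list[int]:
    # Naive whole-word lookup; a real subword tokenizer would split
    # unknown words into smaller learned pieces instead of skipping them.
    return [vocab[word] for word in text.lower().split() if word in vocab]

ids = tokenize("How many Rs in strawberry")
print(ids)  # [101, 102, 103, 104, 35006] -- no letters anywhere
```

            From the ID sequence alone there is no way to count the Rs inside 35006; that information only existed in the raw text.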

          • Sterile_Technique@lemmy.world · 2 months ago

            That was both hilarious and painful.

            And I don’t mean to always hate on it - the tech is useful in some contexts, I just can’t stand that we call it ‘intelligence’.

        • Farid@startrek.website · 2 months ago

          To say it’s not intelligence is incorrect. It’s still (an inferior kind of) intelligence, humans just put certain expectations into the word. An ant has intelligence. An NPC in a game has intelligence. They are just very basic kinds of intelligence, very simple decision making patterns.

          • AwesomeLowlander · 2 months ago

            An NPC in a game has intelligence

            By what definition of the word? Most dictionaries define it as some variant of ‘the ability to acquire and apply knowledge and skills.’

            • Farid@startrek.website · 2 months ago

              Of course there are various kinds of NPCs: some stand and do nothing, while others are more complex and often “adapt” to certain conditions. For example, an NPC following the player might “decide” to switch to running once the distance to the player reaches a certain threshold, decide how to navigate around other dynamic/moving NPCs, etc. In this example, the NPC “acquires” knowledge by polling the distance to the player and applies that “knowledge” by using its internal model to decide whether to walk or run.

              The term “acquiring knowledge” is pretty much as subjective as “intelligence”. An ant, for example, can’t really learn anything; at best it has a tiny short-term memory in which it keeps its most recent decisions. But it surely gets things done, like building colonies.

              For both cases, it’s just a line in the sand.

              • Auli@lemmy.ca · 2 months ago

                NPCs do not have any form of intelligence and don’t decide anything. Or is Windows intelligent cause I click an icon and it decides to do something?

                • Farid@startrek.website · 2 months ago

                  In a way, yes, if you frame it right. To simplify, you’re basically asking “is a calculator intelligent?”, right? While it’s an inanimate object, you could say that, in a way, it acquires knowledge from the buttons the user presses and applies that knowledge to produce an output.

                  “But that’s not making decisions, it’s just circuits!”, you might say. To which I might reply “Who’s to say that you’re making decisions? For all we know, human brains might also just be very complicated circuits with no agency at all, just like the calculator!”.

                  IIRC, in his book The Singularity Is Near, Ray Kurzweil even assigns a certain amount of intelligence to inanimate objects, such as rocks. A very low amount, of course, and it might be a stretch, but still.

                  So yeah, it’s really hard to draw a line for intelligence, which is why there’s no firm definition and no consensus.

          • aesthelete@lemmy.world · 2 months ago

            To follow rote instructions is not intelligence.

            If following a simple algorithm is intelligence, then the entire field of software engineering has been producing AI since its inception rendering the term even more meaningless than it already is.

            • Farid@startrek.website · 2 months ago

              Opponent players in games have been labeled AI for decades, so yeah, software engineers have been producing AI for a while. If a computer can play a game of chess against you, it has intelligence, a very narrowly scoped intelligence, which is artificial, but intelligence nonetheless.

              • aesthelete@lemmy.world · 2 months ago

                https://www.etymonline.com/word/intelligence

                Simple algorithms are not intelligence. Some modern “AI” we have comes close to fitting some of these definitions, but simple algorithms do not.

                We can call things whatever we want; that’s the gift (and the curse) of language. It’s imprecise and only has the meanings we ascribe to it. But you’re the one who started this thread by demanding that “to say it’s not intelligence is incorrect”, and I still have yet to find a reasonable argument for that claim anywhere in this thread. Instead, all you’ve done is try to redefine intelligence to cover nearly everything and then pretend that your (not authoritative) wavy-ass definition is the only correct one.

                • Farid@startrek.website · 2 months ago

                  I’m not redefining anything, I’m just pointing out that intelligence is not as narrow as most people assume, it’s a broad term that encompasses various gradations. It doesn’t need to be complex or human-like to qualify as intelligence.

                  A single if statement arguably isn’t intelligence, sure, but how many if statements is? Because at some point you can write a complex enough sequence of if statements that will exhibit intelligence. As I was saying in my other comments, where do we draw this line in the sand? If we use the definition from the link, which is:

                  The highest faculty of the mind, capacity for comprehending general truths.

                  Then 99% of animal species would not qualify as intelligent.

                  You may rightfully argue that the term AI is too broad and that we could narrow it down to mean specifically “human-like” AI, but the truth is that, at this point, in computer science AI already refers to a wide range of systems, from basic decision-making algorithms to complex models like GPTs or neural networks.

                  My whole point is less about redefining intelligence and more about recognizing its spectrum, both in nature and in machines. But I don’t expect everybody to agree; even the experts in the field don’t.

                  • aesthelete@lemmy.world · 2 months ago

                    I’m not redefining anything, I’m just pointing out that intelligence is not as narrow as most people assume, it’s a broad term that encompasses various gradations.

                    “I’m not redefining anything, I’m just insisting that my definition of the term is the only correct one.”

                    You’re running a motte-and-bailey here. First you say someone else is definitively “not correct” in their usage of the term, and then you go on to make a more easily defensible argument of “well who is to say what the meaning of the term truly is? It’s a very gray area”.

                    Then 99% of animal species would not qualify as intelligent.

                    By some definitions, certainly…and that’s the whole point.

                    You may rightfully argue that the term AI is too broad and that we could narrow it down to mean specifically “human-like” AI, but the truth is that, at this point, in computer science AI already refers to a wide range of systems, from basic decision-making algorithms to complex models like GPTs or neural networks.

                    I think taken as a whole the term “AI” has more meaning if you take both words in the phrase into account together rather than separately.

                    For instance, computer opponents in early video games naturally fit the moniker “AI” because even though it obviously does not possess intelligence in the general sense of the term, the developers are trying to artificially fool you into thinking it does.

                    Ultimately, it’s probably futile to try to rescue the phrase from the downward spiral it is on into meaninglessness, but I do not believe the word “intelligence” necessarily needs to spiral down in concert.

            • Semperverus@lemmy.world · 2 months ago

              It’s almost as if the word “intelligence” has been vague and semi-meaningless since its inception…

              Have we ever had a solid, technical definition of intelligence?

              • aesthelete@lemmy.world · 2 months ago

                I’m pretty sure dictionaries have an entry for the word, and the basic sense of the term is not covered by writing up a couple of if statements or a loop.

          • kryptonite@lemmy.world · 2 months ago

            humans just put certain expectations into the word.

            … which is entirely the way words work to convey ideas. If a word is being used to mean something other than the audience understands it to mean, communication has failed.

            By the common definition, it’s not “intelligence”. If some specialized definition is being used, then that needs to be established and generally agreed upon.

            • Farid@startrek.website · 2 months ago

              I would put it differently. Sometimes words have two meanings, for example a layman’s understanding of it and a specialist’s understanding of the same word, which might mean something adjacent, but still different. For instance, the word “theory” in everyday language often means a guess or speculation, while in science, a “theory” is a well-substantiated explanation based on evidence.

              Similarly, when a cognitive scientist talks about “intelligence”, they might be referring to something quite different from what a layperson understands by the term.

    • jabathekek@sopuli.xyz · 2 months ago

      describing models as “stochastic parrots”

      That is SUCH a good description.

    • fluxion@lemmy.world · 2 months ago

      Clearly this sort of reporting is not prevalent enough, given how many people think we have actually come up with something new these last few years and aren’t just throwing shitloads of graphics cards and data at statistical models.

    • aesthelete@lemmy.world · 2 months ago

      If only tech journalists bothered to do a superficial amount of research, instead of being spoon fed spin from tech bros with a profit motive…

      This is outrageous! I mean the pure gall of suggesting journalists should be something other than part of a human centipede!

    • jimmy90@lemmy.world · 2 months ago

      i think it’s because some people have been alleging reasoning is happening or is very close to it