• TropicalDingdong@lemmy.world
    3 upvotes · 2 downvotes · edited 21 hours ago

    They come to the conversation with a conclusion already in hand, and anything that challenges it gets downvoted without consideration.

    The assumptions you aren’t allowed to challenge, in order: AI is bad; computer intelligence will never match or compete with human intelligence; computer intelligence isn’t really intelligence at all, it’s this other thing [insert ‘something’ here, like statistical inference or whatever].

    “AI is bad” is more of a dictum extending from cultural hegemony than anything else. It’s an implicit recognition that in many ways Silicon Valley culture has been an effective looting of the commons, and that therefore one should reject all things that extend from that culture. It’s not a logical or rational argument against AI so much as an emotional reaction to the culture that developed it. As a self-preservation mechanism this makes some sense, but obviously it isn’t slowing down the AI takeover of all things (which really just highlights a broader point: Silicon Valley tech companies were already in control of major aspects of our lives).

    “Computer intelligence will never match human intelligence” is usually some combination of goalpost-moving and redefining intelligence on the fly (the latter I’ve broken out as the third critique, because it warrants its own treatment). This is an old trope that goes back almost to the beginning of machine intelligence (it’s not clear to me our early definitions of machine intelligence are even relevant anymore). It quite literally started with multiplying large numbers. Then, for literally decades, things like chess and strategy, forward-facing reasoning in time, were held up as something only “intelligent systems” could do. Post Deep Blue, that got relegated to “very clever programmers,” and we changed intelligence to be something about learning. Then systems like AlphaGo came along, which basically learned the rules of the game by playing, and we relegated those to “domain-specific” intelligences. So in this critique you are expected to accept and confirm the moving of goalposts around machine intelligence.

    Finally, there’s the “what computers do isn’t intelligence, it’s some_other_thing.exe™” critique. In the history of machine intelligence, that some other thing has been counting very quickly, having large-ish memory banks, statistical inference, memorization, etc. The biggest issue with this critique is that when you scratch and sniff it, you very quickly catch an aroma of Chomsky’s leather chair (more so if we’re talking about LLMs), and maybe even a censer from a Catholic church. The idea that humans are fundamentally different and in some way special is, frankly, fundamental to most Western ideologies, in a way we don’t really discuss in the context of this conversation. The concept of spirit, and the notion that there is something “entirely unique” about humans versus “all of the rest of everything,” is at the root of the Abrahamic traditions, and therefore at the root of a significant portion of global culture. In many places in the world, it’s still heretical to imply that human beings are no more special or unique than the oak or the capybara or the flatworm or the dinoflagellate. This assumption, I think, is on great display in Chomsky’s academic work on the concept of the LAD, or language acquisition device.

    Chomsky gets a huge amount of credit for shaking up linguistics, but what we don’t often talk about is how effectively his entire academic career got relinquished to the dustbin, or at least now sits in that pile of papers where we’re not sure whether to “save or throw away.” Specifically, much of Chomsky’s work was predicated on the identification of something in humans called the language acquisition device, or LAD: a region of the human brain that would explain how humans acquire language. Notice the overall shape of this argument. It’s as old as the Egyptians’ search for the “seat of the soul,” and it carries through the Abrahamic traditions as well. What LLMs did that basically shattered this notion was to show at least one case where no special device was necessary to acquire language; where in fact no human components at all were necessary other than a large corpus of training data; where maybe language, and the very idea of language acquisition, is not special or unique to humans. LLMs don’t specifically address the existence of a LAD; they go a step further by not needing to. Chomsky spent the last of his verbal days defending this wrong notion (one already addressed in the neuroscience and linguistics literature), specifically against LLMs, which is an interesting and bitter irony for a linguist.

    To make the point more directly: we lack a good, coherent, testable definition of human intelligence, which makes any comparison to machine intelligence somewhat arbitrary and contrived, often shaped to support the interlocutor’s assumptions. Machine intelligence may get dismissed as statistical inference, sure, but then why can you remember things sometimes and not others? Why do you perform better when you are well rested and well fed than when you are tired and hungry, if not for an underlying distribution of neurons, some of which are ready to go and some of which are a bit spent and maybe need a nap?

    And so I would advocate caution about investing heavily in a conversation where these assumptions are being made. It’s probably not going to be a satisfying one, because almost assuredly the person making the assumptions hasn’t dug very deeply into these matters. And look at the downvote ratio. It’s rampant on Lemmy. Lemmy is very much a victim of its pack mentality and dog-piling nature.