• mindbleach · 1 day ago

    An algorithmic or computational process is a kind of abstract machine we use in our thinking, it is not a thinking machine. This is the lesson we should draw from John Searle’s famous Chinese Room thought experiment.

    Oh fuck off. Can we please erase that fallacy? Even this offhand mention, acknowledging that it's meat chauvinism, treats it as "a vehicle for our thinking." No! It's a description of conscious software!

    It goes “Imagine a perfect sci-fi android sitting in a room with Jim, who is an idiot. You slide your calculus homework under the door and Jim has the robot do it. When Jim walks out and hands it to you, he can’t explain what the answers mean. Therefore! The math is wrong, nobody did your homework, and you can ignore any objections from behind the door.”

    Some tenured prick made a bad analogy about which computer parts do what, and we're all still dealing with it. The purpose of general computing hardware is to understand nothing. It just does what it's told. Software does the work. A program is an abstract thing, literally an equation, and hardware can only find the right outcome or fail.
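
    To make the hardware/software split concrete, here's a rough sketch (my own toy illustration, nothing from the article; the rules and symbols are made up): a "rulebook" that is just data, and a dumb executor that follows it without knowing what any of it means.

    ```python
    # Toy illustration: a "dumb executor" following a rulebook it does not
    # understand, the way Searle's man (or any CPU) does. The rules and
    # symbols here are invented for the example.

    # The "program": a lookup table from questions to answers.
    # Whatever understanding exists is encoded in this data, not below it.
    rulebook = {
        "d/dx x^2": "2x",
        "d/dx sin(x)": "cos(x)",
    }

    def executor(question: str) -> str:
        """Mechanically follow the rulebook. The executor has no concept of
        derivatives; it only matches symbols and copies out a result."""
        return rulebook.get(question, "no matching rule")

    # The homework comes back correct even though the thing shuffling the
    # symbols understands nothing; the work is in the rulebook (software),
    # not in the executor (hardware).
    print(executor("d/dx x^2"))      # 2x
    print(executor("d/dx sin(x)"))   # cos(x)
    ```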

    Whatever significant advances may be made in the science of consciousness, consciousness is not and cannot be just a scientific concept.

    Only an explanation in terms of unconscious events would explain consciousness.

    No shit we have a hard time defining it, but that’s where the Turing test came from: at some point the machine is indistinguishable from a person. Either they’ve both got it, or it doesn’t exist.

    Complexity and ambiguity are no excuse. Ask yourself: where is Los Angeles? Can you draw a razor-sharp line around what is, and is not, in Los Angeles? I honestly don't think so. There's always going to be a gradient of meaning, interpretation, and pure opinion, for a border that is wide and fuzzy. But you can stand in Trafalgar Square and say, "Not here."