Thanks for posting this. The TL;DR is that, going by the checklist, no current AI is conscious, but there are no great barriers to one becoming conscious in the future. Also, the article itself does not list the indicators, but it does link to the academic paper that does.
An interesting point from the article is that these are indicators of human consciousness, since that’s all we really know for certain. Current AI models could be as conscious as many ‘lower order’ living things because we just don’t know how they experience the world. Given that there are ‘cruelty to animals’ laws on the books all over the world, I wonder how far down the checklist you have to go before ‘cruelty to AI’ becomes a thing.
Current AI models could be as conscious as many ‘lower order’ living things
I find a lot of the assumptions people make about AI consciousness very puzzling.
A frequent assumption is that because consciousness emerged from animal brain architecture, it will necessarily emerge from electronic circuitry as well. However, it's entirely possible that an AGI thousands of times more capable than an average human brain could have no consciousness at all.
Consciousness might be some unusual quirk that only arises in very specific types of circumstances, and biological brains, by chance, were one of those. Who knows? As no one understands how consciousness arises, we can’t say.
The stronger the claims people make about AI consciousness, the less I have confidence in them.
Somehow I doubt that our brains just happen to be un-simulate-able. There’s no reason to think we couldn’t one day replicate consciousness simply by having the resources to emulate a human brain.
I’d like to think we’re above some ridiculous “Cruelty to AI” sentiment as a species, but then I rewatch “The Measure of a Man” and begin to question just how ridiculous that notion really is.