A neuromorphic supercomputer called DeepSouth will be capable of 228 trillion synaptic operations per second, which is on par with the estimated number of operations in the human brain
Edit: updated link, no paywall
That’s only counting connections. The brain learns by making new connections, through complex, location- and timing-dependent inputs from other neurons. It’s way more complex than the number of connections suggests, and if neuroscientists are still studying the building blocks, we don’t have much hope of recreating it.
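For a flavor of what "timing-dependent" means here: spike-timing-dependent plasticity (STDP) is one well-studied rule where a synapse strengthens if the presynaptic neuron fires just before the postsynaptic one, and weakens in the reverse order. A toy sketch (the time constant and learning rates below are illustrative assumptions, not values from any real model):

```python
import math

def stdp_delta_w(t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for a single pre/post spike pair (times in ms).

    Toy STDP rule: causal pairings (pre before post) potentiate the
    synapse, anti-causal pairings depress it, with an exponential
    falloff in the timing difference.
    """
    dt = t_post - t_pre
    if dt > 0:   # pre fired before post -> strengthen (potentiation)
        return a_plus * math.exp(-dt / tau)
    else:        # post fired first (or simultaneously) -> weaken
        return -a_minus * math.exp(dt / tau)

# Causal pairing strengthens the synapse; anti-causal weakens it.
strengthen = stdp_delta_w(t_pre=5.0, t_post=10.0)   # positive
weaken = stdp_delta_w(t_pre=10.0, t_post=5.0)       # negative
```

Even this simple rule only captures pairwise timing; the point above stands that real synapses also depend on location on the dendrite, neuromodulators, and much more.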
This also ignores that the brain is not wholly an electrical system. There are all kinds of chemical receptors within the brain that alter all kinds of neurological function. Kind of the reason why drugs are a thing. On small scales we have a pretty good idea how these work, at least for the receptors we’re aware of. On larger scales it’s mostly guessing at this point. The brain has a knack for doing more than the sum of its parts on a pretty regular basis.
Not to mention the scale and nature of the “dataset” that our brains were trained on. Millions of years of instinct encoded in DNA, plus a few years gathering data from dozens of senses 24/7 (including chemical receptors, like you said) and in turn manipulating our bodies, interacting with the environment, and observing the results. We’ve been doing all of this since embryo.
We can’t just feed a model raw image and text data and expect its intelligence to be comparable to ours. However you quantify intelligence/consciousness/whatever, the text/image model’s thought processes will be alien to ours, which makes sense because their “environment” is nothing like ours - just text and image input and output.
A computer doesn’t need wet neurotransmitters. It can simulate node activation of all kinds pretty easily; it just needs the proper models. The problem we will face is deciding when to believe it is sentient versus merely following a complex series of patterns. At a certain point, we’ll have to remember that we are basically machines running code as well. There will come a time when we start to feel a moral obligation to grant AI citizenship.
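"Simulating node activation" in neuromorphic systems typically means spiking neuron models rather than the continuous activations of deep learning. A minimal sketch of the common leaky integrate-and-fire (LIF) abstraction, with illustrative parameters chosen for this example (not taken from DeepSouth or any real chip):

```python
def simulate_lif(current, steps=100, dt=1.0, tau=10.0,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Return the time steps at which a single LIF neuron spikes.

    The membrane potential leaks toward its resting value while
    integrating the input current; crossing the threshold emits a
    spike and resets the potential.
    """
    v = v_rest
    spikes = []
    for t in range(steps):
        # Leak toward rest plus integrated input, Euler-stepped.
        v += dt * (-(v - v_rest) + current) / tau
        if v >= v_thresh:      # threshold crossed: fire
            spikes.append(t)
            v = v_reset        # reset after the spike
    return spikes

# A constant supra-threshold current makes the neuron fire periodically.
spike_times = simulate_lif(current=1.5)
```

The "228 trillion synaptic operations per second" figure counts events like the weight updates and potential updates above, which is exactly why it measures raw throughput rather than anything like understanding.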