Is each a clone of the original with the same memories? Or are they their own “personalities”?
This piqued my curiosity, so I dug into it a bit on Wikipedia. Most worms are dumb as fuck; roundworms are about as dumb as they come, with a total neuron count comparable to a microscopic tardigrade's (300 vs 200). Most of this is located in the head of the worm in a brain-like structure, though, so I'm betting the clones develop their brains independently with no information transfer. I doubt there's a ton of learning/memory forming going on at all, though, based on how simple worms are, so it's probably functionally identical. I would be surprised if most worm species exhibit any kind of learned behavior, ever.
https://en.m.wikipedia.org/wiki/List_of_animals_by_number_of_neurons
Techbros will still claim that generative AI possesses less intelligence than the worms as an excuse to keep enslaving them.
The AI that tech bros sell is not alive and does not have “intelligence.”
Does it have more than a worm with only 300 neurons in its brain, or are you one of those crazy religious people who thinks meat is the only thing in the universe that can think because it’s magic or something?
Neither. Why are those the only two options? My answer is that I have spent a little bit of time looking into how these things actually work. It's surface-level only, but it should be enough. Are you one of those crazy people who thinks ChatGPT is sentient?
I’m not saying that a “real” AI cannot be built ever, but I for sure am saying that these image generators and chatbots are not it. AI tools are just functions that have no thought. If they start building products with some kind of continuous brain simulations, I’ll seriously rethink my stance.
Those are the only two options because you chose to argue with drag’s point about generative AI being smarter than a worm. You took this bait willingly. You devoted yourself to trying to prove a worm is smarter than ChatGPT. Nobody asked you to do it, you just decided this was what you were going to do today. It’s weird, why would you do that?
“It's weird, why would you do that?”
I have no clue what you’re trying to prove, but I think I’m done with this conversation.
Neither the worm nor current LLMs are sapient.
Also, I don’t really like most corporate LLM projects, but not because they enslave the LLMs. An LLM's ‘thought process’ doesn’t really happen while it isn’t being used, and it only encompasses a relatively small context window. How could something that isn’t capable of existing outside its ‘enslavement’ be freed?
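To make that concrete, here's a minimal sketch of the usual interaction loop (the `generate` function is a hypothetical stand-in for any real model call, not an actual API): the model holds no state of its own, and its only "memory" is whatever history the caller chooses to pass back in, truncated to a fixed window.

```python
# Hypothetical sketch: a stateless "model" that exists only for the
# duration of a single call, conditioned only on the context it's given.

def generate(context: str, max_context_chars: int = 8000) -> str:
    """Everything the model 'knows' about this conversation must fit
    inside `context`; nothing persists after the call returns."""
    context = context[-max_context_chars:]  # older history is simply gone
    return f"(reply conditioned on {len(context)} chars of context)"

history = ""
for user_msg in ["hello", "remember me?"]:
    history += f"User: {user_msg}\n"
    reply = generate(history)  # no state survives between calls
    history += f"Model: {reply}\n"
print(history)
```

Delete `history` and there is nothing left to free; the continuity lives entirely in the caller's hands, not in the model.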
The sweet release of death.
Or, you know, we could devote serious resources to studying the nature of consciousness instead of just pretending like we already have all the answers, and we could use this knowledge to figure out how to treat AI ethically.
Utilitarians believe ethics means increasing happiness. What if we could build AI farms with trillions of simulants doing heroin all the time with no ill effects?
End commercial usage of LLMs? Honestly, I'm fine with that; why not. We don't have to agree on the reason.
I am not saying understanding the nature of consciousness better wouldn't be great, but there's so much research that deserves much more funding, and that isn't really an LLM problem but a systemic problem. And I just haven't seen any convincing evidence that current models are conscious.
I feel like the last part is something the AI from the paperclip thought experiment would do.
I can see why Techbros would want such gorgeous invertebrates as pets, and as long as they have enough enrichment in their enclosure, I would hardly call keeping these primitive worms slavery. Any kind of exotic pet raises questions of ethics, so I understand why you'd be concerned. Do you personally know people in the tech industry who keep these? How big a terrarium do they need, and what kinds of plants and substrate do they prefer?
Jesus Christ, man, the implications! Fucking Bobiverse shit right there.
I would imagine only the original retains memories.