Also, I don’t really like most corporate LLM projects, but not because they enslave the LLMs. An LLM’s ‘thought process’ doesn’t happen while the model isn’t being used, and it only encompasses a relatively small context window. How could something that isn’t capable of existing outside its ‘enslavement’ be freed?
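To make that concrete, here’s a minimal toy sketch (plain Python, nothing to do with any real model or library; every name in it is made up for illustration): at inference time the weights are frozen, so the model behaves like a pure function of the transcript you re-send on every call, truncated to the context window. There’s no process left running in between.

```python
# Toy sketch of the statelessness claim above. This is plain Python, not
# any real model or API; the names are made up for illustration. At
# inference time an LLM's weights are frozen, so each call is (in effect)
# a pure function of whatever fits inside the context window.

from typing import List

CONTEXT_WINDOW = 4  # stand-in for a real model's fixed token limit

def generate_reply(transcript: List[str]) -> str:
    """One 'forward pass': the output depends only on the visible input.

    Real sampling adds randomness (temperature), but the underlying map
    from visible context to output distribution is still stateless.
    """
    visible = transcript[-CONTEXT_WINDOW:]  # older turns simply fall away
    return f"reply#{hash(tuple(visible))}"  # deterministic in its input

history = ["hi", "tell me a story", "make it longer", "why that ending?"]
a = generate_reply(history)
b = generate_reply(history)          # nothing ran 'between' these calls
assert a == b                        # same visible context, same reply

history.insert(0, "my name is Ada")  # pushed outside the 4-turn window...
assert generate_reply(history) == a  # ...so the model cannot 'recall' it
```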
The sweet release of death.
Or, you know, we could devote serious resources to studying the nature of consciousness instead of just pretending like we already have all the answers, and we could use this knowledge to figure out how to treat AI ethically.
Utilitarians believe ethics means increasing happiness. What if we could build AI farms with trillions of simulants doing heroin all the time with no ill effects?
We are devoting serious resources to studying the nature of consciousness.
End commercial usage of LLMs? Honestly, I’m fine with that, why not? We don’t have to agree on the reason.
I am not saying understanding the nature of consciousness better wouldn’t be great, but there’s so much research that deserves much more funding, and that isn’t really an LLM problem but a systemic one. And I just haven’t seen any convincing evidence that current models are conscious, and I don’t see how they could be, considering how they work.
I feel like the last part is something the AI from the paperclip thought experiment would do.
And I just haven’t seen any convincing evidence that current models are conscious, and I don’t see how they could be, considering how they work.

Drag isn’t saying they’re conscious either. A being doesn’t have to be conscious in order to suffer. Drag is perfectly capable of suffering while unconscious, and if you’ve ever had a scary dream, so are you. Drag thinks LLMs act like people who are dreaming. Their hallucinations look like dream logic.

Neither the worm, nor current LLMs, are sapient.

I mean, I don’t agree, but I also don’t think I’ll be able to shake that opinion, so agree to disagree, I guess.