• @SIGSEGV
    1 point · 11 months ago

    Just saw this today. You should check it out, nitwit: https://www.theguardian.com/science/2023/aug/15/scientists-reconstruct-pink-floyd-song-by-listening-to-peoples-brainwaves

    Edit: “nitwit” was uncalled for, but I do think you are an ignorant person.

    You aren’t magical. You don’t have a soul that talks to Jesus. You’re a bunch of organized electrical signals: a machine. The fact that your machine is carbon-based doesn’t make you special.

    Edit: Downvote all you want, but we’re all still animals. Most people won’t even accept that simple fact. Then again, most people don’t understand how their cellphone works.

    • @[email protected]
      link
      fedilink
      English
      511 months ago

      I fundamentally disagree, and if that’s your take on humanity, I’m scared for our future.

      There is a human element to us. I’m not spiritual at all. I believe that when we die the lights just go out and we cease to exist. But there is undoubtedly a part of us that is still far from being replicated in a machine. I’m not saying it won’t happen; I’m saying we’re a long way from it, and what we’re seeing out of current AI is nothing close to resembling intelligence.

      • @SIGSEGV
        9 points · 11 months ago

        So when it happens, you’ll change your mind? My point is that what we have today is modeled on interactions in the human brain: neural networks. You can say, “They’re just guessing the next word based on mathematical models,” but isn’t that exactly what you’re doing?
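        To make that concrete, here’s a minimal sketch of what “guessing the next word” can mean, as a toy bigram counter in Python. This is purely illustrative: a real LLM replaces this lookup table with a neural network trained on vastly more text and context.

        ```python
        # Toy "guess the next word" model: count which word follows which
        # in a tiny corpus, then predict the most frequent follower.
        from collections import Counter, defaultdict

        corpus = "the cat sat on the mat the cat ate the fish".split()

        # Map each word to a Counter of the words observed right after it.
        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        def guess_next(word: str) -> str:
            """Return the word most often seen after `word`."""
            return follows[word].most_common(1)[0][0]

        print(guess_next("the"))  # -> "cat" ("cat" follows "the" twice)
        ```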

        Point to the reason why what comes out of your mouth is any different. Is it because your network is bigger and more complicated? If that’s the case, then GPT-4, being a larger model, is closer to being human than GPT-3 was.

        I just don’t get your point at all.

        • PupBiru
          1 point · 11 months ago

          and if that is indeed the point, that the difference is simply size, then what would such a law look like? surely it would need to specify the size at which a neural network becomes able to derive works

          but that would then just be an arbitrary number, because we don’t know what it would be

          • @SIGSEGV
            4 points · 11 months ago

            I don’t even think that matters much, right? Current LLMs already outcompete humans at many tasks. I think we’re already past the threshold, at least in some regards. That is to say, I don’t think there is a hard line, because it depends on what your testing criteria are.