There’s a video on YouTube where someone managed to train a network of rat neurons to play Doom. The way they did it seems reminiscent of how we train ML models.

I am under the impression from the video that real neurons are a lot better at learning than simulated ones (and much less power-hungry).

Could any ML problems, such as natural language generation, be solved using real neurons instead, and would that be in any way practical?

Ethically, at this point, is this neuron array considered conscious in any way?

  • kakes · 8 months ago

Honestly I’ve wondered this about shining a laser through some kind of laser-etched glass. The only problem is, I have no idea how to represent something like an activation function using only reflection and such.

      • kakes · 8 months ago

        Haha naw, it’s the same basic idea, just using something inorganic (like glass) to represent a neural network instead of something like biological neurons.

        • flashgnash@lemm.eeOP · 8 months ago

Cool idea, though existing computers are also an inorganic way of representing a neural net.

          • kakes · 8 months ago

Well, yes, but something like etched glass would be better in basically every way, if it could be done. (See my other comment in this thread if you want more details.)

          • kakes · 8 months ago

            A neural network is an array of layered nodes, where each node contains some kind of activation function, and each connection represents some weight multiplier. Importantly, once the model is trained, it’s stateless, meaning we don’t need to store any extra data to use it - just inputs and outputs.

            If we could take some sort of material, like glass, and modify it so that shining a light through one end made the light bounce in such a way as to emulate these functions and weights, you could create an extremely cheap, compact, fast, and power-efficient neural network. In theory, at least.
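
            The stateless inference described above can be sketched in a few lines of plain Python. The weights, layer sizes, and choice of tanh as the activation are made up for illustration; the point is just that once the weights are fixed, computing an output needs no stored state, only the inputs.

            ```python
            import math

            # Illustrative weights for a tiny 2-input -> 2-hidden -> 1-output
            # network. Each inner list is one node's incoming weights.
            WEIGHTS = [
                [[0.5, -0.3], [0.8, 0.1]],  # hidden layer
                [[1.2, -0.7]],              # output layer
            ]

            def forward(x):
                # Pure function of the inputs: no state is read or written,
                # which is what would let a fixed physical medium (etched
                # glass, in the comment's analogy) stand in for the math.
                for layer in WEIGHTS:
                    x = [math.tanh(sum(w * v for w, v in zip(node, x)))
                         for node in layer]
                return x

            print(forward([1.0, 0.0]))
            ```

            Calling `forward` twice with the same inputs always gives the same outputs, which is the statelessness the comment relies on.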