• RainfallSonata@lemmy.world · 11 months ago

      Although DeWave only achieved just over 40 percent accuracy based on one of two sets of metrics in experiments conducted by Lin and colleagues, this is a 3 percent improvement on the prior standard for thought translation from EEG recordings.

      The Australian researchers who developed the technology, called DeWave, tested the process using data from more than two dozen subjects. Participants read silently while wearing a cap that recorded their brain waves via electroencephalogram (EEG) and decoded them into text.

      Yep.
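As a rough illustration of what a "40 percent accuracy" figure can mean for text decoding (this is my own minimal sketch, not the metric from the paper — the sentences and the token-overlap scoring are hypothetical):

```python
# Illustrative sketch: score a decoded sentence against the sentence the
# participant was actually reading, as the fraction of reference tokens
# that also appear in the decoded output.

def token_overlap(decoded: str, reference: str) -> float:
    """Fraction of reference tokens that also appear in the decoded text."""
    ref_tokens = reference.lower().split()
    dec_tokens = set(decoded.lower().split())
    if not ref_tokens:
        return 0.0
    hits = sum(1 for t in ref_tokens if t in dec_tokens)
    return hits / len(ref_tokens)

# Hypothetical decoded/reference pair:
reference = "the quick brown fox jumps over the lazy dog"
decoded = "a quick fox runs over the dog"
print(round(token_overlap(decoded, reference), 2))  # → 0.67
```

Real evaluations use more careful metrics (the paper reportedly uses two), but the idea is the same: partial, word-level matches count, so "40 percent" does not mean two in five sentences come out perfectly.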

      • TheMurphy@lemmy.world · 11 months ago

        When the number of test subjects is that low, it almost feels like the 3% improvement might as well be a coincidence.
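To put rough numbers on that intuition, here is a back-of-the-envelope two-proportion z-test. The trial counts are assumed for illustration (the article gives per-subject detail I don't have), but it shows how little a 3-point gap means at small sample sizes:

```python
# Back-of-the-envelope check (assumed numbers, not from the article):
# is 43% vs 40% accuracy distinguishable from noise with few samples?
from math import sqrt

def two_proportion_z(p1: float, p2: float, n1: int, n2: int) -> float:
    """z statistic for the difference between two observed proportions."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# With only 25 independent trials per system, z is far below the 1.96
# needed for significance at the usual 5% level.
print(round(two_proportion_z(0.43, 0.40, 25, 25), 2))  # → 0.22
```

In practice the effective sample is larger than the subject count, since each subject reads many sentences, but the point stands: with two dozen subjects a 3% gap is easy to produce by chance.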

        • yokonzo@lemmy.world · 11 months ago

          This is wonderful news: it means it’s good enough to operate my lights with a thought, but not good enough to be admissible in court as evidence.

      • HubertManne@kbin.social · 11 months ago

        Their goal is 90%. I could see it if the AI was given a long enough time with feedback on what you are doing, which I think would be tough with stroke patients. Great for folks who would like to control a PC with their thoughts without getting cut open, though.

      • merc · 11 months ago

        Participants read silently while wearing a cap that recorded their brain waves via electroencephalogram (EEG) and decoded them into text.

        Was the AI trained on the text that the people were reading?

    • hansl@lemmy.world · 11 months ago

      How much accuracy would you be happy with? Anything more than 25% is, in my book, better than anything else out there. And the tech is only getting better.

      How much would it need to be at to beat a polygraph?

  • toiletobserver@lemmy.world · 11 months ago

    The Babel fish is small, yellow, leech-like, and probably the oddest thing in the Universe. It feeds on brainwave energy received not from its own carrier, but from those around it. It absorbs all unconscious mental frequencies from this brainwave energy to nourish itself with. It then excretes into the mind of its carrier a telepathic matrix formed by combining the conscious thought frequencies with nerve signals picked up from the speech centres of the brain which has supplied them. The practical upshot of all this is that if you stick a Babel fish in your ear you can instantly understand anything said to you in any form of language.

  • badbytes@lemmy.world · 11 months ago

    I spent over 15 years studying brain activity with EEG, MRI, and MEG. This seems like a stretch, given our limited ability to accurately assess signals. The brain is complicated, and signals like EEG are a very poor reflection of specific activity. It’s like viewing city street lights at night: pretty, but what can you decipher?

  • General_Effort@lemmy.world · 11 months ago

    With further refinement, DeWave could help stroke and paralysis patients communicate and make it easier for people to direct machines like bionic arms or robots.

    The article doesn’t even hint at any use in a justice system. There’s nothing to suggest that this could even in principle be used as a lie detector.

  • TheWonderfool@lemmy.world · 11 months ago

    Ignoring the technology itself, I found it interesting that it has a lot less trouble with verbs compared to nouns (though the article does not give much information about this).

    Would it mean that humans keep actions very separate (even if similar), while keeping things and concepts more clustered together? Is being precise about what is happening much more important than clearly specifying the subject and object of the action?

    • FooBarrington@lemmy.world · 11 months ago

      I’d wager that humans have much more neural hardware relating to verbs, since they relate to the things you yourself do over longer periods. Let’s say I clean my bathroom, or my kitchen, or something else - the actions I take are very similar, and my head has to keep my body doing the right thing for long stretches of time. It’s much harder to clean the wrong thing than to clean the thing wrong.

    • Belgdore@lemm.ee · 11 months ago

      There are fewer verbs than nouns; I’m sure our brains prioritize accordingly.

  • phubarr@lemmy.world · 11 months ago

    This is definitely progress, but we need to keep in mind that the particular language a person speaks can significantly influence how a person’s brain works.

    • Jax · 11 months ago

      AI like this will likely need to be trained from person to person.

  • Matriks404@lemmy.world · 11 months ago

    So is it only useful for people who read silently? If so, I don’t see any use case; it’s not like we think using words, lol.

  • Smoogs@lemmy.world · 11 months ago

    Later on they’ll find the accuracy is about as good as the whole facial recognition fiasco.