I, for one…

  • potatopotato · 8 months ago

    So the line of reasoning I’m taking is that current AI is just a statistical model. It’s useful for plenty of stuff, but it doesn’t do well at things that don’t lend themselves to a statistical approach. For instance, it can kinda “luck” its way through basic math problems because there are a lot of examples in its training set, but it’s fundamentally not doing the kind of forward reasoning/chaining that’s required to actually solve problems that aren’t commonly seen.
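    For contrast, forward chaining means explicitly applying rules to known facts, step by step, until the goal is derived. A toy sketch of that idea (purely illustrative, not any particular system):

    ```python
    # Minimal forward chaining: start from known facts and keep applying
    # rules until the goal is derived. A purely statistical next-token
    # predictor has no explicit, step-by-step derivation like this.
    facts = {"a", "b"}
    rules = [
        ({"a", "b"}, "c"),  # if a and b then c
        ({"c"}, "d"),       # if c then d
    ]

    def forward_chain(facts, rules, goal):
        derived = set(facts)
        changed = True
        while changed and goal not in derived:
            changed = False
            for premises, conclusion in rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return goal in derived

    print(forward_chain(facts, rules, "d"))  # True: d follows from a, b via c
    ```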

    In the case of a robot body, where are they going to get the training set required to fully control it? There isn’t a corpus of trillions of human movements available to scrape on the web. As mentioned in this thread, you can get certain types of AI to play video games, but that’s relatively easy because the environment is simple, virtual, and reproducible. In the real world you have to account for things like sample variation between actuators and forces you didn’t expect, and you don’t have infinite robots to spare if one breaks itself trying to learn a motion. Boston Dynamics uses forms of AI, but they’re not strictly the types that are exploding right now and don’t necessarily translate well.
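    One common workaround is to train in simulation with randomized physics (domain randomization), so the policy can’t just memorize one exact, reproducible environment the way it could in a video game. A rough sketch of the idea, with made-up numbers and names:

    ```python
    import random

    # Illustrative domain randomization: each training episode sees a slightly
    # different simulated actuator, standing in for real-world sample variation,
    # delays, and unmodeled forces.
    def make_sim_actuator():
        gain = random.uniform(0.8, 1.2)     # manufacturing variation
        latency = random.randint(1, 3)      # control-loop delay in ticks
        noise = random.uniform(0.0, 0.05)   # unexpected forces / sensor noise
        def step(command):
            return gain * command + random.gauss(0.0, noise), latency
        return step

    for episode in range(3):
        actuator = make_sim_actuator()
        output, delay = actuator(1.0)
        print(f"episode {episode}: commanded 1.0, got {output:.3f} after {delay} ticks")
    ```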

    • LesserAbe@lemmy.world · 8 months ago

      Actually controlling the robot body could be a subsystem - as others have said here, AI has been used to control video games or even robotic devices, but that’s different from LLMs like ChatGPT.

      If the LLM is the “brain,” it can send commands to the body subsystem, similar to how ChatGPT can already do a web search or upload a file today. Those capabilities aren’t fundamentally part of the LLM; they’re kind of like an API call.
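      In code, that separation might look roughly like this. The tool name (move_arm), the JSON shape, and the controller class are made up for illustration, not any real robot API; the point is that the LLM only emits a structured command and a separate subsystem does the actual motion:

      ```python
      import json

      # The LLM's output is just a structured tool call; a separate motion
      # controller (not the LLM) validates and executes the movement.
      def handle_llm_output(llm_output, motion_controller):
          call = json.loads(llm_output)  # e.g. from a tool-calling API
          if call.get("tool") == "move_arm":
              motion_controller.move_arm(**call["arguments"])
          else:
              raise ValueError(f"unknown tool: {call.get('tool')}")

      class FakeMotionController:
          def move_arm(self, x, y, z):
              print(f"planning and executing trajectory to ({x}, {y}, {z})")

      handle_llm_output(
          '{"tool": "move_arm", "arguments": {"x": 0.2, "y": 0.0, "z": 0.5}}',
          FakeMotionController(),
      )
      ```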