This…isn’t how the current paradigm of AI works at all. We’ve built glorified auto-complete bots, not something that can make a physical robot behave at a human level. Best case, they build something that can carry on a conversation long enough to excite a tech journalist while it aimlessly meanders like the Boston Dynamics bots, but without the pre-programmed tasking (assuming they don’t cheat and add canned routines).
So that leaves one option: it’s a moonshot project to convince the tech-illiterate public to take them and their stock price to the moon long enough for a few people to make an obscene amount of money.
100% that. It’s even in the name.
People vastly overestimate the capabilities of AI, but perhaps worse, people are simply unaware of the limitations. The hype took over, but it is (slowly) coming down to realistic levels.
We could also use more public awareness of the sheer amount of data and energy it takes to train these models, which by definition still end up with limited scope. It’s actually incredibly wasteful.
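For a rough sense of scale, here’s a back-of-the-envelope sketch. It uses the common approximation that training compute is about 6 × parameters × tokens; the model size, GPU throughput, utilization, and power figures are illustrative assumptions, not measurements of any real training run.

```python
# Back-of-envelope training cost, using the rough rule of thumb
# FLOPs ~ 6 * parameters * training tokens. Every hardware number
# below is an assumption for illustration, not a real measurement.

params = 70e9           # assume a 70B-parameter model
tokens = 2e12           # assume ~2 trillion training tokens

total_flops = 6 * params * tokens          # ~8.4e23 FLOPs

gpu_flops = 300e12      # assume ~300 TFLOP/s effective per GPU
utilization = 0.4       # assume 40% of peak in practice

gpu_hours = total_flops / (gpu_flops * utilization) / 3600
print(f"{gpu_hours:,.0f} GPU-hours")       # on the order of millions

power_kw = 0.7          # assume ~700 W per GPU including overhead
energy_mwh = gpu_hours * power_kw / 1000
print(f"~{energy_mwh:,.0f} MWh of electricity")
```

Under those assumptions that’s roughly the annual electricity use of over a hundred homes, for a single training run, before any inference happens.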
Nvidia knows exactly what current AI can and can’t do. They built a lot of the underlying technology to bring it within the realm of possibility.
That doesn’t mean there isn’t a path to actual AI, or that Nvidia can’t play a big role in getting there. If you want to do a lot of highly parallel math, Nvidia has the expertise for that.
I feel like people who shit on AI so much live in a different reality than I do.
I’ll put the big caveats here: I hate venture capital, and I think people are overhyping the less likely risks (creating Skynet) while underplaying the more likely ones (taking people’s jobs, flooding the Internet with shitty content/misinformation). All AI gets stuff wrong some of the time.
That said, I’ve been impressed with what it can do, and I use it more days than not. I don’t see a fundamental reason why AI wouldn’t be effective at controlling a robot body. Currently, something like ChatGPT responds after a user types a prompt. But what if the prompt was just audio/video/sensory input arriving every fraction of a second? I don’t think this is far-fetched if you threw enough money at it.
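A minimal sketch of the loop I’m imagining, with stub functions standing in for the hardware and a hypothetical model call (none of these names are real APIs):

```python
import time

def read_sensors():
    # Placeholder: a real robot would grab the latest camera frame,
    # audio chunk, and joint positions from its hardware here.
    return {"image": None, "audio": None, "joints": [0.0, 0.0, 0.0]}

def model_respond(history):
    # Stand-in for a multimodal model call: send recent observations,
    # get back a high-level command. Purely hypothetical.
    return "hold position"

def act(command):
    # Placeholder: forward the chosen command to the actuators.
    print("executing:", command)

# Instead of waiting for a typed prompt, feed the model a fresh
# snapshot of the world several times a second and act on the output.
history = []
for _ in range(5):              # a real loop would run continuously
    history.append(read_sensors())
    history = history[-50:]     # keep a bounded context window
    act(model_respond(history))
    time.sleep(0.1)             # ~10 observations per second
```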
So the line of reasoning I’m taking is that current AI is just a statistical model. It’s useful for plenty of stuff, but it doesn’t do well at things that don’t lend themselves to a statistical approach. For instance, it can kinda “luck” its way through basic math problems because there are a lot of examples in its training set, but it’s fundamentally not doing the kind of forward reasoning/chaining required to actually solve problems that aren’t very commonly seen.
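To make that concrete, here’s a deliberately silly toy: a “model” that only memorizes answer frequencies, with no arithmetic anywhere. Real LLMs are far more capable than this caricature, but it captures the failure mode I mean:

```python
from collections import Counter

# A "model" that just counts which answers followed which prompts
# in its training data. There is no arithmetic anywhere in here.
training_data = [
    ("2+2=", "4"), ("2+2=", "4"), ("2+2=", "5"),   # common, mostly right
    ("3+5=", "8"),
]

counts = {}
for prompt, answer in training_data:
    counts.setdefault(prompt, Counter())[answer] += 1

def predict(prompt):
    if prompt not in counts:
        return "???"     # never seen it, and no way to derive it
    return counts[prompt].most_common(1)[0][0]

print(predict("2+2="))        # "4": lucks into the right answer
print(predict("9177+3456="))  # "???": rare prompt, statistics can't help
```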
In the case of a robot body, where are they going to get the training set required to fully control it? There isn’t a corpus of trillions of human movements available to scrape on the web. As mentioned in this thread, you can get certain types of AI to play video games, but that’s relatively easy because the environment is simple, virtual, and reproducible. In the real world you have to account for things like sample variation between actuators and forces you didn’t expect, and you don’t have infinite robots if one breaks itself trying to learn a motion. Boston Dynamics uses forms of AI, but they’re not strictly the types that are exploding right now, and they don’t necessarily translate well.
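For contrast, here’s roughly what the video-game case looks like using the real gymnasium package (random actions stand in for a learned policy). The key line is env.reset(): a virtual agent gets unlimited, consequence-free do-overs, which is exactly what a physical robot doesn’t get.

```python
import gymnasium as gym   # pip install gymnasium

# A virtual environment is cheap, reproducible, and resettable:
# crash the cart-pole a million times and nothing actually breaks.
env = gym.make("CartPole-v1")

for episode in range(3):
    observation, info = env.reset()          # a free do-over, every time
    total_reward, done = 0.0, False
    while not done:
        action = env.action_space.sample()   # random stand-in for a policy
        observation, reward, terminated, truncated, info = env.step(action)
        total_reward += reward
        done = terminated or truncated
    print(f"episode {episode}: reward {total_reward}")

env.close()
```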
Actually controlling the robot body could be a subsystem. As others have said here, AI has been used to control video games and even robotic devices, but that’s different from LLMs like ChatGPT.
If the LLM is the “brain,” it can send commands to the body subsystem, similar to how ChatGPT can currently do a web search or upload a file. Those capabilities aren’t fundamentally part of the LLM; they’re kind of like an API call.
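A sketch of that split, with a made-up command schema and a stubbed-out LLM call (both are illustrations, not any real robot API):

```python
import json

def llm_decide(goal, scene):
    # Stand-in for an LLM call that emits a structured "tool call",
    # the same pattern behind ChatGPT's web search or file tools.
    return json.dumps({"action": "pick_up", "target": "red_block"})

def body_subsystem(command):
    # The motion controller: conventional robotics code, not an LLM.
    # It turns high-level commands into actual joint trajectories.
    print(f"planning motion: {command['action']} -> {command['target']}")

raw = llm_decide("tidy the table", "a red block sits near the edge")
body_subsystem(json.loads(raw))   # the LLM never touches the motors
```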
Spend a few evenings learning ML, then read about the internals of some of the bigger models, even ChatGPT. It’s not too hard.
That’ll give you some reasons.
Please state whatever reasons you have here, since this is where we’re talking.
OK.
It’s just a family of extrapolators, enhanced with various tricks, similar to those used in variable-length codes.
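To unpack the coding analogy: a model that predicts the next symbol is mathematically the same object as a compressor. A symbol with probability p gets a code word of about -log2(p) bits, exactly as in Huffman or arithmetic coding. A toy version, with raw character frequencies standing in for a real model’s predictions:

```python
import math
from collections import Counter

text = "the cat sat on the mat"

# Estimate symbol probabilities from frequencies, the same way a
# language model estimates next-token probabilities (just cruder).
counts = Counter(text)
total = sum(counts.values())

# Shannon: an ideal variable-length code spends -log2(p) bits on a
# symbol of probability p. Better prediction means shorter codes,
# so predicting and compressing are two views of one statistical model.
for symbol, count in counts.most_common(3):
    p = count / total
    print(f"{symbol!r}: p = {p:.3f}, ideal code length {-math.log2(p):.2f} bits")
```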
It can’t reason or build logical structures; it doesn’t have abstract thinking, or any thinking at all.
It may be used as one of the building blocks for real AGI, maybe 100 years from now, but the existing thing is not that, and there’s no way it’ll become that via small incremental changes.
Thanks! I don’t know what you mean by your first paragraph.
You’re right that it’s not near being AGI. But it doesn’t need to be in order to be put in a robot form and perform some useful tasks.
Right now I can ask ChatGPT to take a block of code in Ruby and output the equivalent in Python, and it will do it, and for the most part it’s correct.
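For instance (the Ruby is quoted in the comments so the snippet stays runnable Python; any trivial pair like this works):

```python
# Given this Ruby:
#   squares = (1..5).map { |n| n * n }
#   puts squares.inspect
# ChatGPT will typically emit the idiomatic Python equivalent:
squares = [n * n for n in range(1, 6)]
print(squares)   # [1, 4, 9, 16, 25]
```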
I could envision telling an AI robot to sort a pile of parts by type, or pick up all the sticks in the yard, etc. I think we could make something like that now without any significant technological breakthroughs. It might get stuff wrong sometimes, but I envision it as having an intern, not creating a new god. Of course, these companies may promise much more in their marketing.
Neural networks have learned to play video games, so maybe a neural network in a robot body could learn to act human. Keeping it from harming itself or others, though: that’s the tricky part.
I think they could make a robot about as smart as a rat.