Video game support developer Keywords Studios tried to create a game solely using artificial intelligence but failed because the technology was "unable to replace talent".
Current AI*
I don’t see any reason to expect this to be the case indefinitely. The technology has been getting better all the time, and lately at quite a rapid pace. In my view it’s just a matter of time until it surpasses human capabilities; it already does in specific narrow fields. Once we reach AGI, all bets are off.
Maybe this comment will age poorly, but I think AGI is a long way off. LLMs are a dead-end, IMO. They are easy to improve with the tech we have today and they can be very useful, so there’s a ton of hype around them. They’re also easy to build tools around, so everyone in tech is trying to get their piece of AI now.
However, LLMs are essentially chat interfaces for searching a large dataset, and that’s about it. Even the image generators are doing this; the dataset just happens to be visual. All of the results you get from a prompt are just queries into that data, even when you get a result that makes it seem intelligent. The model is finding a best-fit response based on billions of parameters, like a hyperdimensional regression analysis. In other words, it’s pattern-matching.
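To make the pattern-matching point concrete, here’s a toy sketch (my own illustration, nothing like a real transformer): a bigram model that can only ever recombine word sequences it saw during training. Real LLMs condition on far more context with billions of parameters, but the generation loop has the same basic shape: look at what came before, pick a statistically likely continuation, repeat.

```python
import random
from collections import defaultdict, Counter

# "Train" a toy bigram model: count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the dog sat on the rug".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(seed, length=8):
    """Repeatedly sample a likely next word. The output can only ever
    recombine patterns that appeared in the training data."""
    out = [seed]
    for _ in range(length):
        followers = counts[out[-1]]
        if not followers:  # never saw anything follow this word
            break
        words, freqs = zip(*followers.items())
        out.append(random.choices(words, weights=freqs)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the dog sat on the mat" -- fluent, but pure recombination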
A lot of people will say that’s intelligence, but it’s different; the LLM isn’t capable of understanding anything new, it can only generate a response from something in its training set. More parameters, better training, and larger context windows just refine the search results; they don’t make the LLM smarter.
AGI needs something new; we aren’t going to get there with any of the approaches used today. RemindMe! 5 years to see if this aged like wine or milk.
Yeah, LLMs might very well be a dead end when it comes to AGI, but just like ChatGPT seemingly came out of nowhere and took the world by surprise, the same might happen with an actual AGI as well. My comment doesn’t really make any claims about the timescale; it just tries to point out the inevitability of it.
Removed by mod
I didn’t say it wasn’t amazing, nor that it couldn’t be a component in a larger solution, but I don’t think LLMs work like our brains, and I think the current trend of throwing more tokens, parameters, and training at LLMs is a dead end. They’re simulating the language area of the human brain, sure, but there’s no reasoning or understanding in an LLM.
In most cases, the responses from well-trained models are great, but you can pretty easily see the cracks when you spend extended time with them on a topic. You’ll start to get oddly inconsistent answers the longer the conversation goes and the more branches you take. The best-fit line (it’s a crude metaphor, but I don’t think it’s wrong) fits less and less well until the conversation completely falls apart. That’s generally called “hallucination,” but I’m not a fan of that term because it implies a lot about the model that isn’t really true.
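One contributing factor is just the context window (a simplified illustration, mine, not how any particular model works; real models measure the window in tokens, not conversational turns): once the conversation outgrows the window, the earliest turns silently fall out of scope, and the model can’t stay consistent with things it can no longer see.

```python
from collections import deque

# Hypothetical fixed-size context window, measured in turns for simplicity.
CONTEXT_LIMIT = 4
context = deque(maxlen=CONTEXT_LIMIT)

for turn in ["My name is Ada.", "I live in Oslo.", "I like chess.",
             "What's the weather?", "Recommend a book.", "What's my name?"]:
    context.append(turn)  # oldest turn is silently evicted once full

print(list(context))  # the name-setting turn has already fallen out of scope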
You may have already read this, but if you haven’t: Stephen Wolfram wrote a great overview of how GPT works that isn’t too technical. There’s also a great sci-fi novel from 2006 called Blindsight that explores the way facsimiles of intelligence can be had without consciousness or even understanding, and I’ve found it to be a really interesting way to think about LLMs.
It’s possible to build a really good Chinese room that can pass the Turing test, and I think LLMs are exactly that. More tokens/parameters/training aren’t going to change that; they’ll just make them better Chinese rooms.
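For a crude sense of what a Chinese room looks like in code (a deliberately silly sketch of the thought experiment, not how LLMs are built): a handwritten rulebook that maps inputs to plausible replies. Nothing in it understands anything, yet within its coverage it holds up its end of an exchange. The argument is that an LLM swaps the handwritten rules for learned statistics, but the relationship between input and output is the same kind of thing.

```python
# A lookup-table "Chinese room": symbol manipulation with zero understanding.
RULEBOOK = {
    "hello": "Hi there! How can I help?",
    "how are you?": "Doing well, thanks for asking.",
    "do you understand me?": "Of course I do.",
}

def room(message: str) -> str:
    # Follow the rulebook; deflect when no rule matches.
    return RULEBOOK.get(message.strip().lower(), "Interesting, tell me more.")

print(room("Do you understand me?"))  # -> "Of course I do."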