Right now it seems like it's "A.I." Still big now are the wars in the Middle East and Ukraine. Recently we had COVID-19.
What’s next?
It’s still very much AI for a while. The current incarnation is still in relative infancy, and it will only continue to get more capable and disruptive. We’re starting to see its integration with robotics, and this is only going to become more significant with time.
It’s likely that the next big thing will be a consequence of AI.
The current AI boom is all based on a single paper from about 7 years ago, and progress has been achieved by just throwing more and more computing power at it. There have been basically no meaningful architectural improvements in that time, and we are already seeing substantial falloff from throwing more power at the problem. I don’t think it’s a given at all that we are close to the kind of disruption you are predicting.
I don’t understand this deliberately pessimistic perspective I keep seeing around AI development that stubbornly ignores every other technological development in history. Even just considering the singular transformer architecture, we’re still seeing significant and novel improvement. In just a couple years we’ve watched the technology go from basic predictive text to high quality image and even video generation, now to real time robotics control.
The transformer architecture is incredibly powerful and flexible. The notion that the basic technology staying the same is an indication of stagnation is as ridiculous as if you had said the same of transistors half a century ago. Most of the improvement we see in the near future will come through recursive and multi-modal applications, meta-architectural developments that don’t require the core technology to change at all.
I see AI as something that will go the way of VR or cryptocurrencies or self-driving cars, it won’t fully go away but people will realize that it is not suitable for nearly the number of use cases or improving as quickly as it was claimed it would and will sort of forget about it in most of the areas where it is not really improving anything.
I think you’re failing to create a distinction between AI and LLMs.
We use AI every day already. Fuzzy logic, state machines, video game NPCs. All very useful, even if some of it is only used for fun. AI is good. LLMs are only one kind of AI.
Yeah, I was mostly talking about the kinds that appeared in the hype wave in the last few years. Those are not just LLMs though, also generative AI for images and videos and image recognition / classification among others.
If you think computer vision research has used up its limited uses then I can’t even begin to understand where you’re coming from.
Natural language processing even more so; there are endless big uses we’re going to see. It’s like looking at Bell’s telephone and saying “well, the six people in town who like to chat have one, so it’ll probably fizzle out,” or “this train is impressive, Mr. Stephenson, but you’re delivering coal; there’s nothing else we need trains for.”
Computer vision is one of those areas that promised the world but failed to deliver on many of those promises. I am not saying it won’t be useful in a couple of decades for scenarios more complex than what we already have (e.g. “given two images of faces, decide whether they’re the same person” or “find the rectangle in this image” kinds of tasks), but this whole “it will revolutionize everything in the next 5 years” nonsense is clearly wrong. Self-driving cars are one of the main fields showing that computer vision still has severe limits.
And natural language processing is even more broken. Again, I am not saying it wouldn’t be useful if it worked, I am saying it doesn’t work nearly as well as people claim it does and it is not improving as quickly either.
I am not doubting the potential of the working technology, I am doubting that it works. Big difference compared to your historic example.
AI is currently being used in both the wars OP mentioned.
Its primary use is always going to be in Surveillance Capitalism. The idea we can get nice things from it is mainly a consolation prize.
I mean, yes, I can now get AI to draw me a picture or write me an editorial. But meanwhile the IDF can get AI to choose people to kill, and use the “Where’s Daddy” AI program to tell them when someone is at home so they can deliberately bomb him with his family.
So yeah it isn’t much for consumers but it’s not going away for use on us.
I think those use cases show how particularly bad AI really is considering how many wrong targets they have been bombing and how many bad recommendations consumers still get.
“The internet has reached the peak of its usability and will never progress much past its current level.”
This is you in 1997.
I’m not saying AI can’t be disruptive. I’m saying we aren’t there. The steady progress you think you are seeing is bought with increased processing power; the science isn’t advancing steadily, it advances in unpredictable jumps. Because the performance gained from processing power is reaching its peak, we’ll need at least one more of those unpredictable jumps for it to get to the state the comment I was responding to was claiming. That could be another 50 years away, or it could happen tomorrow.
Was there actually a statistical argument for that? IIRC the main argument was most people wouldn’t have a use for it, in the guy’s opinion.
There’s stats for this. It’s not certain, but “we’re nearly at peak LLM” has become a reasonable guess in the last few months.
Which paper is that?
https://arxiv.org/pdf/1706.03762.pdf (“Attention Is All You Need”, Vaswani et al., 2017)
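For anyone curious, the core operation that paper introduces is scaled dot-product attention. Here’s a minimal NumPy sketch of the formula, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V; the shapes and variable names are illustrative, not taken from any real model’s code:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention from "Attention Is All You Need"."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # numerically stable row-wise softmax over the keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted mix of the value rows

# toy example: 2 queries, 3 keys/values, embedding dimension 4
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4)
```

A full transformer stacks many of these (multi-headed, with learned projections for Q, K, and V), but this is the piece that all the scaling has been applied to.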
Thanks for source
I think we need to hit the wall and start over with what we learned, a few times over, to really progress.