I have so far seen two working AI applications that actually make sense, both in a hospital setting:
Assisting oncologists in reading cancer images. It's still the oncologists who make the call, but it seems to be of use to them.
Creating a first draft when transcribing dictated notes. Listening and correcting is apparently faster for most people than listening and writing from scratch.
These two are nifty, but they don't make a multi-billion-dollar industry.
In other words, the bubble is bursting and the value/waste ratio looks extremely low.
Say what you want about the Tulip bubble, but at least tulips are pretty.
This is why you should never allow the use of the marketing term “AI”, and instead always refer to the specific technologies.
The use case for the term “AI” is to conflate things that work (ML) with things that don’t work (LLMs).
Ok, point on language.
But I thought LLMs were machine learning, or rather a particular application of it? Have I misunderstood that? Isn't it all black-boxed matrices of statistical connections?
they’re related in that sense, but what they learn is which token to generate next.
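(To make that concrete: a toy sketch of "learning which token to generate next", using bigram counts instead of a neural network. Real LLMs are vastly bigger and use transformers, but the training objective is the same idea — predict the next token from what came before. The corpus and names here are made up for illustration.)

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; real models train on trillions of tokens.
corpus = "the cat sat on the mat the cat ran".split()

# "Training": count which token follows which.
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def predict_next(token):
    # "Generation": return the most frequently observed next token,
    # or None if the token was never seen during training.
    if token not in follows:
        return None
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" ("cat" follows "the" twice, "mat" once)
```

Everything the model "knows" is just statistics over its training text, which is why it can produce fluent continuations without any notion of whether they're true.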
this is something I’ve been mulling over for a while now too. there are lots of little boring ways in which some of the ML stuff definitely does work, but none of them are in the shape of anything the hypemen have been shouting. and afaict none of them will be able to justify all the investment either (and only some will be able to justify the compute, even then)
couple months back I speculated in one of the threads here that I believe one of the reasons there's such a hard push to get the llms and shit into as much as possible now is because it'll be harder to remove after the air starts going out - and thus buy more time/runway/rent-extraction