The current breed of generative “AI” won’t ‘die out’. It’s here to stay. We are just in the early Wild-West days of it, where everyone’s rushing to grab a piece of the pie, but the shine is starting to wear off and the hype is juuuuust past its peak.
What you’ll see soon is the “enshittification” of services like ChatGPT as the financial reckoning comes, startup variants shut down by the truckload, and the big names put more and more features behind paywalls. We’ve gone past the “just make it work” phase; now we’re moving into the “just make it sustainable/profitable” phase.
In a few generations of chips, the silicon will have made progress in catching up with the compute workload, and cost per task will drop. That’s the innovation to watch now: who will dethrone Nvidia and its H100?
This is why I, as a user, am far more interested in open-source projects that can be run locally on pro/consumer hardware. All of these cloud services are headed down the crapper.
My prediction is that in the next couple years we’ll see a move away from monolithic LLMs like ChatGPT and toward programs that integrate smaller, more specialized models. Apple and even Google are pushing for more locally-run AI, and designing their own silicon to run it. It’s faster, cheaper, and private. We will not be able to run something as big as ChatGPT on consumer hardware for decades (it takes hundreds of gigabytes of memory at minimum), but we can get a lot of the functionality with smaller, faster, cheaper models.
Definitely. I have experimented with image generation on my own mid-range RX GPU and though it was slow, it worked. I have not tried the latest driver update that’s supposed to accelerate those tools dramatically, but local AI workstations with dedicated silicon are the future. CPU, GPU, AIPU?
Hundreds of gigabytes of memory in consumer PCs is not decades away. There are already motherboards that accept 128 GB.
You’re right, I shouldn’t say decades. It will be decades before that’s standard or common in the consumer space, but it could be possible to run on desktops within the next generation (~5 years). It’d just be very expensive.
High-end consumer PCs can currently support 192GB, and that might increase to 256GB within this generation once 64GB DDR5 modules arrive. But we’d need 384GB to run BLOOM, for instance. That requires either a platform that supports more than 4 DIMMs, e.g. Intel Xeon or AMD Threadripper, or 96GB DIMMs (not yet available in the consumer space). Not sure when we’ll get consumer mobos that support that much.
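The 384GB figure follows from back-of-the-envelope math: memory for the weights alone is parameter count times bytes per parameter, plus overhead. A minimal sketch (the function name is mine; BLOOM’s ~176B parameter count is public):

```python
def weight_memory_gib(n_params: float, bytes_per_param: int) -> float:
    """Memory needed just to hold the model weights, in GiB."""
    return n_params * bytes_per_param / 2**30

bloom_params = 176e9  # BLOOM-176B

# fp16 (2 bytes/param): ~328 GiB of weights alone, before activations
# and runtime overhead -- hence the ~384GB ballpark above.
print(f"fp16: {weight_memory_gib(bloom_params, 2):.0f} GiB")

# 8-bit quantization halves that, which is one reason smaller or
# quantized models are the realistic path for consumer hardware.
print(f"int8: {weight_memory_gib(bloom_params, 1):.0f} GiB")
```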
deleted by creator
Technically I could upgrade my desktop to 192GB of memory (4x48). That’s still only about half the amount required for the largest BLOOM model, for instance.
To go beyond that today, you’d need to move beyond the Intel Core or AMD Ryzen platforms and get something like a Xeon. At that point you’re spending 5 figures on hardware.
I know you’re just joking, but figured I’d add context for anyone wondering.
Don’t worry about the RAM. Worry about the VRAM.
deleted by creator
deleted by creator
Flash games did not die out because people stopped playing them. The smartphone arrived, and that changed the entire landscape of small game development.
Steve Jobs killed Flash. It was premeditated.
Flash deserved to die
It was atrocious compared to what we have now. But god fucking dammit I love those games. They mean more to me than a lot of AAA studios.
If it had been killed without an adequate replacement (e.g. mobile gaming), people wouldn’t have let Flash die. There are open-source Flash players.
Flash games didn’t die on their own; the technology was purposely killed off by the same corporate drive to maximize profits.
It died because Safari for iPhone supported only open web standards. Flash was also the leading cause of crashes on the Mac because it was so poorly written. It was a huge security vulnerability and a leading vector for malware, and Adobe just straight up wasn’t able to get it running well on phones. Flash games were also designed with a keyboard and mouse in mind, so many could never work right on touchscreen devices.
There you go, lots of reasons, courtesy of this person here.
deleted by creator
It doesn’t even exist in the medical field; stop lying.
deleted by creator
Not a list of devices in use
None of this image reading tech you refer to exists in actual hospitals yet
deleted by creator
GPT already got way shittier from the version we all saw when it first came out to the heavily curated, walled-garden version now in use.