Yeah, I definitely get your point (and I didn’t downvote you, for the record). But I will note that ChatGPT generates text way faster than most people can read, and 4 tokens/second, while slower than some people’s reading speed, is not that bad in my experience.
This isn’t really true: a lot of the newer MoE models run just fine on a CPU paired with gobs of RAM, since only a fraction of the weights is active for any given token. Yes, they won’t be quite as fast as on a GPU, but getting 128GB+ of VRAM is out of reach for most people.
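For anyone curious what CPU-only inference actually looks like, here’s a minimal sketch using llama-cpp-python (the model path and thread count are placeholders; any GGUF-quantized MoE model works the same way):

```python
from llama_cpp import Llama

# CPU-only inference: n_gpu_layers=0 keeps every layer in system RAM
llm = Llama(
    model_path="models/some-moe-model.Q8_0.gguf",  # placeholder path
    n_ctx=8192,       # context window
    n_threads=32,     # roughly your physical core count
    n_gpu_layers=0,   # no GPU offload at all
)

out = llm("Explain mixture-of-experts in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```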
You can even run DeepSeek R1 671B (Q8) on a Xeon or Epyc with 768GB+ of RAM at 4-8 tokens/sec, depending on configuration. A system like that is at least an order of magnitude cheaper than a GPU setup capable of running the same model.
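Those numbers line up with a simple back-of-the-envelope check: CPU token generation is memory-bandwidth-bound, and since R1 is MoE, only ~37B of the 671B parameters get read per token. A rough ceiling (the ~460 GB/s figure assumes a 12-channel DDR5-4800 Epyc at theoretical peak; sustained bandwidth is lower):

```python
active_params = 37e9     # DeepSeek R1 activates ~37B params per token (MoE)
bytes_per_param = 1.0    # Q8 quantization ~= 1 byte per weight
mem_bw = 460e9           # assumed: 12-channel DDR5-4800 Epyc, theoretical peak

# Every generated token has to stream the active weights from RAM,
# so peak tokens/sec is bandwidth divided by bytes read per token.
print(mem_bw / (active_params * bytes_per_param))  # ~12 tok/s upper bound
```

Real-world sustained bandwidth is well below peak, which is why 4-8 tok/s in practice sounds about right.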