I remember seeing 3GHz CPUs 20 years ago already. Isn't technology supposed to improve fast?
I remember when chips first hit 1GHz around 1999. Tech magazines were claiming that we’d hit 7GHz in 5 years.
What they failed to predict is that you start running into major heat issues if you try to go past ~3GHz, which is why CPU manufacturers started focusing on other ways to improve performance, such as multiple cores and better memory management.
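Roughly speaking, dynamic power scales with frequency times voltage squared, and higher clocks generally demand higher voltage, so heat grows far faster than clock speed. Here's a back-of-the-envelope sketch (the linear voltage-vs-frequency scaling is my own rough assumption, not a datasheet figure):

```python
# Rough illustration of the "power wall": dynamic CPU power goes
# roughly as P ~ C * V^2 * f, and pushing f higher usually requires
# raising V too. Assuming V scales ~linearly with f (a crude
# assumption for illustration), heat grows with the CUBE of clock.

def relative_power(freq_ghz: float, base_ghz: float = 3.0) -> float:
    scale = freq_ghz / base_ghz   # frequency factor vs. a 3GHz baseline
    voltage = scale               # assumed: V must rise ~linearly with f
    return scale * voltage ** 2   # P ~ f * V^2, normalized to 1.0 at base

for f in (3.0, 5.0, 7.0):
    print(f"{f} GHz -> ~{relative_power(f):.1f}x the heat of a 3 GHz part")
```

By that rough math, the 7GHz chip the magazines promised would dump over 12x the heat of a 3GHz one into the same die area. No wonder they gave up.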
Just use the heat to power the machine.
Yeah, that’s how it works.
The ultimate fix remains largely unexplored: reversible computing uses 0-1 pairs. Basically you do your logic on the left one, and swap the pair to invert the logical bit. Because the charge is simply moved around, instead of being grounded away or driven up from the supply, negligible entropy is produced.
I suspect you’d have to sink the values eventually… but I expect you could send them off “behind the woodshed” for that. Do all the work in some tiny flake of silicon, then transmit a stream of noise to a big dumb block of metal.
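A toy sketch of that pair-swapping idea, in Python rather than actual circuitry (the encoding and every name here are mine, just for illustration): each logical bit is a pair of rails carrying exactly one unit of charge, and every gate is a swap/permutation, so nothing is ever discarded and each step can be undone.

```python
# Toy model of dual-rail reversible logic: a logical bit is a pair
# of rails (rail0, rail1) holding exactly one unit of charge. Gates
# only move the charge between rails, never dump it, so every
# operation is invertible. All names are made up for illustration.

def encode(value: bool) -> tuple[int, int]:
    """Encode a logical bit as a (rail0, rail1) charge pair."""
    return (0, 1) if value else (1, 0)

def rev_not(bit: tuple[int, int]) -> tuple[int, int]:
    """Reversible NOT: just swap the rails; no charge is destroyed."""
    rail0, rail1 = bit
    return (rail1, rail0)

def rev_cnot(control, target):
    """Reversible controlled-NOT: flip target iff control encodes 1.
    It's its own inverse, so applying it twice restores the inputs."""
    return (control, rev_not(target) if control == encode(True) else target)

a, b = encode(True), encode(False)
a2, b2 = rev_cnot(a, b)
assert (a2, b2) != (a, b)          # the gate changed the state...
assert rev_cnot(a2, b2) == (a, b)  # ...and running it again undoes it
```

The point of the asserts is that no step loses information, which is exactly why (per Landauer) it doesn't have to dissipate heat. The unavoidable erasure can happen later, "behind the woodshed" as above.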
And it has. The phone you have is faster than the 3GHz chip back then. A phone powered by a battery. And faster by like 20 times.
My dad had one of the first consumer 3GHz chips available. By the time I inherited it in 2009, it was completely outclassed by a <2GHz dual-core laptop.
That would've been a single 3GHz CPU core. Now we have dozens in one chip. Also, the instruction sets and microcode have gotten way better since then as well.
Clock speed isn't improving that quickly anymore. Other aspects have been improving faster instead: more optimized power consumption, memory speeds, cache sizes, instructions that take fewer cycles, and more cores.
We're running into hard physical limits now. The transistors in each chip are so small that, if they got any smaller, they'd start running into quantum effects that would render them unreliable.