I was recently reading Tracy Kidder’s excellent book The Soul of a New Machine.
The author pointed out what a big deal the transition to 32-bit computing was.
However, in the last 20 years, I don’t really remember a big fuss being made about most computers moving to 64-bit as the de facto standard. Why is this?
Well, we are still kind of transitioning to 64-bit. Chip makers/designers still have to include 16- and 32-bit instruction-set support, which does limit, in some respects, how big the transition really is.
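If you want to see the 32-/64-bit difference for yourself, here’s a tiny C sketch (just an illustration, any C99 compiler will do) that reports which mode it was built for:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* sizeof(void *) is 4 bytes on a 32-bit (ILP32) target and
       8 bytes on a typical 64-bit (LP64/LLP64) target. */
    printf("pointer size: %zu bytes\n", sizeof(void *));

#if UINTPTR_MAX == 0xFFFFFFFFu
    puts("compiled as a 32-bit binary");
#else
    puts("compiled as a 64-bit binary");
#endif
    return 0;
}
```

Build the same source twice (e.g. gcc -m32 vs plain gcc on x86-64) and you get different answers, which is exactly why dropping the old modes isn’t free.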
Removing those would break a lot of software, especially removing 32-bit support. Bye-bye, thousands (if not millions) of Windows 95/98/XP games and programs!
One of the big features of Windows is its backwards compatibility.
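That compatibility is largely the WOW64 layer, which runs 32-bit programs on 64-bit Windows. A program can even ask whether it’s running under it via the Win32 IsWow64Process call; rough sketch:

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    BOOL isWow64 = FALSE;
    /* IsWow64Process (kernel32, available since XP SP2) reports
       whether this process is a 32-bit program being run through
       the WOW64 compatibility layer on 64-bit Windows. */
    if (IsWow64Process(GetCurrentProcess(), &isWow64)) {
        puts(isWow64 ? "32-bit process on 64-bit Windows (WOW64)"
                     : "running natively");
    } else {
        fprintf(stderr, "IsWow64Process failed: %lu\n", GetLastError());
    }
    return 0;
}
```

Compiled as a 64-bit binary, the flag comes back FALSE, since there’s nothing to translate.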
Gee, really? Here’s me thinking 32-bit instruction sets were cosmetic. Thank you for ignoring the part where I said we’re still in a transition phase.
Also, with a bit of tinkering, you can run 16-bit applications. It’s just recommended to use virtualisation software instead, because Microsoft doesn’t ensure quality updates for 16-bit applications.
Point is that seldom-used instructions are microcoded anyway, so they take up essentially zero die space on the CPU.
I honestly don’t know what you are talking about. CPUs aren’t storage devices.