x86 came out in 1978;
21 years later, x64 came out in 1999.
We're three years overdue for another shift, and I don't mean to ARM. Is there just no point to it? 128-bit computing is a thing and has been talked about since 1976, according to Wikipedia. Why hasn't it been widely adopted by now?
I’ll answer your question with a question: what are you doing that requires 128-bit computations?
After that, a follow-up question: is it so important that you’re willing to cut your effective RAM in half to do it?
Why would it be cutting your effective RAM in half? I know very little about hardware/software architecture and all that.
Imagine we have an 8-bit (1-byte) architecture, so data is stored and processed in 8-bit chunks.
If our RAM holds 256 bits, we can store 32 pieces of data in that RAM (256/8).
If we switch to a 16-bit architecture, that same physical RAM now only has the capacity to hold 16 values (256/16). Each value can be significantly bigger, but we get fewer of them.
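To make that arithmetic concrete, here's a toy Python sketch. The 256-bit RAM is the made-up example from above, not a real machine:

```python
# How many word-sized slots fit in a fixed amount of RAM?
RAM_BITS = 256  # toy figure from the example above

for word_size in (8, 16, 64, 128):
    slots = RAM_BITS // word_size
    print(f"{word_size:3}-bit words: {slots} slots")

# 8-bit words give 32 slots; 128-bit words give only 2.
```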
Bits don’t appear out of nowhere: they take physical space, and there is a cost to creating them. We have a tradeoff between the number of values we can store and the size of each value.
For reference, per chunk (or “word”) of data:
With 8 bits, we can hold 256 values.
With 64 bits, we can hold 18,446,744,073,709,551,616 values.
With 128 bits, we can hold 340,282,366,920,938,463,463,374,607,431,768,211,456 values.
(For X bits, it’s 2^X)
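If you want to check those counts yourself, Python's arbitrary-precision integers make it a one-liner per width:

```python
# Number of distinct values an X-bit word can represent: 2**X
for bits in (8, 64, 128):
    print(f"{bits:3} bits -> {2**bits:,} distinct values")
```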
Maybe one day we’ll get there, but for now, 64 bits seems to be enough for at least consumer-grade computations.
Oh for fuck’s sake, I replied to a bot.
To the dev that’s spamming Lemmy with this garbage: You aren’t making Lemmy better. You’re actively making it a worse experience.