x86 came out in 1978.

21 years later, x64 came out in 1999.

We are three years overdue for a shift, and I don’t mean to ARM. Is there just no point to it? 128-bit computing is a thing and has been in the talks since 1976, according to Wikipedia. Why hasn’t it been widely adopted by now?

  • GomaEspumaRegional@alien.topB · 1 year ago

    Because there is no need from an address space or compute standpoint.

To understand how large a 128-bit address space really is: a full 64-bit address space is already 16 exabytes, and a 128-bit space is 2^64 times (roughly 18 quintillion times) larger than that. No physically buildable memory comes anywhere close.

    In the rare cases where you need to deal with a 128-bit integer or floating-point value, you can do it in software with not that much overhead by combining pairs of 64-bit registers/ops. There hasn’t been enough pressure, in terms of use cases that need 128-bit int/FP precision, for manufacturers to invest the die area in direct HW support for it.
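
    A minimal sketch of that idea in Python (illustrative only; on a real 64-bit CPU the compiler typically lowers this to an add / add-with-carry pair):

    ```python
    # Emulate a 128-bit unsigned add with two 64-bit "registers" (hi, lo)
    # and an explicit carry, the way it is done in software on 64-bit hardware.
    MASK64 = (1 << 64) - 1

    def add128(a_hi, a_lo, b_hi, b_lo):
        lo_full = a_lo + b_lo
        lo = lo_full & MASK64                # low 64 bits of the sum
        carry = lo_full >> 64                # 1 if the low halves overflowed
        hi = (a_hi + b_hi + carry) & MASK64  # high 64 bits, plus the carry
        return hi, lo

    # (2^64 - 1) + 1 = 2^64, i.e. hi = 1, lo = 0
    print(add128(0, MASK64, 0, 1))  # (1, 0)
    ```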

    FWIW, there have been 64-bit computers since the ’60s/’70s.

  • roninIB@alien.topB · 1 year ago

    I think what you need to know, in layman’s terms, is that 128-bit is not double 64-bit. 65-bit is double the range of 64-bit.

    128-bit is an absurdly huge amount. And 64-bit is already so much that even I, as a radar engineer, don’t have to worry about it for a second.
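
    To put numbers on that (a quick check in Python, which has arbitrary-precision integers):

    ```python
    # Each extra bit doubles the range: 65 bits is only twice 64 bits,
    # while 128 bits is 2^64 times larger than 64 bits.
    print(2**64)            # 18446744073709551616 (~1.8e19)
    print(2**65)            # exactly double the line above
    print(2**128)           # 340282366920938463463374607431768211456 (~3.4e38)
    print(2**128 // 2**64)  # 18446744073709551616 -> 2^64 times larger
    ```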

  • plebbitier@alien.topB · 1 year ago

    There have been a number of 128-bit systems over the years.
    As it is, 64-bit should be good for the life of x86.

  • ET3D@alien.topB · 1 year ago

    Lots of good responses regarding why 128-bit isn’t a thing, but I’d like to talk about something else.

    Extrapolating from two data points is folly. It simply can’t work. You can’t take two events, calculate the time between them, and then assume that the next event will follow after the same amount of time.

    Besides, your points are wrong. (Edit: That also has been mentioned in another response.)

    x86 (8086) came out in 1978 as a 16-bit CPU. 32-bit came with the 386 in 1985. x64, although described in 1999, was released in 2003.

    So now you have three data points: 1978 for 16-bit, 1985 for 32-bit and 2003 for 64-bit. Differences are 7 years and 18 years.

    Not that extrapolating from 3 points is good practice, but at least it’s more meaningful. You could, for example, conclude that it took about 2.5 times as long to move from 32-bit to 64-bit as it did from 16-bit to 32-bit. Multiply 18 years by 2.5 and you get 45 years, so the move from 64-bit to 128-bit would be expected around 2003 + 45 = 2048.

    This is nonsense, of course, but at least it’s a calculation backed by some data (which is still rather meaningless data).
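
    For what it’s worth, here is that back-of-the-envelope as a few lines of Python (same data, same rounding of the growth factor to 2.5):

    ```python
    # Ratio-based extrapolation from the three x86 data points above.
    gap_16_to_32 = 1985 - 1978            # 7 years
    gap_32_to_64 = 2003 - 1985            # 18 years
    growth = gap_32_to_64 / gap_16_to_32  # ~2.57, rounded to 2.5 above

    gap_64_to_128 = gap_32_to_64 * 2.5    # 45 years
    print(2003 + gap_64_to_128)           # 2048.0
    ```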

  • johnklos@alien.topB · 1 year ago

    This is a bit pedantic, but x64 refers to Alpha, which existed long before 1999. 64 bit x86 (x86-64, or amd64) wasn’t purchasable until 2003, although it was announced in 2000.

    There were several additional shifts between 1978 and 2003:

    • 8088 / 8086 have what’s essentially segmented 16-bit addressing (segment × 16 + offset), which gives 1 MB, or 2^20 bytes
    • 80286 has physical support for 16 megs, or 2^24 bytes
    • 80386 has physical support for 4 gigs, or 2^32 bytes
    • Pentium Pro has PAE support for 64 gigs, or 2^36 bytes
    • AMD Opteron from 2003 has support for 1024 gigs, or 1 terabyte, or 2^40 bytes
    • Current AMD and Intel CPUs support anywhere between 2^48 and 2^57 bytes of address space (256 terabytes to 128 petabytes)

    But let’s just use three data points: the 8086/8088, the 80386, and the first AMD Opteron, counting the Opteron as a full 64 bits of addressing:

    • 8086 / 8088, 1978, 20 bits
    • 80386, 1985, 32 bits
    • AMD Opteron, 2003, 64 bits

    1978 to 1985 is 7 years, with a change in addressing of 12 bits, or about 0.6 years per additional bit.

    1985 to 2003 is 18 years, with a change in addressing of 32 bits, or about 0.56 years per additional bit. So far, pretty consistent.

    How long would it take to go from 64 bits to 128 bits? At around 0.56 years per bit, those extra 64 bits would take about 36 years, and we’ve had twenty so far.

    Check back in about 16 years.
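
    The same linear extrapolation as a short Python sketch (just the three data points above, nothing more):

    ```python
    # Years-per-additional-address-bit, fitted segment by segment.
    points = [(1978, 20), (1985, 32), (2003, 64)]   # (year, address bits)
    (y0, b0), (y1, b1), (y2, b2) = points

    rate_a = (y1 - y0) / (b1 - b0)   # ~0.58 years per extra bit (1978 -> 1985)
    rate_b = (y2 - y1) / (b2 - b1)   # ~0.56 years per extra bit (1985 -> 2003)

    years_to_128 = (128 - b2) * rate_b   # 64 more bits * ~0.56 years/bit ~= 36 years
    print(round(y2 + years_to_128))      # ~2039
    ```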

  • dont_roast_me@alien.topB · 1 year ago

    Did OP just look at the pattern between two numbers, look at the date difference, and whip up this conclusion?

  • PolyDipsoManiac@alien.topB · 1 year ago

    How much memory can you address with 64 bits versus 32 bits? Are we approaching devices with that capacity yet?
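
    For reference, the two limits (assuming byte addressing), sketched in Python:

    ```python
    # Maximum byte-addressable memory for 32-bit and 64-bit addresses.
    GiB = 2**30
    EiB = 2**60

    print(2**32 // GiB, "GiB")  # 4 GiB with 32-bit addresses
    print(2**64 // EiB, "EiB")  # 16 EiB (about 17 billion GiB) with 64-bit addresses
    ```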

  • cloud_t@alien.topB · 1 year ago

    Let me put it this way: computing is evolving in a direction where SMALLER data types are actually more important for new kinds of algorithmic needs. AI/ML is a great example: you program against specialty frameworks such as CUDA or TensorFlow, which want values as small as 8-bit so that more work fits in the GPU and in L1/L2 cache and gets done faster. GPU hardware, for instance, is designed with 8-bit and 16-bit processing units in mind.
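
    As a rough illustration of why smaller types matter (a sketch using NumPy rather than a GPU framework; the cache and bandwidth argument is the same), the same array shrinks 8x going from 64-bit to 8-bit elements:

    ```python
    import numpy as np

    # One million elements at different precisions.
    n = 1_000_000
    for dtype in (np.float64, np.float16, np.int8):
        a = np.zeros(n, dtype=dtype)
        print(dtype.__name__, a.nbytes, "bytes")  # 8000000, 2000000, 1000000 bytes
    ```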

    Larger registers only really help a portion of computing, and the odd large register you do need can be emulated with a combination of smaller registers without affecting performance THAT much.

  • Fireline11@alien.topB · 1 year ago

    If it took 21 years to go from 32-bit to 64-bit, imagine it will take about 21^2 = 441 years to go from 64-bit to 128-bit. This is because 2^128 / 2^64 is the square of 2^64 / 2^32.

    • auradragon1@alien.topB · 1 year ago

      Technology moves at an exponential pace. The time it took to go from 8 bit to 16 bit to 32 bit to 64 bit got shorter and shorter.

  • kaszak696@alien.topB · 1 year ago

    What would even be the point? How would you justify the expense of making modern CPUs 128-bit (transistors and R&D aren’t free)? We aren’t anywhere near the limit of 64-bit addressing and won’t reach it for decades; modern “64-bit” consumer CPUs don’t even bother implementing the full 64-bit address space, since it wouldn’t be of any use anyway. For 128-bit arithmetic, it’s easily doable without any architectural changes.