• Buffalox@lemmy.world · +140/-6 · 1 month ago

    Everybody in the know knows that x86 64-bit was held back to push Itanium. Intel was all about market segmentation, which is also why the Celeron was crippled on, for instance, RAM compared to the Pentium.
    Market segmentation has a profit-maximization motive: you are not allowed to use cheap parts for things you are supposed to buy expensive parts for. Itanium was supposed to be the only viable CPU for servers, and keeping x86 at 32 bits was part of that strategy.
    That AMD succeeded with 64-bit while Itanium failed was karma well deserved for Intel.

    Today it’s obvious how moronic Intel’s policy back then was, because even phones got 64-bit CPUs back in 2013.
    32 bits is simply too much of a limitation for many even pretty trivial tasks. And modern x86 chips are in fact NOT 64-bit anymore, but hybrids that handle tasks with 256 bits routinely, and some even with 512 bits, with instruction extensions that have become standard on both Intel and AMD.

    When AMD came out with Ryzen, Threadripper, and Epyc, and prices scaled very proportionally to performance, with none of them artificially hampered, it was such a nice breath of fresh air.

    • barsoap@lemm.ee · +39/-1 · 1 month ago

      And modern x86 chips are in fact NOT 64-bit anymore, but hybrids that handle tasks with 256 bits routinely, and some even with 512 bits, with instruction extensions that have become standard on both Intel and AMD.

      On a note of technical correctness: That’s not what the bitwidth of a CPU is about.

      By your account a 386DX would be an 80-bit CPU because it could handle 80-bit floats natively, and the MOS6502 (of C64 fame) a 16-bit processor because it could add two 16-bit integers. Or maybe 32 bits because it could multiply two 16-bit numbers into a 32-bit result?
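
      To make the 8-bit point concrete, a minimal sketch in C (standing in for the 6502’s CLC/ADC-with-carry sequence) of how an 8-bit ALU adds two 16-bit numbers one byte at a time:

          #include <stdint.h>
          #include <stdio.h>

          int main(void)
          {
              uint16_t a = 0x12F0, b = 0x0345;

              /* low bytes first; the result wraps around at 8 bits */
              uint8_t lo = (uint8_t)(a & 0xFF) + (uint8_t)(b & 0xFF);
              /* the carry flag: did the low-byte addition wrap? */
              uint8_t carry = lo < (uint8_t)(a & 0xFF);
              /* high bytes plus carry, the ADC part */
              uint8_t hi = (uint8_t)(a >> 8) + (uint8_t)(b >> 8) + carry;

              printf("0x%02X%02X (expected 0x%04X)\n", hi, lo, (uint16_t)(a + b));
              return 0;
          }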

      In reality the MOS6502 is considered an 8-bit CPU, and the 386 a 32-bit one. The “why” gets more complicated, though: the 6502 had a 16-bit address bus and an 8-bit data bus, the 386DX a 32-bit address and data bus, and the 386SX a 32-bit address bus and a 16-bit external data bus.

      Or, differently put: Somewhere around the time of the fall of the 8 bit home computer the common understanding of “x-bit CPU” switched from data bus width to address bus width.

      …as, not to make this too easy, understood by the instruction set, not the CPU itself: modern 64-bit processors use pointers which are 64 bits wide, but their address buses usually are narrower. x86_64 only requires 48 bits to be actually usable; the left-over bits are required to be either all ones or all zeroes (enforced by hardware to keep people from bit-hacking and causing forward-compatibility issues; IIRC the 1/0 distinguishes between user and kernel memory mappings, it’s been a while since I read the architecture manual). Addressable physical memory might be even lower, again IIRC. 2^48 bytes is 256 TiB; no desktop system can fit that much, and I doubt the processors in there could address it.
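
      A minimal sketch of that canonical-address rule, assuming the usual 48-bit virtual addressing (the constants below are just illustrative values):

          #include <stdbool.h>
          #include <stdint.h>
          #include <stdio.h>

          /* With 48-bit virtual addresses, a pointer is "canonical" when bits 63..47
             are all copies of bit 47: all zeroes (lower half, user space) or all ones
             (upper half, conventionally the kernel's mappings). */
          static bool is_canonical_48(uint64_t addr)
          {
              uint64_t upper = addr >> 47;            /* the top 17 bits */
              return upper == 0 || upper == 0x1FFFF;  /* all zeroes or all ones */
          }

          int main(void)
          {
              printf("%d\n", is_canonical_48(0x00007fffffffe000ULL)); /* 1: typical user-space address */
              printf("%d\n", is_canonical_48(0xffff800000000000ULL)); /* 1: start of the upper half */
              printf("%d\n", is_canonical_48(0x0000800000000000ULL)); /* 0: non-canonical, faults if used */
              return 0;
          }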

      • Buffalox@lemmy.world · +1/-1 · 1 month ago

        By your account a 386DX would be an 80-bit

        And how do you figure that? The Intel 80386DX did NOT have any 80-bit instructions at all; the built-in math co-processor only came with the i486. The only way to get 80-bit instructions on an 80386DX system was to add an 80387 math co-processor.

        But you obviously don’t count by a few extended instructions, but by the architecture of the CPU as a whole. And in that regard, the data bus is a very significant part that directly influences the speed and the number of clock cycles of almost everything the CPU does.

        • barsoap@lemm.ee · +4 · 1 month ago

          The Intel 80386DX did NOT have any 80-bit instructions at all; the built-in math co-processor only came with the i486.

          You’re right, I misremembered.

          And in that regard, the data bus is a very significant part that directly influences the speed and the number of clock cycles of almost everything the CPU does.

          For those old processors, yes, that’s why the 6502 was 8-bit. For modern processors, though? You don’t even see it listed on spec sheets. Instead, for the external stuff, you see the number of memory controllers and PCIe lanes, while everything internal gets mushed up in IPC. An “it’s wide enough not to stall the pipeline, what more do you want” kind of attitude.

          Go look at anything post-2000: 64 bit means that pointers take up 64 bits. 32 bits means that pointers take up 32 bits. 8-bit and 16-bit are completely relegated to microcontrollers, I think keeping the data bus terminology, and soonish they’re going to be gone because everything at that scale will be RISC-V, where “RV32I” means… pointers. So do “RV64I” and “RV128I”. RV16E was proposed as an April Fools’ joke, and it’s not completely out of the question that it’ll happen. In any case there won’t be an RV8, because CPUs with an 8-bit address bus are pointlessly small, and “the number refers to pointer width” is the terminology of <currentyear>. An RV16 CPU might have a 16-bit data bus, it might have an 8-bit data bus, heck, it might have a 256-bit data bus because it’s actually a DSP and has vector instructions. Sounds like a rare beast, but not entirely nonsensical.

          • Buffalox@lemmy.world · +1 · 1 month ago

            You don’t even see it listed on spec sheets.

            Doesn’t mean it’s any less important; it’s just not a good marketing measure, because average people wouldn’t understand it anyway, and it wouldn’t be correct to measure by the data bus alone.
            As I stated, it’s MORE complex today, not less, as the downvoters of my posts seem to refuse to acknowledge. The first Pentium had a 64-bit data bus for a 32-bit CPU, exactly because data transfer is extremely important. The first Arm CPU was designed around making RAM access/management as fast as possible, and it beat the 386 several times over, with a tenth of the transistors.

            Go look at anything post-2000: 64 bit means that pointers take up 64 bits. 32 bits means that pointers take up 32 bits.

            Although true, this is a very simplistic way to view it, and not relevant to the actual overall bit width of the CPU, as I’ve tried to demonstrate, but people apparently refuse to acknowledge.
            But the bit width of the data bus is very important, and it was debated heavily whether it was even legal to market the 68008-based Sinclair QL as a 32-bit computer, because it only had an 8-bit data bus.

            But as I stated, other factors are equally important, and the decoder is way more important than the core instruction set: modern higher-end decoders operate at 256 bits or more, allowing them to decode multiple (4) instructions per cycle, in turn allowing each core to execute multiple instructions per clock, in 2 threads. Without that capability, each core would only be about a third as fast.
            To claim that the instruction set determines bit width is simplistic, and you yourself argued against it, because that would mean an i486 would be an 80-bit CPU. And obviously today’s CPUs would be 512-bit, because they have 512-bit instructions.

            Calling it 64-bit is exclusively meant to distinguish newer CPUs from older 32-bit CPUs, and we’ve done that since the 90s. Claiming that new CPU architectures haven’t increased in bit width for 30 years is simply naive and false, because they have, in many more significant ways than the base instruction set.

            Still, I acknowledge that an AARCH64 or AMD64 or i64 CPU is generally called 64-bit; it was never the point to refute that. Only that it’s a gross simplification of what modern CPUs have become, and that it’s not technically correct.

            Let me finish with a question:
            With a multi-core CPU where each core is, let’s just say, 64-bit, how many bits is the whole CPU package? That package is what we call the “CPU” today; when saying CPU we are not generally talking about the individual cores, because then it would have to be plural.

            • barsoap@lemm.ee · +4 · 1 month ago

              As I stated it’s MORE complex today, not less, as the downvoters of my posts seem to refuse to acknowledge.

              The reason you’re getting downvoted is that you’re saying “64-bit CPU” means something different from what it is universally acknowledged to mean. It means pointer width.

              Yes, other numbers are important. Yes, other numbers can be listed in places. No, it’s not what people mean when they say “X-bit CPU”.

              claiming that new CPU architectures haven’t increased in bit width for 30 years is simply naive and false, because they have in many more significant ways than the base instruction set.

              RV128 exists. It refers to pointer width. Crays existed; by your account they were gazillion-bit machines because they had quite chunky vector lengths. Your Ryzen does not have a larger “data bus” than a Cray-1, which had 4096-bit (you read that right) vector registers. They were never called 4096-bit machines; the Cray-1 has a 64-bit architecture because that’s the pointer width.

              Yes, the terminology differs when it comes to 8 vs. 16-bit microcontrollers. But just because data bus is that important there (and 8-bit pointers don’t make any practical sense) doesn’t mean that anyone is calling a Cray a 4096 bit architecture. You might call them 4096 bit vector machines, and you’re free to call anything with AVX2 a 256-bit SIMD machine (though you might actually be looking at 2x 128-bit ALUs), but neither makes them 64-bit architectures. Why? Because language is meant for communication and you don’t get to have your own private definition of terms: Unless otherwise specified, the number stated is the number of bits in a pointer.

              • Buffalox@lemmy.world · +2 · 1 month ago

                It means pointer width.

                https://en.wikipedia.org/wiki/64-bit_computing

                64-bit integers, memory addresses, or other data units[a] are those that are 64 bits wide. Also, 64-bit central processing units (CPU) and arithmetic logic units (ALU) are those that are based on processor registers, address buses, or data buses of that size.

                It also mentions address buses, but as I mentioned before, a 64-bit address bus doesn’t exist. So it boils down to the instruction set as a whole requiring 64-bit processor registers and data bus.
                Obviously 64 bits means the registers are 64-bit; the addresses are therefore also 64-bit, otherwise it would require type casting every time you need to make calculations on them. But it’s the ability to handle 64-bit registers in general that counts, not the address registers, which are merely a byproduct.

                • barsoap@lemm.ee · +2/-2 · 1 month ago

                  1. The whole article overall lacks sources.
                  2. That section is completely unsourced.
                  3. It doesn’t say what you think it says.

                  You were arguing the definition of “X-bit CPU”. We’re not talking about “X-bit ALU”. It’s also not in contention that “a 64-bit integer is 64 bits wide”. So, to the statement:

                  Also, 64-bit central processing units (CPU) and arithmetic logic units (ALU) are those that are based on processor registers, address buses, or data buses of that size.

                  This does not say which of “processor registers, address buses, or data buses” applies to the CPU and which to the ALU.

                  Obviously 64 bits means registers are 64 bit, the addresses are therefore also 64 bit,

                  Having 64-bit registers doesn’t necessitate that you have 64-bit addresses. It’s common, incredibly common, for the integer registers to match the pointer width, but there’s no hard requirement in theory or practice. It’s about as arbitrary a rule as “instruction length must be wider than the register size”, so that immediate constants fit into the instruction stream; makes sense, doesn’t it… and then along come RISC architectures and split load-immediate into two instructions.

                  otherwise it would require type casting every time you need to make calculations on them

                  Processors don’t typecast. Please stop talking.

              • Buffalox@lemmy.world · +1 · 1 month ago

                It means pointer width.

                Where did you get that from? Because that’s false; please show me documentation for that.
                64-bit always meant the ability to handle 64-bit wide instructions, and because the architecture is 64-bit, the pointers are INTERNALLY 64-bit, but effectively they are only, for instance, 40-bit when accessing data.
                Your claim about pointer width simply doesn’t make any sense.
                That the CPU should be called by a single aspect it can’t actually handle!!! That’s moronic.

                • barsoap@lemm.ee · +2/-1 · 1 month ago

                  That the CPU should be called by a single aspect they can’t actually handle!!! That’s moronic.

                  People literally use the word “literally” to mean figuratively. It doesn’t make any sense. One might even call it moronic.

                  But it’s the way it’s done. Deal with it.

      • Buffalox@lemmy.world · +1/-16 · 1 month ago

        By your account a 386DX would be an 80-bit CPU because it could handle 80-bit floats natively,

        No, that’s not true; it’s way, way more complex than that. Some consider the data bus the best measure, another could be the decoder. I could also have measured a normal CPU’s bit width by how many cores it has: with each core handling up to 4 instructions per cycle, a core could be 256-bit, and with an average 8-core CPU that would be 2048-bit.

        There are several ways to evaluate it, like data bus, ALU, decoder, etc., but most reasonable ways to measure it hover around 256 bits, and none come out below 128 bits.
        There is simply no reasonable way to argue a modern Ryzen CPU or Intel equivalent is below 128-bit.

        • sugar_in_your_tea · +20/-1 · 1 month ago

          There is simply no reasonable way to argue a modern Ryzen CPU or Intel equivalent is below 128-bit.

          There absolutely is, and the person you responded to made it incredibly clear: address width. Yeah, we only use 48-bit addresses, but addresses are 64-bit, and that’s the key difference that the majority of the market understands between 32-bit and 64-bit processors. The discussion around “32-bit compatibility” is all about address size.
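
          A trivial sketch of that definition as software sees it (my own illustration): the same source prints 32 when built as a 32-bit binary (e.g. gcc -m32) and 64 when built as a 64-bit one.

              #include <stdio.h>

              int main(void)
              {
                  /* the width of a pointer is what "32-bit" vs "64-bit" refers to here */
                  printf("pointer width: %zu bits\n", sizeof(void *) * 8);
                  return 0;
              }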

          And there’s also instruction size. Yes, the data it operates on may be bigger than 64-bit, but the instructions are capped at 64-bit. With either definition, current CPUs are clearly 64-bit.

          But perhaps the most important piece here is consumer marketing. Modern CPUs are marketed as 64-bit (based on both of the above), and that’s what the vast majority of people understand the term to mean. There’s no point in coming up with another number, because that’s not what the industry means when they say a CPU is 64-bit or 32-bit.

          • Buffalox@lemmy.world · +2/-16 · 1 month ago

            Edited for clarity

            we only use 48-bit addresses, but addresses are 64-bit

            You are stating the register width, which is irrelevant to the width of the address bus. But that doesn’t make a shred of sense; it’s like claiming a road is 40,000 km long around the globe when it’s just not finished, so you can only drive on a few km of it. The registers are 64-bit, but “only” 40 bits can be used. Enough to address 1 terabyte of RAM.

            If you want to measure by address width, we don’t have a single 64-bit CPU, because there doesn’t exist a 64-bit CPU that has a 64-bit address bus.

            • sugar_in_your_tea · +22/-1 · 1 month ago

              no CPU has ever been called by the width of the address bus EVER.

              Yes they have, and that’s what the vast majority of people mean when they say a CPU is 32-bit or 64-bit. It was especially important in the transition from 32-bit to 64-bit because of all the SW changes that needed to be made to support 64-bit addresses. It was a huge thing in the early 2000s, and that is where the nomenclature comes from.

              Before that big switch, it was a bit more marketing than anything else and frequently referred to the size of the data the CPU operated on. But during and after that switch, it shifted to address sizes, and instructions (not including the data) are also 64-bit. The main difference w/ AVX vs a “normal” instruction is the size of the registers used, which can be up to 512-bit, vs a “normal” 64-bit register. But the instruction remains 64-bit, at least as far as the rest of the system is concerned.
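
              A minimal sketch of that distinction (assuming an AVX2-capable x86-64 machine and a compiler flag like -mavx2): one instruction operates on a 256-bit register, while the pointers and addresses it works through stay 64-bit.

                  #include <immintrin.h>
                  #include <stdio.h>

                  int main(void)
                  {
                      long long a[4] = {1, 2, 3, 4};
                      long long b[4] = {10, 20, 30, 40};
                      long long out[4];

                      __m256i va = _mm256_loadu_si256((const __m256i *)a);  /* 256-bit load */
                      __m256i vb = _mm256_loadu_si256((const __m256i *)b);
                      __m256i vc = _mm256_add_epi64(va, vb);                /* four 64-bit adds in one instruction */
                      _mm256_storeu_si256((__m256i *)out, vc);

                      printf("%lld %lld %lld %lld (pointer width: %zu bits)\n",
                             out[0], out[1], out[2], out[3], sizeof(void *) * 8);
                      return 0;
                  }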

              Hence CPUs are 64-bit: all of the interface between the CPU and the rest of the system is with 64-bit instructions and 64-bit addresses. Whether the CPU does something fancy under the hood with more than 64 bits (i.e. registers and parallel processing) is entirely irrelevant; the interface is 64-bit, therefore it’s 64-bit.

              • Buffalox@lemmy.world · +1/-10 · 1 month ago

                yes they have, and that’s what the vast majority of people mean when they say a CPU is 32-bit or 64-bit

                Nobody ever called the purely 8-bit Motorola 6800, MOS Technology 6502, Zilog Z80, or Intel 8080 “16-bit computers” for having a 16-bit address bus. They had 8-bit instruction and data buses, and were called 8-bit chips. The purely 16-bit Intel 8086 wasn’t called a 20-bit CPU for having a 20-bit address bus; it was called a 16-bit CPU for having a 16-bit instruction set and data bus. Nor was the Motorola 68000 a 24-bit CPU for having a 24-bit address bus; it was a 32-bit CPU for having a 32-bit instruction set.

                I have no idea how you are upvoted, because your claim that CPUs are called by their address bus bit length is decidedly false.
                The most common is to go by the data bus or instruction set, and now also the instruction decoder and other things, because the complexity has evolved. But no 64-bit CPU has a 64-bit address bus, because that would be ridiculous.

                Back in the day, it was mostly instruction set; then it became instruction set / data bus. Today it’s way, way more complex, and we may call it x86-64, but that’s the instruction set; the modern x86-64 CPU is not 64-bit anymore. They are hybrids of many bit widths.

                Show me just ONE example of a CPU that was called by its address bus.

                https://people.ece.ubc.ca/edc/379.jan2000/lectures/lec2.pdf

                Tell me when the 8086 and 8088 were called 20-bit CPUs!!

                https://www.alldatasheet.com/datasheet-pdf/view/82483/MOTOROLA/MC6800.html

                The 6800 was an 8-bit CPU with a 16-bit address bus, as were the 6502/6510.

                https://en.wikipedia.org/wiki/Motorola_68000

                The 68000 is here correctly called 16/32 because it has a 16-bit data bus and a 32-bit instruction set.
                The address bus is 24-bit, but never has a CPU been called 20- or 24-bit because of its address bus, despite many 16-bit CPUs having had address buses of that length.
                Incidentally, the MOS 6510 in the Commodore 64 had an extra 17th address bit, enabling it to use ROM and cartridges together with the 64 KB of RAM. It would be absolutely ridiculous to call it either a 16- or 17-bit computer, and by no accepted standard would it be called that.

                • sugar_in_your_tea · +3 · 1 month ago

                  Nobody ever called the purely 8-bit Motorola 6800

                  Sure, but that was a long time ago. Lithography marketing also used to make sense when it was actually based on real measurements, but times change.

                  All those chips you’re talking about were from >40 years ago. Times change.

                  Today it’s way way more complex, and we may call it x86-64, but that’s the instruction set, the modern x86-64 CPU is not 64 bit anymore.

                  Sure, yet when someone describes a CPU, we talk about the instruction set, so we talk about 32-bit vs 64-bit instructions. That’s how the terminology works.

                • tekato@lemmy.world · +5/-2 · 1 month ago

                  I guess you know more about hardware nomenclature than the Linux kernel developers, because they call modern Intel/AMD and ARM CPUs amd64 and aarch64, respectively.

    • frezik@midwest.social · +21 · 1 month ago

      It was also a big surprise when Intel just gave up. The industry was settling in for a David v Goliath battle, and then Goliath said this David kid was right.

      • Buffalox@lemmy.world · +15 · 1 month ago

        Yes, I absolutely thought Intel would make their own 64-bit extension, and AMD would lose the fight.
        But maybe Intel couldn’t do that, because AMD had patented it already, and whatever Intel did could be called a copy of that.

        Anyway, it’s great to see AMD finally doing well and finally profitable. I just never expected Intel to fail as badly as they are. So unless they fight their way back to profitability, we may end up in the same boat we were in when Intel was solo on x86.

        But then again, maybe x86 is becoming obsolete, as Arm is getting ever more competitive.

        • frezik@midwest.social · +11 · 1 month ago

          Right, I think the future isn’t Intel v AMD, it’s AMD v ARM v RISC-V. Might be hard to break into the desktop and laptop space, but Linux servers don’t have the same backwards compatibility issues with x86. That’s a huge market.

          • Patch@feddit.uk · +1 · 1 month ago

            Intel as a company isn’t going anywhere any time soon; they’re just too big, with too many resources, not to do at least OK.

            They have serious challenges in their engineering approach and execution, but short of merging with someone else they’ll find their niche. For as long as x86-derived architectures remain current (i.e. if AMD is still chugging along with them) they’ll continue to put out their own chips, and occasionally they’ll manage to get an edge.

            The real question would be what happens if x86 finally ceases to be viable. In theory there’s nothing stopping Intel (or AMD) pivoting to ARM or RISC-V (or fucking POWER for that matter) if that’s where the market goes. Losing the patent/licensing edge would sting, though.

    • Valmond@lemmy.world · +13 · 1 month ago

      I hated so much that you had to choose: virtualization or overclocking. Among a lot of other forced-limitation crap from Intel.

      A bit like how cheap mobile phones had too little storage, and buying one with at least a “normal” amount bumped everything else (camera, CPU, etc.) up too, including the price of course.

  • Technus@lemmy.zip · +133 · 1 month ago

    This highlights really well the importance of competition. Lack of competition results in complacency and stagnation.

    It’s also why I’m incredibly worried about AMD giving up on enthusiast graphics. I have very little hope for Intel Arc.

    • barsoap@lemm.ee · +32 · 1 month ago

      I expect them to merge enthusiast into the pro segment: it doesn’t make sense for them to have large RDNA cards because there are too few customers, just as it doesn’t make sense for them to make small CDNA cards. In the future there’s only going to be UDNA, and the high end of gaming and the low end of professional will overlap.

      I very much doubt they’re going to do compute-only cards, as then you’re losing sales to people wanting a (maybe overly beefy) CAD or Blender or whatever workstation, just to save on some DP connectors. Segmenting the market only makes sense when you’re a (quasi-)monopolist and want to abuse that situation, that is, if you’re Nvidia.

      • bruhduh@lemmy.world · +13 · 1 month ago

        True. In simple words, AMD is moving towards versatile solutions that will satisfy corporate clients and ordinary clients with the same product. Their APUs and XDNA architecture are examples: APUs are used in the PlayStation and Xbox, XDNA and Epyc are used in data centers, and AMD is uniting its B2B and B2C merchandise to simplify manufacturing.

        • barsoap@lemm.ee · +3/-8 · 1 month ago

          I wonder, what is easier: Convincing data centre operators to not worry about the power draw and airflow impact of those LEDs on the fans, or convincing gamers that LEDs don’t make things faster?

          Maybe a bold strategy is in order: Buy cooling assemblies exclusively from Noctua, and exclusively in beige/brown.

          • bruhduh@lemmy.world · +5 · 1 month ago

            AMD making cases and fans? First time I’ve heard of that. Even boxed versions with fans could be handled the Apple way: they could start shipping only the SoCs they sell.

            • barsoap@lemm.ee · +3 · 1 month ago

              There are no non-reference designs of Radeon PROs, I think; of Instincts, even fewer. If the ranges bleed into each other they might actually sell reference designs down into the gamer mid-range, but I admit that I’m hand-waving. But if, as a very enthusiastic enthusiast, you’re buying something above the intended high-end gaming point and well into the pro region, it’s probably going to be a reference design.

              And as a side note, they’re finally selling CPUs boxed but without a fan.

              • bruhduh@lemmy.world · +1 · 1 month ago

                True, you’ve got a point, but I don’t think there are gonna be many reference designs, simply because AMD is cutting expenses as much as possible: selling their fabs in the past, simplifying the merchandise lineup now, and I guess they’ll outsource as much as possible to non-reference manufacturers as part of those expense-cutting measures. Those also include outsourcing manufacturing to TSMC in the past and open-sourcing most of their software stack so the community would step in to maintain it.

    • Alphane Moon@lemmy.world (OP) · +17/-1 · 1 month ago

      They honestly seem to be done with high-end “enthusiast” GPUs. There is probably more money/potential in iGPUs and low/mid-range products optimized for laptops.

      • Technus@lemmy.zip · +16/-3 · 1 month ago

        Their last few generations of flagship GPUs have been pretty underwhelming, but at least they existed. I’d been hoping for a while that they’d actually come up with something to give Nvidia’s xx80 Ti/xx90 a run for their money. I wasn’t really interested in switching teams just to be capped at the equivalent performance of an xx70 for $100-200 more.

        • TheGrandNagus@lemmy.world · +28 · 1 month ago

          The 6900XT/6950XT were great.

          They briefly beat Nvidia until Nvidia came out with the 3090 Ti. Even then, it was so close you couldn’t tell them apart with the naked eye.

          Both the 6000 and 7000 series have had cards that compete with the 80-class cards, too.

          The reality is that people just buy Nvidia no matter what. Even the disastrous GTX 480 outsold ATI/AMD’s cards in most markets.

          The $500 R9 290X was faster than the $1000 Titan, with the R9 290 being just 5% slower and $400, and yet AMD lost a huge amount of money on it.

          AMD has literally made cards faster than Nvidia’s for half the price and lost money on them.

          It’s simply not viable for AMD to spend a fortune creating a top-tier GPU only to have it not sell well because Nvidia’s mindshare is arguably even better than Apple’s.

          Nvidia’s market share is over 80%. And it’s not because their cards are the rational choice at the price points most consumers are buying at. It really cannot be stressed enough how much of a marketing win Nvidia is.

          • sugar_in_your_tea · +11 · 1 month ago

            Yup, it’s the classic name-brand tax. That, and Nvidia also wins on features, like RTX and AI/compute.

            But most people don’t actually use those features, so most people seem to be buying Nvidia due to brand recognition. AMD has dethroned Intel on performance and price, yet somehow Intel remains dominant on consumer PCs, though the lead is a lot smaller than before.

            If AMD wants to overtake Nvidia, they’ll need consistently faster GPUs and lower prices with no compromises on features. They’d have to invest a ton to get there, and even then Nvidia would probably sell better than AMD on name recognition alone. Screw that! It makes far more sense for them to stay competitive, suck up a bunch of the mid-range market, and transition the low-end market to APUs. Intel can play in the low-to-mid-range markets, and AMD will slot themselves in as a bit better than Intel, and a better value than Nvidia.

            That said, I think AMD needs to go harder on the datacenter for compute, because that’s where the real money is, and it’s all going to Nvidia. If they can leverage their processors to provide a better overall solution for datacenter compute, they could translate that into prosumer compute devices. High end gaming is cool, but it’s not nearly as lucrative as datacenter. I would hesitate to make AI-specific chips, but instead make high quality general compute chips so they can take advantage of whatever comes after the current wave of AI.

            I think AMD should also get back into ARM and low-power devices. The snapdragon laptops have made a big splash, and that market could explode once the software is refined, and AMD should be poised to dominate it. They already have ARM products, they just need to make low-power, high performance products for the laptop market.

            • pycorax@lemmy.world · +3 · 1 month ago

              I think AMD should also get back into ARM and low-power devices. The snapdragon laptops have made a big splash, and that market could explode once the software is refined, and AMD should be poised to dominate it. They already have ARM products, they just need to make low-power, high performance products for the laptop market.

              They don’t need to go with ARM. There’s nothing inherently wrong with the x86 instruction set that prevents them from making low power processors, it’s just that it doesn’t make sense for them to build an architecture for that market since the margins for servers are much higher. Even then, the Z1 Extreme got pretty close to Apple’s M2 processors.

              Lunar Lake has also shown that x86 can match or beat Qualcomm’s ARM chips while maintaining full compatibility with all x86 applications.

              • sugar_in_your_tea · +1 · 1 month ago

                it’s just that it doesn’t make sense for them to build an architecture for that market since the margins for servers are much higher

                Hence ARM. ARM already has designs for low power, high performance chips for smaller devices like laptops. Intel is chasing that market, and AMD could easily get a foot in the door by slapping their label on some designs, perhaps with a few tweaks (might be cool to integrate their graphics cores?). They already have ARM cores for datacenter workloads, so it probably wouldn’t be too crazy to try it out on business laptops.

          • fuckwit_mcbumcrumble@lemmy.dbzer0.com · +1/-2 · 1 month ago

            The 6000 series from AMD were so great because they picked the correct process node. Nvidia went with the far inferior Samsung 8nm node over TSMC’s 7nm. Yet Nvidia still kept up with AMD in most areas (ignoring ray tracing).

            Even the disastrous GTX 480 outsold ATI/AMD’s cards in most markets.

            The “disastrous” Fermi cards were also compute monsters. Even after the 600 series came out, people were buying the 500 series over them because they performed so much better for the money. Instead of picking up a Kepler Quadro card to get double precision, you could get a regular-ass GTX 580 and do the same thing.

        • pycorax@lemmy.world · +13 · 1 month ago

          Were the 6000 series not competitive? I got a 6950 XT for less than half the price of the equivalent 3090. It’s an amazing card.

          • vithigar@lemmy.ca · +9 · 1 month ago

            Yes, they were, and that highlights the problem really. Nvidia’s grip on mind share is so strong that AMD releasing cards that matched or exceeded at the top end didn’t actually matter and you still have people saying things like the comment you responded to.

            It’s actually incredible how quickly the discourse shifted from ray tracing being a performance hogging gimmick and DLSS being a crutch to them suddenly being important as soon as AMD had cards that could beat Nvidia’s raster performance.

          • JaY_III@lemmy.ca · +3/-1 · 1 month ago

            The 6000 series is faster in raster but slower in ray tracing.

            Reviews have primarily been pushing cards based on RT since it became available. Nvidia has a much larger marketing budget than AMD, and ever since they have been able to leverage the fact that they have the fastest ray tracing, AMD’s share has been nosediving.

      • pycorax@lemmy.world · +2 · 1 month ago

        Wouldn’t be the first time they did this, though; I wouldn’t be surprised if they jump back into the high end once they’re ready.

    • chalupapocalypse@lemmy.world · +9 · 1 month ago

      I don’t see this happening with both consoles using AMD. Honestly, I could see Nvidia going less hard on graphics and pushing more towards AI and other related stuff, and with the leaked prices for the 5000 series they are going to price themselves out of the market.

      • sunzu2@thebrainbin.org · +11/-2 · 1 month ago

        Crypto and AI hype destroyed the prices for gamers.

        I doubt we are ever going back, though.

        I am on a 5-10 year upgrade cycle now anyway. Sure, new stuff is faster, but stuff from 2 generations ago still does everything I need. New features like ray tracing are hardly even worth it. Like, sure, it is cool, but what is the actual value proposition?

        If you bought hardware for ray tracing, kinda meh.

        With that being said, local LLMs are a fun use case.

    • Buffalox@lemmy.world · +5 · 1 month ago

      Lack of competition results in complacency and stagnation.

      This is absolutely true, but it wasn’t the case regarding 64-bit x86. That was a very bad miscalculation, where Intel wanted a bigger share of the more profitable server market.
      Intel was extremely busy with profit maximization, so they wanted to sell Itanium for servers, and keep the x86 for personal computers.

      The result was of course that 32-bit x86 couldn’t compete when AMD made it 64-bit, and Itanium failed despite HP-Compaq killing the world’s fastest CPU at the time, the DEC Alpha, because they wanted to jump on Itanium instead. But Itanium frankly was an awful CPU, based on an idea they couldn’t get to work properly.

      This was not complacency, and it was not stagnation in the sense that Intel actually made real new products and tried to be innovative; the problem was that the product sucked and was too expensive for what it offered.

      Why the Alpha was never brought back, I don’t understand. As mentioned, it was AFAIK the world’s fastest CPU when it was discontinued.

      • Technus@lemmy.zip · +1 · 1 month ago

        so they wanted to sell Itanium for servers, and keep the x86 for personal computers.

        That’s still complacency. They assumed consumers would never want to run workloads capable of using more than 4 GiB of address space.

        Sure, they’d already implemented Physical Address Extension, but that just allowed the OS itself to address more memory by widening the page-table entries. It didn’t increase the virtual address space available to applications.

        The application didn’t necessarily need to use 4 GiB of RAM to hit those limitations, either. Dylibs, memory-mapped files, thread stacks, and various paging tricks all eat up the available address space without needing to be resident in RAM.
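
        A rough, Linux-specific sketch of that effect (my own illustration): address space can be reserved without committing any physical memory, and a 32-bit build (e.g. gcc -m32) runs out somewhere around the 2-3 GiB mark.

            #include <stdio.h>
            #include <sys/mman.h>

            int main(void)
            {
                const size_t chunk = 64 * 1024 * 1024;   /* reserve 64 MiB per iteration */
                size_t reserved = 0;

                for (int i = 0; i < 100000; i++) {       /* capped so it also terminates on 64-bit builds */
                    /* PROT_NONE + MAP_NORESERVE: no RAM is committed, only address space is used up */
                    void *p = mmap(NULL, chunk, PROT_NONE,
                                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
                    if (p == MAP_FAILED)
                        break;
                    reserved += chunk;
                }
                printf("reserved %zu MiB of address space\n", reserved / (1024 * 1024));
                return 0;
            }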

    • kubica@fedia.io · +4 · 1 month ago

      Even successful companies take care not to put all their eggs in one basket in anything they do. Having alternatives is a lifesaver. We should ensure that we have alternatives too.

  • TimeSquirrel@kbin.melroy.org · +65 · 1 month ago

    This is like Kodak inventing the digital camera and then sitting on it for the next 20 years. Because it doesn’t use film. And Kodak is film.

    • Buffalox@lemmy.world · +20/-3 · 1 month ago

      This is not entirely fair. Kodak invested a lot in digital photography; I personally bought a $1500 Kodak digital camera around 2002.
      But Kodak could not compete with Canon and the other Japanese makers.

      To claim Kodak could have made more successful cameras earlier is to ignore the fact that the technology to make the sensors simply wasn’t good enough early on, and it would never have been an instant hit for whoever came first to market. Early cameras lacked badly in light sensitivity, dynamic range, and sharpness/resolution. This was due to limitations in even world-leading CMOS production capabilities back then; it simply wasn’t good enough, and claiming Kodak should have had the capability to leapfrog everybody doesn’t make it true.

      To claim Kodak could have beaten, for instance, Canon and Sony is to ignore the fact that those were companies with way more experience in the technologies required to refine digital photography.

      Even with the advantage of hindsight, I don’t really see a path that would have rescued Kodak. Just like typesetting is dead, and there is no obvious path by which a typesetting company could have survived.

      • barsoap@lemm.ee · +11 · 1 month ago

        Kodak isn’t dead, they’re just not dominating the imaging industry any more. They even multiplied: there’s now Kodak Alaris in addition to the original Kodak.

        Between them they still dominate analogue film, which still has its uses, and it could even be said that if they hadn’t tried to get into digital they might’ve averted bankruptcy.

        There’s also horse breeders around which survived the invention of the automobile, and probably also a couple that didn’t because their investments into car manufacturing didn’t pan out. Sometimes it’s best to stick to what you know while accepting that the market will shrink. Last year they raised prices for ordinary photography film because they can’t keep up with demand; their left-over factories are running 24/7.

        • sugar_in_your_tea · +6 · 1 month ago

          Sometimes it’s best to stick to what you know while accepting that the market will shrink

          I argue it’s always best to do that. A company dying doesn’t mean it failed, it just means it fulfilled its purpose. Investors should leave, not because the company is poorly run, but because other technologies are more promising. These companies shouldn’t go bankrupt, but merely scale back operations and perhaps merge with other companies to maintain economies of scale.

          I honestly really don’t like companies that try to do multiple things, because they tend to fail in spectacular ways. Do what you’re good at, fill your niche as best you can, and only expand to things directly adjacent to your core competency. If the CEO sees another market that they can capture, then perhaps the CEO should leave and go start that business, not expand the current business into that market.

        • Deluxe0293@infosec.pub · +2 · 1 month ago

          As a former TKO on the NexPress series, don’t sleep on Kodak’s presence in the commercial print manufacturing industry either. I would love to still be on the shop floor and have the opportunity to run the Prosper inkjet web press.

        • Buffalox@lemmy.world · +2 · 1 month ago

          it could even be said that if they hadn’t tried to get into digital they might’ve averted bankruptcy.

          Now there’s an interesting thought. ;)

          There’s also horse breeders around which survived the invention of the automobile,

          Exactly, and retro film photography is making a comeback. Kind of like Vinyl record albums.

      • CosmicTurtle0@lemmy.dbzer0.com · +8 · 1 month ago

        That may be true, but let’s not ignore the huge profit motive for Kodak to keep people on film. That was their money maker.

        They had an incentive to keep that technology out of the consumer market.

        • Buffalox@lemmy.world · +4 · 1 month ago

          They absolutely did, but they knew they couldn’t do that forever, because Moore’s law applies to CMOS too. Film photography would end as a mainstream product, so they actually tried to compete in digital photography, scanners, and photo printing.
          But their background was in chemical photo technologies, and they couldn’t turn their know-how in that into an advantage with the new technologies, even with the research they’d done and the strong brand recognition.

          • HakFoo@lemmy.sdf.org · +1 · 1 month ago

            Fujifilm successfully repositioned towards other chemistry. I know there’s that Eastman spinoff but why wasn’t it as successful?

            • Buffalox@lemmy.world · +1 · 1 month ago

              Yes, but Fuji branched out way earlier and was already huge in storage media in the early ’80s.
              No doubt Fuji has done better. Fuji is a conglomerate of more than 200 branches.

    • golli@lemm.ee · +15 · 1 month ago

      The concept you are describing is called the Innovator’s Dilemma, and IMO the most recent example of it happening is legacy car manufacturers missing the EV transition because it would eat into their margins from ICE vehicles. But I am not sure if this is a good example of it.

      However, IMO it seems like a great example of what Steve Jobs describes in this video about the failure of Xerox: namely that in a monopoly position, marketing people drive product people out of the decision-making forums. Which seems exactly the case here, where the concerns of an engineer were overruled by the higher-ups because it didn’t fit within their product segmentation.

    • mindbleach · +2 · 1 month ago

      Closer to RCA developing video on vinyl records in the mid-1960s and then playing with chemical formulas until after VHS launched. They had the right idea - they knew it’d be a big deal - it was totally within their interests - and they still let themselves get scooped. Repeatedly, in RCA’s case, since Laserdisc and Betamax also beat them to market.

      We came this close to Star Trek TOS episodes getting a home release as they aired. All they had to do was settle for half an hour per side. Y’know. Like any other vinyl record.

      • ayyy · +1 · 1 month ago

        This is a bit revisionist. They spent so much time tinkering because they just couldn’t get the costs to be reasonable enough that people would actually buy it.

        • mindbleach · +4 · 1 month ago

          The discs produced a television signal directly, and were pressed the same way as any LP. Complexity and costs ballooned because RCA demanded comically small grooves. At an hour per side, any speck of dust would cause skipping, so the final product had a ridiculous semi-mechanized caddy system. This is despite tremendous difficulty even making grooves that small, with their metallic vinyl formula, let alone placing the zillion tiny pits which encode the signal.

          Compromising on play time would have relaxed all of those tolerances. The minimum-bullshit version of this technology would work like any other phonograph - disc spins on platter, arm follows groove, amplifier does its thing. There’s no laser tracking. There’s no decoding. There’s no serpentine tape transport. There’s no helical-scan witchcraft. It’s a higher frequency of wiggly line.

          All of this should have been dirt cheap. More electric than electronic. But in the absence of competition, they wanted to get it Just Right, and spent twenty years solving problems instead of avoiding problems. Minimum viable products change the world.

  • JakenVeina@lemm.ee · +2 · 1 month ago

    I decided to split the difference, by leaving in the gates, but fusing off the functionality. That way, if I was right about Itanium and what AMD would do, Intel could very quickly get back in the game with x86. As far as I’m concerned, that’s exactly what did happen.

    I’m sure he got a massive bonus for this decision, when all the suits realized he was right and he’d saved their asses. /s