• 0 Posts
  • 6 Comments
Joined 1 year ago
Cake day: October 25th, 2023




  • titanking4@alien.top to Intel@hardware.watch · Did anyone here go AMD?
    1 year ago

    God, I love that both of these companies are basically tied in performance and value.

    Socket lifespan is actually only relevant if you plan on upgrading within that lifespan.

    If you buy into the 7800X3D right now, you will almost certainly have Zen5 to upgrade to, with a high likelihood of Zen6 as well.
    Past Zen6, however, it's a mystery; I wouldn't count on it.
    Which means that if you see yourself upgrading within 2 generations (skipping one generation), then the path makes sense.

    But your current CPU is a 9700K and you are going 5 generations in one swoop. Even if you discount 10th and 11th gen for being lacklustre over 9th, you're still jumping 3 gens.

    If it’s cheaper then go for it, but I’d say to stay away from the 7900X3D and 7950X3D as the multi-die X3D scheduling isn’t always correct.
    AM5 also has a lot more weirdness with DDR5 and memory overall, but the nice thing about the X3D parts is that they are less sensitive to memory performance because of the massive cache.

    Being able to get your hands on a future Zen6 part also sounds pretty good too.


  • Yes, because what Nvidia is doing isn't super special. Of course AMD will have an equivalent or better solution, so the real question is how many years behind AMD will be.

    They closed the gap significantly in raster perf. Power efficiency is pretty close, and so is area efficiency. AI is mostly a software problem, and AMD aren't blind to this; they are very clearly investing a ton more into software to close this gap. (They just bought Node AI and absorbed all their talent.)

    The hardware is arguably better in many aspects. MI200 and MI250 are HPC monsters, and MI300 is a chiplet packaging masterpiece that has HPC performance on lockdown.

    There’s a reason no new HPC supercomputers are being announced with Nvidia GPUs.

    Nvidia has the lead in AI; AMD has the lead in HPC. Nvidia has the lead in area efficiency; AMD has the lead in packaging expertise (which means they can throw a ton more area at the problem at the same cost as Nvidia).


  • M1 kinda was the “second coming of Christ” in regards to many efficiency metrics in the notebook space.

    Its idle efficiency and power usage in tasks like local video playback or Microsoft Teams completely set a new bar for the likes of Intel and AMD. Both of them still haven't matched the original M1 in those efficiency metrics.

    They certainly caught up in performance metrics and perf/watt under load, but not in the “lowest power usage possible” metric.

    Even Apple's LPDDR memory PHY is more efficient than Intel's or AMD's, because Apple is bigger than both of them and has THE best low-power engineers on the planet.

    The CPU cores Apple makes are great, but they are quite large area-wise, and Intel and AMD can compete pretty well at making a core.

    Their SoCs are best in class, however. When you set the bar that high with M1, there isn't really all that much room left to improve in the SoC, and what's left are the cores themselves, where Apple will be innovating at a pace similar to AMD and Intel.

    M1 was Apple's equivalent to AMD's Zen1: a fresh-start product where every low-hanging fruit was implemented, resulting in massive gains over its predecessor.