• AutoTL;DR (bot)
    10 months ago

    This is the best summary I could come up with:


    Naveen Rao, a neuroscientist turned tech entrepreneur, once tried to compete with Nvidia, the world’s leading maker of chips tailored for artificial intelligence.

    At a start-up that the semiconductor giant Intel later bought, Mr. Rao worked on chips intended to replace Nvidia’s graphics processing units, which are components adapted for A.I.

    On Wednesday, Nvidia — which has surged past $1 trillion in market capitalization to become the world’s most valuable chip maker — is expected to confirm those record results and provide more signals about booming A.I.

    Nvidia's chief executive, Jensen Huang, announced software technology called CUDA, which helped program the GPUs for new tasks, turning them from single-purpose chips into more general-purpose ones that could take on other jobs in fields like physics and chemical simulations (see the sketch after this summary).

    Pricing “is one place where Nvidia has left a lot of room for other folks to compete,” said David Brown, a vice president at Amazon’s cloud unit, arguing that its own A.I.

    He has also started promoting a new product, Grace Hopper, which combines GPUs with internally developed microprocessors, countering chips that rivals say use much less energy for running A.I.


    The original article contains 1,453 words, the summary contains 184 words. Saved 87%. I’m a bot and I’m open source!
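
    For context on the CUDA point above: here is a minimal sketch of what general-purpose GPU programming looks like. Vector addition is the canonical first CUDA program; the kernel syntax and runtime calls below are real CUDA, but the example is illustrative and not taken from the article.

    ```cuda
    #include <cstdio>
    #include <cuda_runtime.h>

    // Each GPU thread computes one element; this thread-per-element model
    // is the core idea that turned graphics chips into general-purpose
    // parallel processors.
    __global__ void vector_add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;            // ~1M floats
        size_t bytes = n * sizeof(float);

        float *a, *b, *c;
        cudaMallocManaged(&a, bytes);     // unified memory, visible to CPU and GPU
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vector_add<<<blocks, threads>>>(a, b, c, n);
        cudaDeviceSynchronize();          // wait for the GPU to finish

        printf("c[0] = %.1f\n", c[0]);    // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }
    ```

    The same pattern of launching thousands of lightweight threads is what let GPUs take on physics and chemistry simulations, and later neural networks, rather than only rendering graphics.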

  • @PeterPoopshit
    10 months ago

    It seems like self-hosted AI always needs at least 13 GB of VRAM. Any GPU with more than 12 GB of VRAM conveniently costs something like $1k per GB for every GB beyond 12 (sort of like how any boat longer than 18 feet usually costs $10k per foot for every foot of length beyond 18 ft). There are projects that do it all on CPU, but still, AI GPU pricing is bullshit.
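
    For what it's worth, that 13 GB figure lines up with the arithmetic for a quantized 13B-parameter model: weights alone need roughly params × bytes-per-parameter, so 13B parameters come out to ~26 GB at fp16, ~13 GB at 8-bit, and ~6.5 GB at 4-bit, before activation and KV-cache overhead. Here's a rough sketch of that math (the 13B model is an assumed example, not something stated in the thread), plus a real CUDA runtime call to check how much VRAM a card actually has:

    ```cuda
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        // Assumed example: a 13B-parameter model (hypothetical, not from the thread).
        const double params = 13e9;

        // Bytes per parameter at common precisions; weights only,
        // ignoring activation and KV-cache overhead.
        printf("13B @ fp16: %.1f GB\n", params * 2.0 / 1e9);   // ~26.0 GB
        printf("13B @ int8: %.1f GB\n", params * 1.0 / 1e9);   // ~13.0 GB
        printf("13B @ int4: %.1f GB\n", params * 0.5 / 1e9);   // ~ 6.5 GB

        // cudaMemGetInfo is a standard CUDA runtime call that reports
        // free and total device memory in bytes.
        size_t free_b = 0, total_b = 0;
        if (cudaMemGetInfo(&free_b, &total_b) == cudaSuccess) {
            printf("This GPU: %.1f GB free of %.1f GB\n",
                   free_b / 1e9, total_b / 1e9);
        }
        return 0;
    }
    ```

    Which is why an 8-bit 13B model just barely misses a 12 GB card, and why CPU-only projects like llama.cpp sidestep the problem by using ordinary system RAM instead.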