Summary: Meta, led by CEO Mark Zuckerberg, is investing billions in Nvidia’s H100 graphics cards to build a massive compute infrastructure for AI research and projects. By end of 2024, Meta aims to have 350,000 of these GPUs, with total expenditures potentially reaching $9 billion. This move is part of Meta’s focus on developing artificial general intelligence (AGI), competing with firms like OpenAI and Google’s DeepMind. The company’s AI and computing investments are a key part of its 2024 budget, emphasizing AI as their largest investment area.

  • 31337OP
    11 months ago

    Spending definitely looks exponential at the moment.

    Most breakthroughs have historically been made by university researchers, then put into use by corporations, arguably including most of the latest developments. But university researchers were never going to get access to the $100 million in compute time needed to train something like GPT-4, lol.

    The human brain has 100 trillion connections. GPT-4 has 1.76 trillion parameters (which are analogous to connections). It took 25k GPUs to train, so in theory, I guess it could be possible to train a human-like intelligence using 1.4 million GPUs. Transformers (the T in GPT) are not like human brains though. They “learn” once, then do not learn or add “memories” while they’re being used. They can’t really do things like planning either. There are algorithms for “lifelong learning” and planning, but I don’t think they scale to such large models, datasets, or real-world environments. I think a lot of theoretical breakthroughs are needed to make AGI possible, and I’m not sure if more money will help that much. I suppose AGI could be achieved by trial and error (i.e. trying ideas and testing whether they work, without mathematically proving if or how well they’d work) instead of rigorous theoretical work.
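    The back-of-the-envelope arithmetic above works out roughly like this (every number here is a rough public estimate from the comment, not a measured fact):

```python
# Back-of-the-envelope scaling estimate; all figures are rough,
# unverified public estimates, not measured values.
brain_connections = 100e12   # ~100 trillion synaptic connections
gpt4_parameters = 1.76e12    # rumored GPT-4 parameter count
gpt4_train_gpus = 25_000     # rumored GPU count used to train GPT-4

# Naive assumption: GPUs needed scale linearly with parameter count.
scale = brain_connections / gpt4_parameters      # about 57x
gpus_for_brain_scale = gpt4_train_gpus * scale   # about 1.4 million GPUs
print(f"~{gpus_for_brain_scale / 1e6:.1f} million GPUs")
```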

    • Wanderer@lemm.ee
      11 months ago

      Interesting. Thanks for posting.

      So you’re saying we might see something like 1/10 of a human brain (obviously I understand that’s a super rough estimate) next year.

      This is the first I’ve heard about GPT not learning. So if I interact with ChatGPT, it’s effectively a finished product, and it will stay like that forever, even if it’s wrong and I correct it multiple times?

      This is where I’m really confused by the analogy. If GPT is not really close to a human brain, how is it able to interact with so many people at once? I couldn’t hold 3 conversations, never mind a million, yet my brain power is much, much higher than GPT’s. Couldn’t it just talk to 1 person and be smarter, since it could use all the computing power for that 1 conversation?

      • 31337OP
        11 months ago

        Correct, when you talk to GPT, it doesn’t learn anything. If you’re having a conversation with it, every time you press “send,” it sends the entire conversation back to GPT, so within a conversation it can be corrected, but it remembers nothing from previous conversations. If a conversation becomes too long, it will also start forgetting stuff (GPT has a limited input length, called the context length). OpenAI does periodically update GPT, but yeah, each update is a finished product. They are very much not “open,” but they probably don’t do a full training run between each update. They probably carefully do some sort of “fine-tuning” along with reinforcement learning from human feedback, plus some more tricks to massage the model a bit while preventing catastrophic forgetting.
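        The stateless “send the whole conversation every time” behaviour can be sketched like this (a toy sketch: `generate` is a made-up stand-in for the model, and the tiny context length is just for illustration):

```python
# Toy sketch of a stateless chat loop: the model keeps no memory between
# calls, so the client resends the entire conversation every time.
# `generate` is a made-up stand-in; the context length is tiny on purpose.
CONTEXT_LENGTH = 4  # pretend the model can only "see" the last 4 messages

def generate(visible_history):
    # A real model would predict the next tokens from this text;
    # this stand-in just reports how much of the conversation it can see.
    return f"(reply based on {len(visible_history)} visible messages)"

history = []
for user_msg in ["hi", "no, that's wrong", "ok, what about X?"]:
    history.append(user_msg)
    # Only the most recent messages fit in the context window;
    # anything older is silently "forgotten" by the model.
    visible = history[-CONTEXT_LENGTH:]
    history.append(generate(visible))
```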

        Oh yeah, the latency of signals in the human brain is much, much slower than the latency of semiconductors. Forgot about that. That further muddies the very rough estimates. Also, there are multiple instances of GPT running; I’m not sure how many. It’s estimated that each instance “only” requires 128 GPUs during inference (responding to chat messages), as opposed to 25k GPUs for training. During training, the model needs to process multiple training examples at the same time for various reasons, including to speed up training, so more GPUs are needed. You could also think of it as training multiple instances at the same time, but combining what’s “learned” into a single model/neural network.
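        Plugging in those (very rough, rumored) numbers gives a feel for the training-vs-inference gap:

```python
# Both numbers are rough public rumors, not confirmed figures.
train_gpus = 25_000      # GPUs reportedly used to train GPT-4
gpus_per_instance = 128  # rough estimate per serving (inference) instance

# The training cluster could instead host roughly this many chat-serving
# instances, which is one way to see how much cheaper inference is:
instances = train_gpus // gpus_per_instance
print(instances)  # 195
```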

        • Wanderer@lemm.ee
          11 months ago

          This is really cool. Thanks for taking the time. Confusing but the good kind.

          I’m just using this info to try and extrapolate.

          I understand the growth of moores law and such. But the efficiency I was talking about seems almost like 1 exponential jump on an exponential curve.

          Let’s just say, for argument’s sake, that Meta makes AGI next year with 350,000 GPUs; it would then only need 2,000 GPUs to make use of what it’s built. That’s pretty mind-boggling. That really is singularity sort of talk.

          So in your mind, AGI when? And ASI when? Are you working in this field?

          • 31337OP
            11 months ago

            Yeah, those GPU estimates are probably correct.

            I specialized in ML during grad school, but only recently got back into it and started keeping up with the latest developments. I started working at a startup last year that uses some AI components (classification models, generative image models, nothing nearly as large as GPT though).

            Pessimistic about the AGI timeline :) Though I will admit GPT caught me off guard. I never thought a model simply trained to predict the next word in a sequence of text would be capable of what GPT is (that’s all GPT does, BTW: it takes a sequence of text and predicts what the next token should be, repeatedly). I’m pessimistic because, AFAIK, there isn’t really an ML/AI architecture or even a good theoretical foundation that could achieve AGI. Perhaps actual brain simulation could, but I’m guessing that would be very inefficient. My wild-ass guess is AGI in 20 years if interest and money stay consistent. Then ASI maybe a year after, because you could use the AGI to build ASI (the singularity concept). Then the ASI will turn us into blobs that cannot scream, because we won’t have mouths :)
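            That “predict the next token, repeatedly” loop is really all there is to generation; here’s a toy version (with a fake `predict_next_token` standing in for the actual neural network):

```python
# Toy autoregressive generation loop: the model only ever does one thing,
# predict the next token given everything so far. `predict_next_token`
# is a fake stand-in for the real neural network.
def predict_next_token(tokens):
    # A real model returns a probability distribution over a vocabulary;
    # this toy version just walks through a canned reply.
    canned = ["Hello", ",", " world", "<eos>"]
    return canned[len(tokens) % len(canned)]

tokens = []  # the prompt would normally go here
while True:
    next_token = predict_next_token(tokens)
    if next_token == "<eos>":  # a stop token ends generation
        break
    tokens.append(next_token)

print("".join(tokens))  # Hello, world
```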

            • Wanderer@lemm.ee
              11 months ago

              Yea, I had a feeling it was still a long way away. At least the media will get bored of it in a year, and only the big breakthroughs will make the news.

              But I think there will still be a lot of “stupid” yet impressive developments like GPT. It appears smart but isn’t that smart. I’m sure there will be other things like it.

              It’s the same as with manufacturing developments. Only now are we beginning to build things approaching the complexity of a human in limited functions. But that doesn’t mean the machines we have built haven’t put millions of people out of work; we just changed manufacturing to better utilise the stupid things they can do much faster and more accurately than we can, and made a better product because of it.

              I found out about a year ago that we couldn’t make a Saturn V rocket now even if we had all the money in the world. The ability of man has been lost. The way they did the machining of the rockets, and the welding, and things like that: no one alive has that ability anymore. Robots can’t do it either. But the rockets we make now are more accurate than the ones made in the ’60s. It’s just done differently.

      • Miaou@jlai.lu
        11 months ago

        You’re confused by the analogy because it’s a shitty one. If we wanted to reproduce the behaviour of a human, we would invest in medicine, not computer science.