Meta on Tuesday announced the release of Llama 3.1, the latest version of its large language model that the company claims now rivals competitors from OpenAI and Anthropic. The new model comes just three months after Meta launched Llama 3 by integrating it into Meta AI, a chatbot that now lives in Facebook, Messenger, Instagram and WhatsApp and also powers the company’s smart glasses. In the interim, OpenAI and Anthropic already released new versions of their own AI models, a sign that Silicon Valley’s AI arms race isn’t slowing down any time soon.

Meta said that the new model, called Llama 3.1 405B, is the first openly available model that can compete against rivals in general knowledge, math skills and translation across multiple languages. The model was trained on more than 16,000 NVIDIA H100 GPUs, currently the fastest available chips at roughly $25,000 each, and outperforms rivals across more than 150 benchmarks, Meta claimed.

  • j4k3@lemmy.world · 5 months ago

    I’m not sure what entities, motivations, qualifications, or connections underpin Lex Fridman and his podcast/YouTube channel, but he has interviewed many people in AI, including Zuckerberg, Altman, and Musk. His interviews with Yann LeCun, the chief AI scientist at Meta, are quite interesting. His longer interviews are much better for conveying a lay-of-the-land overview. A little clip doesn’t do justice to the overall points taken in context, but telling you to go watch an hour-long interview to get the answer directly doesn’t work either.

    https://www.youtube.com/watch?v=fshIOoTo40E

    This is a 4-minute clip of LeCun saying, basically, that releasing the model openly doesn’t hurt anyone. He’s essentially implying it will hurt OpenAI or any proprietary competitor.

    I was trying to find the interview where Lex and Yann talk about the leaked Google memo from last year, because that one is really good, but YouTube seems to be intentionally burying it in search results.

    IIRC, in that one, LeCun was saying something to the effect that the only way people can really trust AI is with transparency, and that requires open source as a foundation. Using something like OpenAI in business is insane: you’re basically selling every aspect of your company to Altman for peanuts. Likewise with personal use. It’s like your lifelong psychiatrist opening a few side businesses as a political analyst, insurance broker, banker, and healthcare insurance provider, while working nights as a judge, all while you’re asked to sign away any privacy or confidentiality.

    Models turn human language and culture into a statistical math problem with far better than 50% probabilities in nearly any aspect of human existence. If you ask a model for a profile of Name-1, it will tell you all kinds of seemingly unrelated things about the person. The more you interact, the more accurate this profile becomes, even in areas that make no sense, have no logical association, and were never part of the conversation. It is the key to manipulating people unlike any other tool in history. That is why open source, offline AI is the only sensible way to use AI.