• casual_turtle_stew_enjoyer
    7 months ago

    Ok, so you’ve got some good experience with LLMs, but I’d be careful about characterizing yourself as an expert-- if anything, the term fits the LLM-based assistant stacks we build better than it fits us, and in this field “expert” is now strongly associated with architecture jargon like MoE (mixture of experts) anyway.

    And I’m not trying to gatekeep expertise here; I’m just warning you that traditional AI scholars don’t respect the minuscule niche we’d otherwise call ourselves experts in, and for good reason. These models are essentially text predictors on steroids, and the crux of working with them is knowing when you can depend on them and when you can’t-- learning their behaviors and what to expect from them.

    Also, if you haven’t already, grab Vicuna 33B (the original from lmsys) and compare it to the models mentioned. I think you’ll find it behaves surprisingly differently from the others, in a very intriguing way-- it was the first and only one to truly shock me.
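    For the comparison itself, a little side-by-side harness helps more than eyeballing one chat window at a time. This is just a sketch: the model labels and the `generate` callables are stand-ins, so wire in whatever local inference wrapper you actually use (llama.cpp bindings, a transformers pipeline, an API client, etc.).

    ```python
    # Minimal side-by-side prompt comparison harness (a sketch -- the model
    # names and generate() callables below are hypothetical placeholders).

    def compare_models(prompt, models):
        """Run one prompt through several models.

        models: dict mapping a display label to a callable prompt -> completion.
        Returns a dict of label -> completion text.
        """
        return {label: generate(prompt) for label, generate in models.items()}

    if __name__ == "__main__":
        # Replace these lambdas with real inference calls for the models
        # you want to compare (e.g. a local Vicuna 33B vs. something else).
        models = {
            "vicuna-33b": lambda p: "(vicuna reply here)",
            "other-model": lambda p: "(other reply here)",
        }
        for label, reply in compare_models("Describe yourself.", models).items():
            print(f"--- {label} ---\n{reply}\n")
    ```

    Keeping the prompt fixed and diffing the replies is the quickest way to see the behavioral quirks I’m talking about.
    
    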

    Also: avoid accelerationists, for your own good. e/acc is a meme started by some 4chan robots, and anyone whose opinion matters will dismiss you for associating with them, just as we’ve dismissed Altman for that and other reasons.