• wagesj45@kbin.social · 1 year ago

    I have a feeling this is going to go similarly to Stable Diffusion’s big 2.0 flop. SD imposed its limits through the training data; Meta imposed its limits via terms and conditions. The end result for both will be the same: the community gravitates toward whatever comes with the most freedom attached. The most annoying part of the TOS is that you can’t use the output to improve other models.

    Fuck you Meta, I wanna make a zillion baby specialist models.

    • rufus@discuss.tchncs.de · 1 year ago

      Well, I’ve had other arguments about OpenAI prohibiting use of its output to improve other models… I’m not sure. My sense of what’s right and wrong kind of contradicts Meta or OpenAI using copyrighted content to train their models and then claiming copyright themselves, banning me from using their output for the same purpose.

      • wagesj45@kbin.social · 1 year ago

        Good point. I think I’ll do whatever I want with it and just keep my trap shut. Good luck proving anything, Zuck.

  • Naked_Yoga · 1 year ago (edited)

    I used it and was not impressed… I found Wizard LM to be far superior.

    Also, I agree with @wagesj45 up there about training other models… but how would they even detect that you’re training other models with it? I think one of the best things you can do with a large model is use it to train a small specialist model.

  • noneabove1182M · 1 year ago

    People may not love the model or its outputs, but it’s hard to deny the impact releases like this have on the open-source community. Such a positive bonus, and I’m really happy they’re continuing.