• deranger · 2 days ago

    I thought the innovative part was using more efficient code, not what it’s trained on.

      • Sanctus@lemmy.world · 2 days ago

        Yeah, the original comment in this chain describes US telcos and shit more than this particular instance.

    • Fungah@lemmy.world · edited · 1 day ago

      That’s basically what they said.

      Like, you can compile better or more diverse datasets to train a model on, but you can also have better training code run over the same dataset.

      The model is what the code poops out after it’s eaten the dataset. I haven’t read the paper, so I have no idea whether the better training came from some super unique spin on their dataset, but I’m assuming it’s better code (a rough sketch of the distinction is below).
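
      To make that dataset-versus-training-code distinction concrete, here is a minimal, purely illustrative sketch (assuming PyTorch; the model, optimizers, and hyperparameters are invented for the example and are not taken from the paper under discussion). Both loops consume the identical dataset; the "model" is just whatever weights the training code produces, so either ingredient can be improved on its own.

      ```python
      # Minimal, illustrative sketch (PyTorch assumed): the "model" is whatever
      # weights fall out of running training code over a dataset, so you can
      # improve the data, the code, or both. Nothing here comes from the paper.
      import torch
      from torch import nn, optim

      def make_dataset(n: int = 32):
          """Toy regression batches; both loops below see this exact same list."""
          torch.manual_seed(0)
          return [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(n)]

      def train_baseline(dataset, steps: int = 200) -> nn.Module:
          """Plain loop: dataset + this code -> model."""
          model = nn.Linear(4, 1)
          opt = optim.SGD(model.parameters(), lr=0.01)
          loss_fn = nn.MSELoss()
          for step in range(steps):
              x, y = dataset[step % len(dataset)]
              opt.zero_grad()
              loss_fn(model(x), y).backward()
              opt.step()
          return model

      def train_better_code(dataset, steps: int = 200) -> nn.Module:
          """Same dataset, 'better code': a different optimizer plus an LR schedule."""
          model = nn.Linear(4, 1)
          opt = optim.AdamW(model.parameters(), lr=0.01)
          sched = optim.lr_scheduler.CosineAnnealingLR(opt, T_max=steps)
          loss_fn = nn.MSELoss()
          for step in range(steps):
              x, y = dataset[step % len(dataset)]
              opt.zero_grad()
              loss_fn(model(x), y).backward()
              opt.step()
              sched.step()
          return model

      data = make_dataset()
      baseline = train_baseline(data)       # dataset -> code -> model
      improved = train_better_code(data)    # same dataset, different code
      ```

      In this toy, the second loop only swaps the optimizer and learning-rate schedule; in practice "better code" could also mean things like mixed precision, better parallelism, or memory-efficient attention, which change the training cost without touching the dataset at all.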