They still consider it a beta but there we go! It’s happening :D

  • @[email protected] · 1 year ago

    Is there any reason why support for loading both formats cannot be included within GGML/llama.cpp directly? As I understand it, the new format is basically the same as the old format but with extra metadata around the outside, and I don’t see any reason why adding support for the new format necessitates removing support for the old one, since the way the actual model weights are stored is not substantially different (if at all?).

    This would allow people to continue loading their existing models without needing to convert them, or redownload them after waiting for someone else to convert them. The old format could be deprecated and eventually removed in a later release once people have had time to convert or once it becomes inconvenient to continue supporting it.
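    Supporting both formats side by side would mostly come down to dispatching on the file magic before handing off to the right loader. A minimal sketch of that idea follows; the enum and function names are invented for illustration, but the magic values are the ones the formats actually use (GGUF files begin with the bytes "GGUF", i.e. 0x46554747 as a little-endian u32, while the legacy files used the 'ggml'/'ggmf'/'ggjt' magics):

    ```c
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical dispatcher sketch -- names are illustrative,
       not llama.cpp's actual API. */
    typedef enum { FMT_GGUF, FMT_LEGACY_GGML, FMT_UNKNOWN } model_format;

    /* Read a u32 from the buffer as little-endian, the byte order
       these formats are written in. */
    static uint32_t read_u32le(const uint8_t *p) {
        return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
               ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
    }

    static model_format detect_format(const uint8_t *buf, size_t len) {
        if (len < 4) return FMT_UNKNOWN;
        uint32_t magic = read_u32le(buf);
        if (magic == 0x46554747u) return FMT_GGUF;  /* "GGUF" */
        if (magic == 0x67676d6cu ||                 /* 'ggml' (unversioned) */
            magic == 0x67676d66u ||                 /* 'ggmf' */
            magic == 0x67676a74u)                   /* 'ggjt' */
            return FMT_LEGACY_GGML;
        return FMT_UNKNOWN;
    }
    ```

    The detection itself is trivial; as the reply below points out, the real cost is keeping two full loading paths alive behind that branch.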

    • @Kerfuffle · 1 year ago

      Is there any reason why support for loading both formats cannot be included within GGML/llama.cpp directly?

      It could be (and I bet koboldcpp and maybe other projects will take that route). There absolutely is a disadvantage to dragging around a lot of legacy stuff for compatibility. llama.cpp/ggml’s approach has pretty much always been to favor rapid development over compatibility.

      As I understand it, the new format is basically the same as the old format

      I’m not sure that’s really accurate. There are significant differences in how the model vocabulary is handled, for instance.

      Even if that were true right now, it will likely stop being true soon after the very first version of GGUF is merged, as GGUF evolves and the features it enables start seeing real use. Having to maintain compatibility with the old GGML format would make iterating on GGUF and adding new features more difficult.
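      For context on why the formats are expected to diverge: GGUF puts a self-describing header plus typed key/value metadata in front of the tensor data, so things like vocabulary handling can change without breaking the file layout. Below is a rough sketch of parsing that fixed header, assuming little-endian data; the struct and function names are illustrative, and the exact integer widths have varied across GGUF versions (early versions used 32-bit counts), so treat this as a sketch rather than the spec:

      ```c
      #include <stdint.h>

      /* Illustrative layout of the fixed GGUF header; the typed
         key/value metadata entries follow it in the file. */
      typedef struct {
          uint32_t magic;              /* "GGUF" (0x46554747) */
          uint32_t version;            /* bumped on breaking changes */
          uint64_t tensor_count;
          uint64_t metadata_kv_count;  /* number of key/value pairs */
      } gguf_header;

      static uint32_t read_u32le(const uint8_t *p) {
          return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
                 ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
      }

      static uint64_t read_u64le(const uint8_t *p) {
          return (uint64_t)read_u32le(p) |
                 ((uint64_t)read_u32le(p + 4) << 32);
      }

      /* Parse the 24-byte fixed header from the start of a file. */
      static gguf_header parse_header(const uint8_t *buf) {
          gguf_header h;
          h.magic             = read_u32le(buf);
          h.version           = read_u32le(buf + 4);
          h.tensor_count      = read_u64le(buf + 8);
          h.metadata_kv_count = read_u64le(buf + 16);
          return h;
      }
      ```

      Because unknown metadata keys can simply be skipped, readers tolerate additions without a format break, which is exactly the kind of evolution a single-format codebase can pursue freely.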