We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B and 34B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B and 13B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 53% and 55% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
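Since the abstract highlights infilling (generating code conditioned on both the surrounding prefix and suffix, supported by the 7B and 13B Code Llama and Code Llama - Instruct variants), here is a minimal sketch of how such a prompt is typically assembled. The `<PRE>/<SUF>/<MID>` token spellings and spacing below follow the commonly documented layout for Code Llama infilling, but you should verify them against your tokenizer before relying on them:

```python
def build_infilling_prompt(prefix: str, suffix: str) -> str:
    """Assemble a prefix-suffix-middle infilling prompt.

    The model is expected to generate the "middle" that fits between
    prefix and suffix. Token spellings are an assumption here; check
    your tokenizer's special tokens.
    """
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

# Example: ask the model to fill in a function body.
prompt = build_infilling_prompt(
    prefix="def fib(n):\n",
    suffix="\nprint(fib(10))",
)
```

The model's output would then be inserted between the prefix and suffix in the editor.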

  • @[email protected]
    10 months ago

    This should be interesting to play with. Does anyone know of any Copilot-like VS Code extensions that provide a similar UX but hook into a custom local or remote server? I would love to write my own pipeline and context builder for it, but I haven’t written a VS Code extension before, so a starting point would help.

    I haven’t worked with any of the current open code models, so I don’t really know how this compares; I’m excited to hear thoughts from others.
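One way to decouple the editor side from the model side is to put the pipeline and context builder behind a small local HTTP server, and have the extension only POST the surrounding text and insert the response. Below is a minimal sketch using only the Python standard library; the `/complete` endpoint shape (prefix/suffix in, completion out) is a hypothetical contract I made up for illustration, and the model call is stubbed:

```python
# Hypothetical local completion server a custom editor extension could
# call instead of a hosted backend. The endpoint contract is an
# assumption, not any real extension's API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def generate_completion(prefix: str, suffix: str) -> str:
    # Stub: swap in a real call to a local Code Llama runtime here
    # (e.g. via llama.cpp bindings or the transformers library).
    return "pass  # model output goes here"


class CompletionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/complete":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        req = json.loads(self.rfile.read(length))
        completion = generate_completion(req.get("prefix", ""),
                                         req.get("suffix", ""))
        body = json.dumps({"completion": completion}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the sketch quiet


def serve(port: int = 8000) -> HTTPServer:
    # Port 0 asks the OS for a free ephemeral port, handy for testing.
    return HTTPServer(("127.0.0.1", port), CompletionHandler)
```

The extension side then only needs to gather the text around the cursor, POST it, and insert `completion`, so all the interesting context-building logic stays in the server where it is easy to iterate on.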