This is Llama 2 13b with some additional attention heads from original-flavor Llama 33b frankensteined on.
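For the curious, here's a rough sketch of what a head transplant like this might look like at the state-dict level. This is not the actual merge script for this model: the donor checkpoint name, the number of grafted heads per layer, and the crude truncation of the donor's projections down to the 13b hidden size are all illustrative assumptions.

```python
# Hypothetical sketch of grafting extra attention heads from a larger
# LLaMA onto Llama 2 13b. Donor repo, head count, and the truncation of
# donor weights to the smaller hidden size are assumptions, not the
# procedure actually used for this model.
import torch
from transformers import AutoModelForCausalLM

donor = AutoModelForCausalLM.from_pretrained("huggyllama/llama-30b", torch_dtype=torch.float16)
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-hf", torch_dtype=torch.float16)

HEAD_DIM = 128     # both models use 128-dim attention heads
EXTRA_HEADS = 8    # assumed number of transplanted heads per layer
hidden = base.config.hidden_size  # 5120 for 13b (donor hidden is 6656)

sd, dsd = base.state_dict(), donor.state_dict()
merged = {}
for key, w in sd.items():
    if any(p in key for p in ("q_proj", "k_proj", "v_proj")):
        # Take the first EXTRA_HEADS donor heads and crudely truncate
        # their input dimension from 6656 down to 5120.
        extra = dsd[key][: EXTRA_HEADS * HEAD_DIM, :hidden]
        merged[key] = torch.cat([w, extra], dim=0)
    elif "o_proj" in key:
        # The output projection gains input columns for the new heads;
        # truncate the donor rows to the recipient's hidden size.
        extra = dsd[key][:hidden, : EXTRA_HEADS * HEAD_DIM]
        merged[key] = torch.cat([w, extra], dim=1)
    else:
        merged[key] = w
```

The concatenated weights no longer fit the stock 13b config, so `num_attention_heads` (and `num_key_value_heads`, since Llama 2 13b uses full multi-head attention) would have to be raised to match before the merged state dict can be reloaded.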
Fine-tuned on ~10M tokens from RedPajama to settle in the transplants a little.
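A settling-in pass like that might look something like the sketch below; the dataset slice, sequence length, and hyperparameters here are guesses, not the recipe actually used.

```python
# Rough sketch of the settling-in fine-tune. Dataset slice, sequence
# length, and hyperparameters are assumptions rather than the author's
# actual recipe.
from datasets import load_dataset
from transformers import (AutoTokenizer, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")
tok.pad_token = tok.eos_token

# ~1% of the 1T sample is roughly the ~10M-token scale (an assumption).
ds = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", split="train[:1%]")
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=2048),
            batched=True, remove_columns=ds.column_names)

trainer = Trainer(
    model=model,  # the grafted model from the sketch above
    args=TrainingArguments(
        output_dir="llama2-13b-transplant",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=1e-5,
        num_train_epochs=1,
        bf16=True,
    ),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```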
Not intended for use as-is; this model is meant to serve as a base for further tuning, hopefully with a greater capacity for learning than stock 13b.
Some mad scientist action, I love it.
It apparently has larger implications than we might think; there was a big discussion about it in TheBloke's Discord.