@[email protected] to [email protected]English • 2 months agoOffline version of Chat GPTlemmy.mlimagemessage-square27fedilinkarrow-up1491arrow-down17
arrow-up1484arrow-down1imageOffline version of Chat GPTlemmy.ml@[email protected] to [email protected]English • 2 months agomessage-square27fedilink
Ignotum · 8 points · 2 months ago
A 70b model taking 1.5GB? That would be about 0.02 bytes (roughly 0.17 bits) per parameter. Are you sure you're not thinking of a heavily quantised and compressed 7b model or something? Ollama's llama3 70b is 40GB from what I can find; that's a lot of DVDs.
NoiseColor · 9 points · 2 months ago
Ah yes, probably the smaller version, you're right. Still a very good LLM, better than GPT-3.
minus-square@[email protected]linkfedilink7•2 months agoLess than half of a BDXL though! The dream still breathes
minus-square@[email protected]linkfedilink5•2 months agoFor some reason, triple layer writable blu-ray exists. 100GB each https://www.verbatim.com/prod/optical-media/blu-ray/bd-r-xl-tl/bd-r-xl-tl/