QuentinCallaghan@sopuli.xyz to Political Memes@lemmy.ca · 1 day ago · "Meanwhile at DeepSeek" (image post, 75 comments)
474D@lemmy.world · 20 hours ago: You can do it in LM Studio in like 5 clicks, I'm currently using it.
AtHeartEngineer@lemmy.world · 18 hours ago: Running an uncensored DeepSeek model that doesn't perform significantly worse than the regular DeepSeek models? I know how to download and run models; I haven't seen an uncensored DeepSeek model that performs as well as the baseline one.
474D@lemmy.world · 4 hours ago: I mean, obviously you need to run a lower-parameter model locally. That's not a fault of the model, it's just not having the same computational power.
AtHeartEngineer@lemmy.world · 2 hours ago: In both cases I was talking about local models: deepseek-r1 at 32B parameters vs an equivalent uncensored model from Hugging Face.
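For context on why parameter count is the constraint when running locally: the weights alone dominate the memory budget. A rough back-of-envelope sketch (the formula and quantization bit-widths are general assumptions, not anything specific to DeepSeek or LM Studio):

```python
def approx_weight_memory_gib(params_billions: float, bits_per_weight: int) -> float:
    """Approximate memory needed just for model weights, in GiB.

    Ignores KV cache, activations, and runtime overhead, so real
    usage is higher. params_billions * 1e9 weights, each taking
    bits_per_weight / 8 bytes.
    """
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 2**30

# A 32B-parameter model, weights only:
#   16-bit (unquantized half precision): ~60 GiB, out of reach for most GPUs
#    4-bit (common local quantization):  ~15 GiB, fits on a 24 GB card
for bits in (16, 4):
    print(f"32B @ {bits}-bit: {approx_weight_memory_gib(32, bits):.1f} GiB")
```

This is why the thread above converges on smaller distilled/quantized variants for local use: the full-size model simply doesn't fit in consumer VRAM, regardless of which fine-tune it is.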