TheCornCollector@piefed.zip to LocalLLaMA@sh.itjust.works · English · 17 days ago
Qwen3.6 27B released (huggingface.co)
thedeadwalking4242@lemmy.world · 16 days ago
I can run models locally super easily in the CLI with a tool called ollama.

venusaur@lemmy.world · 15 days ago
Cool, I've heard of it, but I know there are a lot of variables. What model and size are you running, and on what hardware?

thedeadwalking4242@lemmy.world · 15 days ago
I've only run super small models. I have a cheap gaming laptop with an Nvidia 3060 with about 8 GB of VRAM. Gemma4 will probably be a good model to try on your hardware.
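For anyone curious, the basic ollama workflow really is just a couple of CLI commands. A minimal sketch (the exact model tag is an assumption on my part; check the Ollama model library for what's actually available and pick something that fits your VRAM):

```shell
# Pull a small model that should fit in ~8 GB of VRAM
# (the "gemma3:4b" tag is an example; browse the Ollama library for others)
ollama pull gemma3:4b

# Chat with it interactively in the terminal
ollama run gemma3:4b

# List the models you've downloaded locally
ollama list
```

Once it's pulled, `ollama run` drops you straight into an interactive chat prompt; no config needed for basic use.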
Thanks!