hohoho@lemmy.world to LocalLLaMA@sh.itjust.works • Mistral AI just dropped their new model, Mistral Large 2 (English)
4 months ago

The general rule of thumb I've heard is that you need 1GB of memory for every 1B parameters. In practice, however, I've found this not to be the case. For instance, on a GH200 system I'm able to run Llama3 70b in about 50GB of memory. Llama3.1 405b, on the other hand, uses 90+GB of GPU memory and spills over into roughly another 100GB of system memory… but runs like a dog at 2 tokens per second. I expect inference costs will come down over time, but for now I'd recommend Lambda Labs if you don't need a GPU workstation.
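A back-of-the-envelope sketch of where that rule of thumb comes from: memory scales with parameter count times bytes per parameter, plus some overhead for KV cache and activations. The 20% overhead factor and the byte widths below are assumptions for illustration, not measured values:

```python
def estimate_memory_gb(params_billion: float, bytes_per_param: float = 2) -> float:
    """Rough memory estimate for inference.

    params_billion: model size in billions of parameters
    bytes_per_param: 2 for fp16/bf16, 1 for 8-bit quant, 0.5 for 4-bit quant
    The 1.2 multiplier is an assumed ~20% overhead for KV cache/activations.
    """
    return params_billion * bytes_per_param * 1.2

print(estimate_memory_gb(70))        # fp16: 168.0 GB
print(estimate_memory_gb(70, 1))     # 8-bit: 84.0 GB
print(estimate_memory_gb(405, 0.5))  # 4-bit: 243.0 GB
```

The "1GB per 1B parameters" rule roughly matches 8-bit weights with no overhead; fitting Llama3 70b in ~50GB implies a quantized build, and the fp16 estimate explains why 405b spills into system memory.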
Did you mount the partition to /mnt/sysimage before attempting to chroot into it? You can use the 'mount' command to see what's mounted where.
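A minimal sketch of the sequence, assuming the root partition is /dev/sda2 (the device name is a placeholder; substitute your own):

```shell
# Mount the root partition first (device name is an assumption)
mount /dev/sda2 /mnt/sysimage

# Bind the virtual filesystems most chroot work needs
for fs in proc sys dev; do
    mount --bind "/$fs" "/mnt/sysimage/$fs"
done

# Verify everything landed where expected
mount | grep sysimage

# Then enter the chroot
chroot /mnt/sysimage /bin/bash
```

Running `mount` with no arguments before the chroot step is the quickest way to confirm nothing is missing.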