Those are OpenCL platform and device identifiers; you can use clinfo to find out which numbers correspond to what on your system.
Also note that if you’re building kobold.cpp yourself, you need to build with LLAMA_CLBLAST=1 for OpenCL support to exist in the first place, or LLAMA_CUBLAS=1 for CUDA.
What’s the problem you’re having with kobold? It doesn’t really require any setup. Download the exe, run it, select a model in the window, and click Launch. The webui should open in your default browser.
Small update: take what I said about the breakage at 6000 tokens with a pinch of salt. Testing is complicated by something, somewhere, breaking in a way that persists across generations and even kobold.cpp restarts… It must be some CUDA driver issue, because it takes a PC reboot to resolve, after which the exact same generation goes from gibberish to correct.
I can recommend kobold; it’s a lot simpler to set up than ooba and usually runs faster too.
Not sure what happened to this comment… Anyway, ooba (text-generation-webui) works with AMD on Linux, but ROCm is super jank at the best of times and the 6700 XT isn’t officially supported, so it might be hopeless.
llama.cpp has some GPU acceleration support on AMD in CLBlast mode; if you aren’t already using it, it might be worth trying.
That’s what llama.cpp and kobold.cpp do: the KV cache is the last thing that gets offloaded, so you can offload the weights and keep the cache in RAM. Although neither supports SuperHOT right now.
MQA models like Falcon-40B or MPT are going to be better for large context lengths. They have a tiny KV cache, so even blown up 16x it’s not a problem.
Unfortunately there’s just no way. The KV cache grows linearly with context length, so at 8k it’s 4 times the size it is at 2k; for 33b that’s roughly 13GB for the cache alone even at fp16, without weights or other buffers.
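If you want to sanity-check that, here’s a rough back-of-envelope in Python. The 33b dims (60 layers, 52 heads of 128 dims) are from the LLaMA config as I remember it, and the fp16 cache plus the single-KV-head “MQA” variant are just assumptions for illustration, so treat the exact numbers loosely:

```python
# Back-of-envelope KV cache sizing: 2 tensors (K and V) per layer,
# one n_kv_heads * head_dim vector per token, fp16 = 2 bytes per element.
def kv_cache_gb(n_layer, n_kv_heads, head_dim, n_ctx, bytes_per_elem=2):
    return 2 * n_layer * n_ctx * n_kv_heads * head_dim * bytes_per_elem / 1e9

# LLaMA-33B-ish: 60 layers, 52 heads x 128 dims (6656 hidden), full multi-head KV.
for n_ctx in (2048, 4096, 8192):
    print(f"33b MHA @ {n_ctx:5d} ctx: {kv_cache_gb(60, 52, 128, n_ctx):5.1f} GB")

# The same shape with a single shared KV head (MQA-style) is ~52x smaller,
# which is why MQA models stay cheap even at long contexts.
for n_ctx in (2048, 8192, 16384):
    print(f"33b MQA @ {n_ctx:5d} ctx: {kv_cache_gb(60, 1, 128, n_ctx):5.2f} GB")
```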
You’re supposed to manually set the scale to 1.0 and the base to 10000 when using Llama 2 with 4096 context; the automatic scaling assumes the model was trained for 2048. Though as I say in the OP, that still doesn’t work, at least with this particular fine-tune.
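For what it’s worth, here’s roughly what those two knobs mean. This is just an illustrative sketch of linear RoPE scaling in Python, not llama.cpp’s actual code; the point is that a model natively trained at 4096 wants the stock values (scale 1.0, base 10000), whereas scale 0.5 is what you’d use to stretch a 2048-trained model to 4096:

```python
import numpy as np

# Illustrative linear RoPE scaling (not llama.cpp's actual implementation).
def rope_angles(pos, head_dim=128, freq_base=10000.0, freq_scale=1.0):
    # One inverse frequency per pair of dims: base^(-2i/d)
    inv_freq = freq_base ** (-np.arange(0, head_dim, 2) / head_dim)
    # Linear scaling just squeezes the position index; scale=1.0 is a no-op.
    return (pos * freq_scale) * inv_freq

# Llama 2 was trained at 4096, so position 4095 should get the plain angles:
native = rope_angles(4095, freq_scale=1.0, freq_base=10000.0)

# A 2048-trained model stretched to 4096 would use scale 0.5 instead,
# which maps position 4095 back into the 0..2047 range it was trained on:
stretched = rope_angles(4095, freq_scale=0.5)

print(native[:3])
print(stretched[:3])
```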