vulkan: increase LOAD_VEC_A to 8 (IQ1/IQ2) or 4 (IQ3) (#14485) · ggml-org/llama.cpp@6491d6e