gguf-quantization
Community
GGUF format and llama.cpp quantization for efficient CPU/GPU inference. Use when deploying models on consumer hardware or Apple Silicon, or when you need flexible 2-8 bit quantization without GPU requirements.
Install
skillpm install gguf-quantization
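The workflow this skill covers can be sketched with llama.cpp's own tools. This is a minimal illustration, not the skill's exact internals; the model paths and output names are placeholders:

```shell
# Convert a Hugging Face model directory to a full-precision GGUF file
# (convert_hf_to_gguf.py ships with the llama.cpp repository).
python convert_hf_to_gguf.py ./my-model --outfile my-model-f16.gguf

# Quantize the GGUF file; Q4_K_M is a common 4-bit quality/size tradeoff.
# Other presets range from ~2-bit (Q2_K) up to 8-bit (Q8_0).
./llama-quantize my-model-f16.gguf my-model-Q4_K_M.gguf Q4_K_M
```

The quantized file runs directly with llama.cpp on CPU, or with GPU offload where available, which is what makes it practical for consumer hardware and Apple Silicon.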