
gptq

Community

Post-training 4-bit quantization for LLMs with minimal accuracy loss. Use it to deploy large models (70B, 405B) on consumer GPUs, to cut memory roughly 4× with under 2% perplexity degradation, or to run inference 3-4× faster than FP16. Integrates with transformers and PEFT for QLoRA-style fine-tuning.
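As a sketch of the typical workflow (not part of this skill's own files), the snippet below quantizes a model through the transformers GPTQConfig API; it assumes the optimum and auto-gptq packages are installed, and the model ID is illustrative.

from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "facebook/opt-125m"  # illustrative; substitute the model to quantize
tokenizer = AutoTokenizer.from_pretrained(model_id)

# 4-bit GPTQ quantization, calibrated on the built-in "c4" dataset
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)

# Quantization runs at load time; the result saves and reloads like any checkpoint
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=gptq_config,
)
model.save_pretrained("opt-125m-gptq")
tokenizer.save_pretrained("opt-125m-gptq")

A prequantized checkpoint loads the same way, with no quantization_config argument needed.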

Install

skillpm install gptq

Format score

95/100

Spec

v1.0

Installs

0

Published

April 1, 2026