serving-llms-vllm

Community

Serves LLMs with high throughput using vLLM's PagedAttention and continuous batching. Use when deploying production LLM APIs, optimizing inference latency/throughput, or serving models with limited GPU memory. Supports OpenAI-compatible endpoints, quantization (GPTQ/AWQ/FP8), and tensor parallelism.
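
As a hedged sketch of the deployment pattern the description refers to, the example below launches vLLM's OpenAI-compatible server and queries it with the standard openai Python client. The model name, port, and flag values are illustrative assumptions, not requirements of this skill.

```python
# A minimal sketch of the workflow this skill targets; the model name, port,
# and flag values are illustrative assumptions, not part of the skill itself.
#
# 1) Launch vLLM's OpenAI-compatible server, e.g.:
#      vllm serve meta-llama/Llama-3.1-8B-Instruct --tensor-parallel-size 2
#    (for a quantized checkpoint, add e.g. --quantization awq)
#
# 2) Query it with the standard OpenAI client:
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's default serving address
    api_key="EMPTY",  # vLLM accepts any key unless the server sets --api-key
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Explain continuous batching in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```

Because the server exposes standard /v1 routes, any OpenAI SDK or plain HTTP client pointed at it works the same way, which is what makes the endpoint a drop-in replacement for existing OpenAI integrations.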

Install: skillpm install serving-llms-vllm

Format score: 100/100

Spec: v1.0

Installs: 0

Published: April 1, 2026