tensorrt-llm

Community

Optimizes LLM inference with NVIDIA TensorRT-LLM for high throughput and low latency. Use it for production deployment on NVIDIA data-center GPUs (e.g. A100/H100), when you need substantially faster inference than eager-mode PyTorch, or for serving models with quantization (FP8/INT4), in-flight batching, and multi-GPU scaling.

Install

skillpm install tensorrt-llm

Format score

100/100

Spec

v1.0

Installs

0

Published

April 1, 2026