speculative-decoding

Community

Accelerate LLM inference using speculative decoding, Medusa multi-head decoding, and lookahead decoding. Use when optimizing inference speed (1.5-3.6× speedup), reducing latency for real-time applications, or deploying models with limited compute. Covers draft models, tree-based attention, Jacobi iteration, parallel token generation, and production deployment strategies.
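The core loop behind these techniques: a cheap draft model proposes several tokens ahead, and the expensive target model verifies them in a single pass, accepting the longest matching prefix. Below is a minimal sketch of greedy speculative decoding using toy stand-in "models" (plain Python functions, not real LLMs); draft_next, target_next, and speculative_step are illustrative names and not part of this skill's API.

# A minimal sketch of greedy speculative decoding. The two "models" are
# deterministic toy functions; real deployments use a small draft LLM and
# a large target LLM, with all k proposals verified in one batched pass.

def draft_next(tokens):
    # Hypothetical cheap draft model: deterministic toy rule.
    return (tokens[-1] * 31 + 7) % 50

def target_next(tokens):
    # Hypothetical expensive target model: mostly agrees with the draft,
    # but disagrees whenever the last token is divisible by 5.
    t = (tokens[-1] * 31 + 7) % 50
    return t if tokens[-1] % 5 else (t + 1) % 50

def speculative_step(tokens, k=4):
    """Draft k tokens, verify against the target, and accept the longest
    matching prefix plus one target-corrected (or bonus) token."""
    draft = list(tokens)
    proposals = []
    for _ in range(k):
        nxt = draft_next(draft)
        proposals.append(nxt)
        draft.append(nxt)

    accepted = []
    ctx = list(tokens)
    for p in proposals:
        t = target_next(ctx)       # in practice: one parallel verify pass
        if t == p:
            accepted.append(p)     # draft token accepted
            ctx.append(p)
        else:
            accepted.append(t)     # replace with the target's token, stop
            break
    else:
        accepted.append(target_next(ctx))  # bonus token: all k accepted

    return tokens + accepted

tokens = [1]
for _ in range(5):
    tokens = speculative_step(tokens)
print(tokens)  # several tokens emitted per target verification step

Each step emits between 1 and k+1 tokens per target verification, so the speedup scales with how often the draft agrees with the target; that acceptance rate is where the quoted 1.5-3.6× range comes from.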

Install: skillpm install speculative-decoding

Format score: 90/100

Spec: v1.0

Installs: 0

Published: April 1, 2026