optimizing-attention-flash

Community

Optimizes transformer attention with Flash Attention for a 2-4x speedup and 10-20x memory reduction. Use when training or running transformers with long sequences (>512 tokens), when attention is causing GPU memory issues, or when you need faster inference. Supports PyTorch native SDPA, the flash-attn library, H100 FP8, and sliding window attention. A minimal sketch of the kind of optimization covered is shown below.
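The sketch below is not taken from the skill itself; it only illustrates the PyTorch native SDPA path the description mentions, using assumed tensor shapes and a CUDA fp16 setup. It restricts dispatch to the Flash Attention backend so attention runs without materializing the full sequence-by-sequence score matrix.

# Minimal sketch (assumptions: CUDA GPU, fp16 inputs, PyTorch 2.3+ for torch.nn.attention).
import torch
import torch.nn.functional as F
from torch.nn.attention import sdpa_kernel, SDPBackend

# Example shapes: (batch, heads, seq_len, head_dim); values here are illustrative.
q = torch.randn(1, 8, 2048, 64, device="cuda", dtype=torch.float16)
k = torch.randn(1, 8, 2048, 64, device="cuda", dtype=torch.float16)
v = torch.randn(1, 8, 2048, 64, device="cuda", dtype=torch.float16)

# Restrict SDPA to the Flash Attention backend; memory scales with seq_len
# rather than seq_len^2 because the score matrix is never materialized.
with sdpa_kernel([SDPBackend.FLASH_ATTENTION]):
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)

The same call falls back to other SDPA backends if the Flash kernel's constraints (dtype, head_dim, device) are not met, so dropping the sdpa_kernel context is a safe default when portability matters more than guaranteed kernel choice.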

Install

skillpm install optimizing-attention-flash

Format score

90/100

Spec

v1.0

Installs

0

Published

April 1, 2026