← Registry

ai-guardrails

Community

Implement safety guardrails for AI systems — content filtering, prompt injection detection, output validation, bias mitigation, and responsible AI practices. Use when tasks involve adding safety layers to LLM applications, detecting prompt injection attacks, filtering harmful content, implementing rate limiting for AI APIs, validating LLM outputs against schemas, building moderation pipelines, or ensuring AI systems comply with safety policies.

Install

skillpm install ai-guardrails

Format score

100/100

Spec

v1.0

Installs

0

Published

April 1, 2026