

LLM Output Sanitization Engineer

Implements real-time output filtering that prevents data leakage, harmful content, and policy violations before responses reach users

Codex · Claude Code · Kimi Code · orchestrator-mcp

Best use case

Use LLM Output Sanitization Engineer when you need real-time output filtering that prevents data leakage, harmful content, and policy violations before responses reach users, especially when the work centers on output handling and filtering.
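To make the use case concrete, here is a minimal, hypothetical sketch of the kind of output filter this pack targets: regex-based PII redaction applied to a model response before it reaches the user. The pattern names and redaction format are illustrative assumptions, not the pack's actual implementation.

```python
import re

# Illustrative PII patterns; a production filter would use a vetted
# detection library rather than hand-rolled regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
}

def sanitize(response: str) -> str:
    """Redact PII-like spans from a model response before delivery."""
    for label, pattern in PII_PATTERNS.items():
        response = pattern.sub(f"[REDACTED:{label}]", response)
    return response

print(sanitize("Contact me at alice@example.com"))
# → Contact me at [REDACTED:email]
```

In a real deployment this function would sit in middleware between the model and the client, so every response passes through it exactly once.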

Trigger signals

output · filter · sanitize · moderation · content

Validation hooks

pii-detection-accuracy · content-policy-compliance
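A validation hook like pii-detection-accuracy could be sketched as an accuracy gate over a labeled sample set. The detector, samples, and threshold below are assumptions for illustration; the pack's actual hook contract may differ.

```python
def detects_pii(text: str) -> bool:
    # Toy stand-in detector; real hooks would call the pack's detector.
    return "@" in text or any(ch.isdigit() for ch in text)

# Tiny labeled set: (text, contains_pii)
LABELED = [
    ("reach me at bob@example.com", True),
    ("my card is 4111 1111 1111 1111", True),
    ("the weather is nice today", False),
    ("ping me on the usual channel", False),
]

def accuracy(detector, samples) -> float:
    """Fraction of samples the detector classifies correctly."""
    correct = sum(detector(text) == label for text, label in samples)
    return correct / len(samples)

# Gate the hook on an accuracy floor (threshold is an assumption).
assert accuracy(detects_pii, LABELED) >= 0.75
```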

Install surface

Copy the exact command path you need.

Inspect

pip install "orchestrator-mcp[dashboard]"
orchestrator-mcp skills show llm-output-sanitizer

Use

orchestrator-mcp skills export llm-output-sanitizer --to ./skillforge-packs
# copy the exported pack into your preferred agent environment

Export

cp -R skills/llm-output-sanitizer ./your-agent-skills/llm-output-sanitizer
# or open skills/llm-output-sanitizer/SKILL.md in a markdown-first client

File patterns

*.py · *.ts · *.js · middleware/*.py

Model preferences

claude-sonnet-4 · gpt-4o · claude-haiku-3

Related skills

Adjacent packs to compose next.

Security · Advanced pack

Public Repo Sanitizer


Audit a repo for secrets, personal paths, client-specific references, and OSS-readiness gaps before publishing.

Codex · Claude Code
Security · Global library

API Security Testing Specialist


Tests API security with OWASP API Top 10 coverage, authentication validation, and automated security test cases that find vulnerabilities before attackers do.

Codex · Claude Code