
AI/ML · Global library

LLM Caching Strategist

Design multi-layer caching strategies for LLM inference with semantic cache, prompt cache, and response cache optimization

Codex · Claude Code · Kimi Code · orchestrator-mcp

Best use case

Use LLM Caching Strategist when you need a multi-layer caching strategy for LLM inference spanning semantic, prompt, and response caches, especially when semantic and prompt caching drive the work.
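As a minimal sketch of the layered lookup this pack's description implies: check an exact-match response cache first, then fall back to a semantic cache keyed by embedding similarity. The `LayeredCache` class, the bag-of-words stand-in for a real embedding model, and the 0.9 similarity threshold are all illustrative assumptions, not the pack's actual implementation.

```python
import hashlib
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words embedding; a real system would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class LayeredCache:
    def __init__(self, threshold: float = 0.9):
        self.exact = {}      # response cache: prompt hash -> response
        self.semantic = []   # semantic cache: (embedding, response) pairs
        self.threshold = threshold

    def get(self, prompt: str):
        # Layer 1: exact-match response cache (cheap hash lookup).
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key in self.exact:
            return self.exact[key]
        # Layer 2: semantic cache (similarity search over stored embeddings).
        query = embed(prompt)
        for vec, response in self.semantic:
            if cosine(query, vec) >= self.threshold:
                return response
        return None  # miss at every layer: caller falls through to the LLM

    def put(self, prompt: str, response: str):
        key = hashlib.sha256(prompt.encode()).hexdigest()
        self.exact[key] = response
        self.semantic.append((embed(prompt), response))
```

The ordering matters: the exact layer is O(1) and always safe, while the semantic layer trades a linear scan (a vector index in practice) for tolerance to paraphrased prompts.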

Trigger signals

semantic cache · prompt cache · KV cache · response cache · embedding cache · cache invalidation

Validation hooks

hit-rate-check · invalidation-test
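The hooks are only named here; as a sketch of what a hit-rate check might assert, a counter that gates on a minimum hit rate is enough. The `HitRateMonitor` class and its methods are hypothetical, not part of the pack's published interface.

```python
class HitRateMonitor:
    """Track cache lookups so a hit-rate check can gate a rollout."""

    def __init__(self):
        self.hits = 0
        self.lookups = 0

    def record(self, hit: bool):
        self.lookups += 1
        if hit:
            self.hits += 1

    def hit_rate(self) -> float:
        return self.hits / self.lookups if self.lookups else 0.0

    def check(self, minimum: float) -> bool:
        # Validation hook: pass only if the observed hit rate meets the floor.
        return self.hit_rate() >= minimum
```

An invalidation test would exercise the complementary path: write an entry, invalidate it, and assert the next lookup is a miss rather than a stale hit.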

Install surface

Copy the exact command path you need.

Inspect

pip install "orchestrator-mcp[dashboard]"
orchestrator-mcp skills show llm-caching-strategist

Use

orchestrator-mcp skills export llm-caching-strategist --to ./skillforge-packs
# copy the exported pack into your preferred agent environment

Export

cp -R skills/llm-caching-strategist ./your-agent-skills/llm-caching-strategist
# or open skills/llm-caching-strategist/SKILL.md in a markdown-first client

File patterns

*.py · cache/*.py · redis*.py

Model preferences

claude-sonnet-4 · gpt-4o · claude-haiku-3

Related skills

Adjacent packs to compose next.

AI/ML · Global library

Agent Lifecycle Manager


Manage complete agent lifecycles from initialization through graceful shutdown with health monitoring, scaling, and resource optimization

Codex · Claude Code
AI/ML · Global library

Agent Memory Designer


Design short-term, long-term, and episodic memory layers for agents without turning retrieval into an unbounded context leak.

Codex · Claude Code