AI/ML · Global library

LLM Model Server Architect

Design and implement production-grade LLM serving infrastructure with optimal throughput, latency, and cost efficiency

Codex · Claude Code · Kimi Code · orchestrator-mcp

Best use case

Use LLM Model Server Architect when you need to design and implement production-grade LLM serving infrastructure with optimal throughput, latency, and cost efficiency, especially when the work centers on model serving and standing up an LLM server.

Trigger signals

model serving · LLM server · inference API · vLLM · TGI · model deployment

Validation hooks

latency-check · throughput-validation
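As an illustration of what such hooks validate, here is a minimal sketch of summarizing per-request latencies into the percentile and throughput figures a latency check typically gates on. The function name and output keys are hypothetical, not part of the orchestrator-mcp API.

```python
import statistics

def summarize_latencies(latencies_ms):
    """Summarize request latencies the way a latency-check hook might:
    nearest-rank p50/p95/p99, mean, and a derived serial throughput."""
    ordered = sorted(latencies_ms)

    def pct(p):
        # Nearest-rank percentile over the sorted sample.
        idx = min(len(ordered) - 1, max(0, round(p / 100 * len(ordered)) - 1))
        return ordered[idx]

    # Serial wall time if requests ran back to back, in seconds.
    total_s = sum(ordered) / 1000.0
    return {
        "p50_ms": pct(50),
        "p95_ms": pct(95),
        "p99_ms": pct(99),
        "mean_ms": statistics.mean(ordered),
        "throughput_rps": len(ordered) / total_s if total_s else 0.0,
    }
```

A real hook would compare these numbers against configured thresholds (e.g. fail the run when p99 exceeds a budget) rather than just reporting them.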

Install surface

Copy the exact command path you need.

Inspect

pip install "orchestrator-mcp[dashboard]"
orchestrator-mcp skills show llm-model-server-architect

Use

orchestrator-mcp skills export llm-model-server-architect --to ./skillforge-packs
# copy the exported pack into your preferred agent environment

Export

cp -R skills/llm-model-server-architect ./your-agent-skills/llm-model-server-architect
# or open skills/llm-model-server-architect/SKILL.md in a markdown-first client

File patterns

*.py · *.yaml · Dockerfile · serving/*.py
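To give a flavor of the kind of code the serving/*.py pattern matches, here is a toy request micro-batcher, the building block behind batched inference in servers like vLLM and TGI. All names are hypothetical; this is a sketch of the idea, not code from the pack.

```python
from collections import deque

class MicroBatcher:
    """Toy micro-batcher: queue incoming prompts and drain them in
    fixed-size batches so the model sees batched inputs per forward pass."""

    def __init__(self, max_batch_size=8):
        self.max_batch_size = max_batch_size
        self._queue = deque()

    def submit(self, request_id, prompt):
        # Enqueue a pending request; a real server would also track futures.
        self._queue.append((request_id, prompt))

    def next_batch(self):
        # Drain up to max_batch_size pending requests for one forward pass.
        batch = []
        while self._queue and len(batch) < self.max_batch_size:
            batch.append(self._queue.popleft())
        return batch
```

Production servers refine this with continuous batching, admitting new requests into an in-flight batch between decode steps, but the queue-and-drain core is the same.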

Model preferences

claude-opus-4 · gpt-4o · claude-haiku-3

Related skills

Adjacent packs to compose next.

AI/ML · Global library

Agent Lifecycle Manager


Manage complete agent lifecycles from initialization through graceful shutdown with health monitoring, scaling, and resource optimization

Codex · Claude Code
AI/ML · Global library

Agent Memory Designer


Design short-term, long-term, and episodic memory layers for agents without turning retrieval into an unbounded context leak.

Codex · Claude Code