
Edge AI Model Deployment & Serving

Deploy and serve ML models at the edge with auto-scaling, A/B testing, and monitoring

Codex · Claude Code · Kimi Code · orchestrator-mcp

Best use case

Use Edge AI Model Deployment & Serving when you need to deploy and serve ML models at the edge with auto-scaling, A/B testing, and monitoring.
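The pack does not prescribe a specific A/B mechanism; as one illustrative sketch (the `choose_variant` function and `b_fraction` parameter are hypothetical names, not part of the pack), a deterministic hash-based traffic splitter for edge inference might look like:

```python
import hashlib

def choose_variant(request_id: str, b_fraction: float = 0.1) -> str:
    """Route a request to model variant 'a' or 'b' for an A/B test.

    Hash-based bucketing keeps a given request_id sticky to one
    variant, so repeated requests are scored against the same model.
    """
    digest = hashlib.sha256(request_id.encode()).digest()
    # Map the first 8 bytes of the hash to a uniform value in [0, 1).
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return "b" if bucket < b_fraction else "a"
```

Because routing is a pure function of the request ID, no shared state is needed across edge replicas, which keeps splits consistent even as the deployment auto-scales.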

Trigger signals

deployment · serving · inference · edge · auto-scaling · A/B testing

Validation hooks

health-checks · scaling-test
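The health-checks hook implies a readiness probe on each serving replica. A minimal sketch of what such a probe could report (the `health_check` function, its parameters, and the staleness rule are assumptions for illustration, not the pack's API):

```python
import time

def health_check(model_loaded: bool, last_inference_ts: float,
                 max_staleness_s: float = 60.0) -> dict:
    """Minimal readiness probe for an edge model server.

    Reports whether the model is resident in memory and whether
    inference has run recently; an orchestrator can restart or
    rescale replicas based on the 'status' field.
    """
    stale = (time.time() - last_inference_ts) > max_staleness_s
    healthy = model_loaded and not stale
    return {
        "status": "ok" if healthy else "unhealthy",
        "model_loaded": model_loaded,
        "stale": stale,
    }
```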

Install surface

Copy the exact commands you need.

Inspect

pip install "orchestrator-mcp[dashboard]"
orchestrator-mcp skills show edge-ai-model-deployment-serving

Use

orchestrator-mcp skills export edge-ai-model-deployment-serving --to ./skillforge-packs
# copy the exported pack into your preferred agent environment

Export

cp -R skills/edge-ai-model-deployment-serving ./your-agent-skills/edge-ai-model-deployment-serving
# or open skills/edge-ai-model-deployment-serving/SKILL.md in a markdown-first client

File patterns

*serving*.{py,yaml} · *deployment*.{py,yaml} · *inference*.{py,cpp}

Model preferences

claude-sonnet-4 · gpt-4o · claude-haiku

Related skills

Adjacent packs to compose next.