IoT · Global library
Deploy and serve ML models at the edge with auto-scaling, A/B testing, and monitoring
Best use case
Use Edge AI Model Deployment & Serving when you need to run ML models on edge devices with auto-scaling, A/B testing, and monitoring built in — that is, when the core of the work is deploying and serving models rather than training them.
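To make the A/B-testing part of this use case concrete, here is a minimal, purely illustrative Python sketch of weighted traffic splitting between two model versions with a simple per-version request counter for monitoring. All names here (MODEL_WEIGHTS, route_request, the version labels) are hypothetical and do not come from the skill pack or the orchestrator-mcp API.

```python
import random
from collections import Counter

# Hypothetical 90/10 canary split between two model versions.
# These names are illustrative only, not part of the skill pack.
MODEL_WEIGHTS = {"model_v1": 0.9, "model_v2": 0.1}

# Minimal "monitoring": count how many requests each version served.
request_counts = Counter()

def route_request(rng=random.random):
    """Pick a model version by cumulative weight and record the choice."""
    r = rng()
    cumulative = 0.0
    for version, weight in MODEL_WEIGHTS.items():
        cumulative += weight
        if r < cumulative:
            request_counts[version] += 1
            return version
    # Fallback for floating-point rounding at the upper boundary.
    last = list(MODEL_WEIGHTS)[-1]
    request_counts[last] += 1
    return last
```

A real edge deployment would persist these counters to a metrics backend and adjust the weights centrally; the sketch only shows the routing decision itself.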
Trigger signals
Validation hooks
Install surface
Inspect
pip install "orchestrator-mcp[dashboard]"
orchestrator-mcp skills show edge-ai-model-deployment-serving
Use
orchestrator-mcp skills export edge-ai-model-deployment-serving --to ./skillforge-packs
# copy the exported pack into your preferred agent environment
Export
cp -R skills/edge-ai-model-deployment-serving ./your-agent-skills/edge-ai-model-deployment-serving
# or open skills/edge-ai-model-deployment-serving/SKILL.md in a markdown-first client
File patterns
Model preferences
Related skills
Optimize ML models for edge deployment with quantization, pruning, and hardware acceleration
Train ML models collaboratively across edge devices without centralizing sensitive data
Transform raw IoT data into actionable insights with real-time dashboards and predictive analytics