Optimize ML models for edge deployment with quantization, pruning, and hardware acceleration
Best use case
Use Edge Model Optimization & Quantization when you need to prepare ML models for edge deployment; it is the right pick when the work centers on quantization, pruning, or hardware-specific acceleration.
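The two core techniques named above can be illustrated with a small, dependency-free sketch. The function names and toy weight values below are hypothetical, not part of this skill's actual implementation; a real edge pipeline would use a framework's quantization and pruning tooling instead.

```python
# Toy sketch of symmetric int8 post-training quantization and global
# magnitude pruning. All names and values are illustrative only.

def quantize_int8(weights):
    """Map float weights to int8 codes plus a single symmetric scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    codes = [max(-128, min(127, round(w / scale))) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from int8 codes."""
    return [c * scale for c in codes]

def prune_by_magnitude(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of weights."""
    k = int(len(weights) * sparsity)
    cutoff = sorted(abs(w) for w in weights)[k - 1] if k else -1.0
    return [0.0 if abs(w) <= cutoff else w for w in weights]

weights = [0.5, -1.27, 0.01, 1.0]
codes, scale = quantize_int8(weights)
print(codes)                             # [50, -127, 1, 100]
print(prune_by_magnitude(weights, 0.5))  # [0.0, -1.27, 0.0, 1.0]
```

Quantization trades a small reconstruction error for a 4x storage reduction (float32 to int8), while pruning zeroes the least significant weights so sparse kernels can skip them; the two are commonly combined for edge targets.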
Install surface
Inspect
pip install "orchestrator-mcp[dashboard]"
orchestrator-mcp skills show edge-model-optimization-quantization
Use
orchestrator-mcp skills export edge-model-optimization-quantization --to ./skillforge-packs
# copy the exported pack into your preferred agent environment
Export
cp -R skills/edge-model-optimization-quantization ./your-agent-skills/edge-model-optimization-quantization
# or open skills/edge-model-optimization-quantization/SKILL.md in a markdown-first client
Related skills
Deploy and serve ML models at the edge with auto-scaling, A/B testing, and monitoring
Train ML models collaboratively across edge devices without centralizing sensitive data
Transform raw IoT data into actionable insights with real-time dashboards and predictive analytics