
Edge Model Optimization & Quantization

Optimize ML models for edge deployment with quantization, pruning, and hardware acceleration

Codex · Claude Code · Kimi Code · orchestrator-mcp

Best use case

Use Edge Model Optimization & Quantization when you need to prepare ML models for edge deployment, reducing model size and inference latency through quantization, pruning, and hardware-accelerated runtimes. It is most useful when the work centers on quantization and pruning workflows.
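As an illustration of the quantization work this pack targets, here is a minimal sketch of symmetric per-tensor int8 post-training quantization in plain NumPy. The function names and the scale formula are illustrative assumptions, not the pack's actual API; production code would typically use a framework tool such as the TFLite converter or ONNX Runtime quantization instead.

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor quantization: w ~= scale * q, with q in [-127, 127].
    # Scale maps the largest-magnitude weight onto 127.
    scale = max(float(np.abs(w).max()), 1e-12) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    # Recover an approximation of the original float weights.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)
# Worst-case round-trip error for symmetric rounding is half a quantization step.
err = float(np.abs(w - w_hat).max())
```

The int8 tensor plus a single float scale is what an edge runtime stores and multiplies with; the round-trip error stays within half a quantization step (`0.5 * scale`).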

Trigger signals

quantization · pruning · tflite · onnx · edge · optimization
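The pruning signal above refers to sparsifying a model's weights. A common baseline is magnitude pruning, sketched here in NumPy; the function name and threshold logic are illustrative assumptions, not part of this pack.

```python
import numpy as np

def magnitude_prune(w, sparsity=0.5):
    # Zero out the smallest-magnitude fraction of weights.
    # Ties at the threshold may prune slightly more than requested.
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    mask = np.abs(w) > threshold
    return w * mask

w = np.arange(1, 11, dtype=np.float32)  # magnitudes 1..10
pruned = magnitude_prune(w, sparsity=0.5)  # zeros the five smallest weights
```

Magnitude pruning alone only helps on hardware or runtimes that exploit sparsity; structured pruning (removing whole channels or heads) is usually needed for speedups on generic edge CPUs.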

Validation hooks

accuracy-check · latency-target
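The two hooks above gate an optimized model on accuracy loss and inference latency. A minimal sketch of what such checks might look like, assuming the hook names map to an accuracy-drop budget and a wall-clock target (the function signatures here are hypothetical, not the pack's real interface):

```python
import time

def accuracy_check(baseline_acc, optimized_acc, max_drop=0.01):
    # Pass if the optimized model loses at most max_drop absolute accuracy.
    return (baseline_acc - optimized_acc) <= max_drop

def latency_target(infer_fn, target_ms=50.0, runs=20):
    # Median wall-clock latency over several runs must meet the target.
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        infer_fn()
        times.append((time.perf_counter() - t0) * 1e3)
    times.sort()
    return times[len(times) // 2] <= target_ms

# Trivial stand-in for model inference, just to exercise the hooks.
ok_acc = accuracy_check(0.92, 0.915)          # 0.5% drop within a 1% budget
ok_lat = latency_target(lambda: sum(range(1000)), target_ms=50.0)
```

Using the median rather than the mean keeps the latency gate robust against one-off scheduler hiccups on edge devices.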

Install surface

Copy the exact command path you need.

Inspect

pip install "orchestrator-mcp[dashboard]"
orchestrator-mcp skills show edge-model-optimization-quantization

Use

orchestrator-mcp skills export edge-model-optimization-quantization --to ./skillforge-packs
# copy the exported pack into your preferred agent environment

Export

cp -R skills/edge-model-optimization-quantization ./your-agent-skills/edge-model-optimization-quantization
# or open skills/edge-model-optimization-quantization/SKILL.md in a markdown-first client

File patterns

*quantize*.{py,js} · *optimize*.{py} · *tflite*.{py} · *onnx*.{py}

Model preferences

claude-sonnet-4 · gpt-4o · claude-haiku

Related skills

Adjacent packs to compose next.