Welcome to micromind’s documentation!
This is the official documentation of micromind, a toolkit that aims to bridge two communities: artificial intelligence and embedded systems. micromind is based on PyTorch and can export the supported models to ONNX, Intel OpenVINO, and TFLite. Its key features:
- Smooth flow from research to deployment;
- Support for multimedia analytics recipes (image classification, sound event detection, etc.);
- Detailed API documentation;
- Tutorials for embedded deployment.
First, make sure Python 3.8 or later is installed. Then open a terminal and run:
pip install micromind
for the basic install. To install micromind with the full exportability features, run:

pip install micromind[conversion]

(In shells such as zsh, which expand square brackets, quote the argument: pip install "micromind[conversion]".)
To launch a simple training run on an image classification model, you only need to define a class that extends MicroMind: declare the modules you want to use (such as a PhiNet), implement the model's forward method, and specify how to compute the loss. micromind takes care of the rest for you.
class ImageClassification(MicroMind):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.modules["classifier"] = PhiNet(
            (3, 32, 32), include_top=True, num_classes=10
        )

    def forward(self, batch):
        # batch is an (images, labels) pair; the classifier sees the images.
        return self.modules["classifier"](batch[0])

    def compute_loss(self, pred, batch):
        # Compare predictions against the labels in the batch.
        return nn.CrossEntropyLoss()(pred, batch[1])
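For intuition, here is a minimal sketch of the kind of training loop micromind runs for you behind the scenes, written in plain PyTorch. A tiny linear model stands in for PhiNet, and the synthetic data and all names here are illustrative, not part of the micromind API:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for PhiNet: a tiny classifier over flattened 3x32x32 images.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic batch: (images, labels), mirroring what a dataloader would yield.
images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))

for step in range(20):
    optimizer.zero_grad()
    pred = model(images)          # the forward method of the MicroMind class
    loss = loss_fn(pred, labels)  # the compute_loss method
    loss.backward()
    optimizer.step()

final_loss = loss.item()
```

With micromind, you write only the forward and compute_loss pieces; the optimizer setup and the loop itself are handled for you.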
Afterwards, you can export the model to the format you prefer among ONNX, TFLite, and OpenVINO by running:
m = ImageClassification()
m.export("output_onnx", "onnx", (3, 32, 32))
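Note that the shape tuple passed to export matches the input shape given to PhiNet, i.e. (channels, height, width) without a batch dimension. A quick way to sanity-check that your model accepts that shape before exporting is a dummy forward pass; the stand-in module below is hypothetical, but the check works for any nn.Module:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the trained classifier; any nn.Module works here.
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

# Batch of one dummy input with the same (C, H, W) shape used in export.
dummy = torch.randn(1, 3, 32, 32)
out = classifier(dummy)
print(out.shape)  # expect a (1, num_classes) logits tensor
```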
See the example Python file in our repository for a complete illustration of how to use the MicroMind class.