Building Models with SensiML Piccolo AI

Simple but Powerful AutoML for Edge AI

Piccolo AI is structured around a five-step workflow for building working edge IoT inference models tailored to your application from your own training dataset. These steps are summarized below.


Step 1: Collect and Label Data

Piccolo AI creates sensor-driven pattern recognition models from labeled time-series training data you supply. Data is imported into the tool as either .CSV files or .WAV audio files, with labeling handled in the Data Manager tab.
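As an illustration of what labeled time-series training data looks like, the sketch below writes a synthetic accelerometer recording with one label per row. The column layout here is hypothetical, chosen for clarity; it is not a schema mandated by Piccolo AI.

```python
import csv
import io
import math

# Illustrative only: a labeled time-series CSV with one sample per row.
# Column names are hypothetical, not a format required by Piccolo AI.
def write_labeled_csv(fileobj, n_samples=100, rate_hz=100):
    writer = csv.writer(fileobj)
    writer.writerow(["timestamp", "accel_x", "accel_y", "accel_z", "label"])
    for i in range(n_samples):
        t = i / rate_hz
        # Synthetic signal: first half labeled "idle", second half "active".
        label = "idle" if i < n_samples // 2 else "active"
        amp = 0.1 if label == "idle" else 1.0
        writer.writerow([f"{t:.3f}",
                         f"{amp * math.sin(2 * math.pi * 5 * t):.4f}",
                         f"{amp * math.cos(2 * math.pi * 5 * t):.4f}",
                         "1.0000",
                         label])

buf = io.StringIO()
write_labeled_csv(buf)
```

In practice each labeled region of the recording corresponds to a segment that the training step will learn to recognize.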

For enhanced dataset creation and management productivity, SensiML also offers Data Studio, a paid ML DataOps tool that complements Piccolo AI with features like streaming data capture directly from your embedded device, video annotation, multi-user collaboration, automated labeling, and much more.


Step 2: Query the Dataset

With a labeled dataset in place, Piccolo AI's Query feature defines the subset of data, sensors, and metadata used to train the model. This provides the flexibility to create multiple model variants based on specific factors in the overall dataset, as well as to partition the dataset into training and test data.
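Conceptually, a query filters labeled segments by label values and metadata, and partitioning then splits the selection into training and test sets. A minimal sketch in plain Python, where the field names and the `query_and_split` helper are hypothetical illustrations rather than Piccolo AI's API:

```python
import random

# Hypothetical sketch of what a query does conceptually: select labeled
# segments matching a metadata filter, then partition into train/test sets.
# The "label"/"metadata"/"subject" fields are illustrative, not a real schema.
def query_and_split(segments, label_values, metadata_filter,
                    test_fraction=0.2, seed=0):
    selected = [s for s in segments
                if s["label"] in label_values
                and all(s["metadata"].get(k) == v
                        for k, v in metadata_filter.items())]
    rng = random.Random(seed)      # deterministic shuffle for repeatability
    rng.shuffle(selected)
    n_test = int(len(selected) * test_fraction)
    return selected[n_test:], selected[:n_test]  # (train, test)

segments = [
    {"label": "idle",   "metadata": {"subject": "A"}},
    {"label": "active", "metadata": {"subject": "A"}},
    {"label": "idle",   "metadata": {"subject": "B"}},
    {"label": "active", "metadata": {"subject": "A"}},
    {"label": "idle",   "metadata": {"subject": "A"}},
]
# 4 segments match subject "A"; one quarter of them are held out for testing.
train, test = query_and_split(segments, {"idle", "active"},
                              {"subject": "A"}, test_fraction=0.25)
```

Filtering on metadata such as subject or device is what enables the "multiple model variants" use case: the same labeled dataset can feed several differently scoped queries.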


Step 3: Build the Model

Piccolo AI provides a powerful array of options for converting labeled sensor data into functioning ML models. The simplest approach is to use one of the pre-defined Pipeline Templates for common use cases. In these templates, the key model pipeline parameters have already been selected and optimized; all that remains is to run them against your supplied dataset.

For use cases not already covered by a pre-defined template, you can create your own pipeline from scratch and optionally create a new template based on your custom pipeline configuration.

Every Piccolo AI model is backed by a comprehensive, customizable pipeline that processes raw sensor data into features and produces classification or regression output on the edge device.

The intuitive graphical UI shows the processing stages and provides the means to add/remove steps and configure the parameters for each stage in the pipeline. Helpful hints on the impact of each parameter provide useful guidance for expert users and novices alike.


Step 4: Validate the Model

Once you have built a model, Piccolo AI makes it easy to assess its performance. Model visualization, analysis, and testing can be performed with a variety of tools before you ever need to integrate and test firmware empirically on the edge device itself. This saves valuable time and provides quick insight into model behavior and areas for improvement.

At this stage, the partitioned test data can be rapidly analyzed using automated test features, and new data can be imported as desired to assess model generalization.
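The kind of summary statistics such automated testing reports can be sketched as follows. This is an illustrative metrics computation (confusion matrix and overall accuracy on a held-out test set), not Piccolo AI's actual output format:

```python
from collections import Counter

# Illustrative model-validation metrics over a held-out test partition.
def confusion_matrix(y_true, y_pred):
    counts = Counter(zip(y_true, y_pred))
    classes = sorted(set(y_true) | set(y_pred))
    # Dense matrix keyed by (true label, predicted label).
    return {(t, p): counts.get((t, p), 0) for t in classes for p in classes}

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical ground-truth labels and model predictions on test segments.
y_true = ["idle", "idle", "active", "active", "active"]
y_pred = ["idle", "active", "active", "active", "idle"]
cm = confusion_matrix(y_true, y_pred)
# cm[("active", "active")] == 2; accuracy(y_true, y_pred) == 0.6
```

Off-diagonal entries of the confusion matrix point directly at the classes where additional training data or pipeline tuning would help most.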


Step 5: Generate Firmware Code

The final step in the Piccolo AI workflow is generating compact firmware code sized appropriately for your target device.

Piccolo AI is unique in its ability to support a broad array of MCUs, ML-optimized SoCs, and a variety of architectures. Simply choose from the list of supported platforms and define your desired code-generation parameters. Piccolo AI then generates a .ZIP archive containing your customized ML inference model for the chosen device as a binary, a linkable library, or C source code, as desired.

The Code Generation step also provides profiling estimates, with statistics on SRAM, stack, and flash memory requirements as well as processing latency for model execution.