Piccolo AI FAQs

Quick answers to the most common questions about SensiML’s
Free Open-Source AutoML Solution for Edge AI

Piccolo AI is an open-source project developed by SensiML that provides tools and libraries for implementing AI algorithms on resource-constrained devices. It extends SensiML’s mission of enabling intelligent IoT solutions by offering a community-driven platform for developers.

You can start by visiting the Piccolo AI repository at https://github.com/sensiml/piccolo, where you’ll find installation guides, documentation, and sample projects to help you set up and begin using Piccolo AI on your devices.

Piccolo AI supports a variety of popular microcontrollers and development boards commonly used in IoT projects, including MCUs based on the Arm Cortex-M architecture, the Espressif ESP32 (Tensilica Xtensa), x86 microprocessors, Atmel AVR, PIC, and RISC-V. A detailed list of compatible hardware is available on the project site.

Yes, Piccolo AI is free to use. It is released under the GNU Affero General Public License (AGPLv3), an open-source license that allows you to use, modify, and distribute the software in compliance with the license terms specified on the project site.

Piccolo AI’s AGPLv3 license permits commercial use, subject to its copyleft obligations. However, you should review the specific licensing terms provided on the project site to ensure compliance with all requirements for commercial applications.

Commercial users who want proprietary license terms and full enterprise-level support can license SensiML Analytics Studio, the commercial variant of Piccolo AI. It allows use without copyleft license obligations and provides premium features and various direct support options. More information on SensiML Analytics Studio can be found at sensiml.com or by contacting info@sensiml.com.

Contributions are welcome! You can contribute by submitting pull requests, reporting issues, or participating in discussions on the project’s GitHub repository. Contribution guidelines are available to help you get started.

Yes, comprehensive documentation and tutorials are available on the Piccolo AI project site. These resources include getting-started guides, API references, and example projects to help you effectively use the platform.

Piccolo AI primarily supports programming languages commonly used in embedded systems development, such as C and C++. Specific language support details are provided in the documentation.

Piccolo AI is more than an ML framework: it is a complete workflow toolkit that automates much of the complexity and effort involved in using common ML frameworks such as TensorFlow Lite or NNoM. Piccolo AI offers a combination of ease of use, flexibility, and efficiency tailored to resource-constrained devices. Its community-driven approach and seamless integration with SensiML’s ecosystem set it apart from other tools.

Support is available through the Piccolo AI community forums, mailing lists, and issue trackers on the project’s website and GitHub repository. You can engage with other users and contributors to get help and share insights.

No. Piccolo AI is a web server application that runs locally on your own hardware, not in the cloud. For those who prefer a turnkey, managed cloud SaaS service, SensiML Analytics Studio runs in the cloud. For more information on SensiML Analytics Studio, please refer to https://sensiml.com.

Determining exactly how much data is required to achieve a desired level of accuracy in a machine learning model is challenging, as it depends on the specific use case and the variability of influencing factors. Each application has its own set of contributing factors that affect model outcomes, and its own degree of data variance across those factors.

The role of the domain expert is crucial. They should:

  • Identify all potential influencing factors.
  • Rank these factors by their expected impact on the model’s performance.
  • Decide which factors can be controlled or eliminated outside of the model.
  • Develop a reasonable testing methodology based on these considerations.

We recommend an iterative approach:

  1. Start with a Small Dataset: Begin with a modest amount of data to build an initial model. This helps you gain insight into which factors contribute most to model errors.
  2. Analyze and Adjust: Use the initial model to understand errors and identify influential factors.
  3. Expand Data Collection: Based on insights gained, collect additional data focusing on the most impactful factors. Conduct this in stages, alternating between data collection, analysis, and model refinement.

Initial model development can sometimes be performed with a small number of samples—perhaps 50–100 per class—to gain preliminary insights. However, the specific amount can vary widely depending on the complexity of the problem and the desired accuracy. Generally, more data leads to better-performing and more reliable models.
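The staged approach above can be prototyped quickly before committing to a large data collection effort. The sketch below is illustrative only: it uses scikit-learn and synthetic data as a stand-in for real sensor recordings and for a Piccolo AI AutoML run, and the sample_per_class helper is a hypothetical placeholder for targeted field data collection. It starts at roughly 50 samples per class, checks per-class errors on held-out data, and adds data for the weakest class at each stage.

```python
# Illustrative sketch only: a scikit-learn stand-in for the staged data-collection
# loop described above. It is NOT the Piccolo AI API; the data pool, model, and
# sample_per_class() helper are hypothetical placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Synthetic "field data" pool standing in for labeled sensor recordings (3 classes).
X_pool, y_pool = make_classification(
    n_samples=6000, n_features=12, n_informative=8, n_classes=3,
    n_clusters_per_class=2, random_state=0,
)
X_pool, X_test, y_pool, y_test = train_test_split(
    X_pool, y_pool, test_size=0.3, stratify=y_pool, random_state=0
)

def sample_per_class(X, y, counts, rng):
    """Draw counts[c] random samples of each class c from the pool."""
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n, replace=False)
        for c, n in counts.items()
    ])
    return X[idx], y[idx]

# Step 1: start small, e.g. ~50 samples per class.
counts = {int(c): 50 for c in np.unique(y_pool)}

for stage in range(3):
    X_train, y_train = sample_per_class(X_pool, y_pool, counts, rng)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Step 2: analyze per-class errors on held-out data.
    per_class_f1 = f1_score(y_test, model.predict(X_test), average=None)
    print(f"stage {stage}: training counts={counts}")
    print(f"  per-class F1: {np.round(per_class_f1, 3)}")

    # Step 3: "collect" more data for the weakest class. Here it is drawn from
    # the synthetic pool; in practice it would be new, targeted field recordings.
    worst_class = int(np.argmin(per_class_f1))
    counts[worst_class] += 50
```

In a real project, the "collect more" step corresponds to gathering new recordings focused on the factors the domain expert ranked as most impactful, and the model-building step corresponds to rerunning the AutoML pipeline on the expanded dataset.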